Art’s Troubles

    I

    Duly acknowledging that the plural of anecdote is not data, I begin with some stories drawn from the recent history of liberal democracy.

    • In November 2010, the Secretary of the Smithsonian Institution removed an edited version of footage used in David Wojnarowicz’s short silent film A Fire in My Belly from “Hide/Seek: Difference and Desire in American Portraiture” at the Smithsonian’s National Portrait Gallery after complaints from the Catholic League, and in response to threats of reduced federal funding. The video contains a scene with a crucifix covered in ants. William Donohue of the Catholic League claimed the work was “hate speech” against Catholics. The affair was initiated by an article contributed to the Christian News Service, a division of the Media Research Center, whose mission is to “prove — through sound scientific research — that liberal bias in the media does exist and undermines traditional American values.”

    • In October 2015, Dareen Tatour, an Israeli Arab from a village in the Galilee, was arrested. She had written a poem: “I will not succumb to the ‘peaceful solution’ / Never lower my flags / Until I evict them from my land.” A video clip uploaded by Tatour shows her reading the poem, “Resist, my people, resist them,” against the backdrop of masked people throwing rocks and firebombs at Israeli security forces. The day after uploading it, she posted: “The Islamic Jihad movement hereby declares the continuation of the intifada throughout the West Bank…. Continuation means expansion… which means all of Palestine. And we must begin within the Green Line… for the victory of Al-Aqsa, and we shall declare a general intifada. #Resist.” In 2018, Tatour was given a five-month jail sentence. In May 2019, her conviction for the poem was overturned by the Nazareth District Court, but not the conviction for her other social media posts. The poem, said the court, did not “involve unequivocal remarks that would provide the basis for a direct call to carry out acts.” And the court acknowledged that Tatour was known as a poet: “freedom of expression is [to be] accorded added weight when it also involves freedom of artistic and creative [expression].” The Israeli Supreme Court rejected the state’s motion for appeal.

    • In 2017, the artist Sam Durant made a public sculpture, “Scaffold,” for the open grounds of the Walker Art Center in Minneapolis. It was an unpainted wood-and-metal structure, more than fifty feet tall, with a stairway that led to a platform with a scaffold. The work referred to seven executions between 1859 and 2006, including the execution in 1862 of thirty-eight Dakota-Sioux men. Protesters demanded the work’s destruction: “Not your story,” “Respect Dakota People!” “$200.00 reward for scalp of artist!!” Following mediation, the work was surrendered to the activists, who reportedly dismantled it, ceremonially burning the wood. Art critics endorsed the protest: “In general it’s time for all of us to shut up and listen.” “White Americans bear a responsibility to dismantle white supremacy. Let it burn.” The artist himself denied that he had been censored. “Censorship is when a more powerful group or individual removes speech or images from a less powerful party. That wasn’t the case. I chose to do what I did freely.”

    • In April 2019, three Catholic priests in the Polish city of Koszalin burned books that they said promote sorcery, including one of J.K. Rowling’s Harry Potter novels, in a ceremony that they photographed and posted on Facebook. The books were ignited as prayers were said and a small group of people looked on. The priests justified the ceremony with passages from Deuteronomy (“The graven images of their gods shall ye burn with fire”) and Acts (“Many of them also which used curious arts brought their books together and burned them before all men”). In August of the same year, a Roman Catholic pastor at a school in Nashville, Tennessee, banned the Rowling novels: “These books present magic as both good and evil, which is not true, but in fact a clever deception. The curses and spells used in the books are actual curses and spells; which when read by a human being risk conjuring evil spirits into the presence of the person reading the text.”

    • In August 2019, the release of the film The Hunt, in which “red state” Americans are stalked for sport by “elite liberals,” was cancelled. Donald Trump had tweeted: “Liberal Hollywood is Racist at the highest level, and with great Anger and Hate! They like to call themselves ‘Elite,’ but they are not Elite. In fact, it is often the people that they so strongly oppose that are actually the Elite. The movie coming out is made in order to inflame and cause chaos. They create their own violence, and then try to blame others. They are the true Racists, and are very bad for our Country!” The studio explained: “We stand by our film-makers and will continue to distribute films in partnership with bold and visionary creators, like those associated with this satirical social thriller, but we understand that now is not the right time to release this film.” Nine months later, with a new marketing campaign, the film duly appeared. The director explained: “The film was supposed to be an absurd satire and was not supposed to be serious and boring…. It’s been a long road.”

    • In Germany, a Jewish activist has been litigating to have a thirteenth-century carving of the Judensau, or “Jewish pig,” an infamous trope of medieval antisemitism, removed from the outer wall of the main church in Wittenberg. A memorial plaque installed in November 1988, containing, in Hebrew, words from Psalm 130, “Out of the depths, I cry to you,” does not satisfy the litigant. The district court ruled that the continued presence of the carving did not constitute evidence of “disregard for Jews living in Germany.” The judgment was upheld this year by the Higher Regional Court: the presence at the church of both a memorial to the Holocaust and an information board that explains the Judensau as part of the history of antisemitism justified retaining the carving. The campaign to remove the carving has Christian clerical support: “The Judensau grieves people because our Lord is blasphemed. And also the Jews and Israel are blasphemed by showing such a sculpture.” A local Jewish leader took a different position: “It should be seen within the context of the time period in which it was made,” he argued. “It should be kept on the church to remind people of antisemitism.”

    • Two years ago the artist Tomaz Schlegl built a statue of Trump in Moravce, Slovenia: a twenty-six-foot-tall wooden structure with a mechanism that opened Trump’s red-painted mouth, full of pointy teeth. The artist explained that the figure has two faces, like populism: “One is humane and nice, the other is that of a vampire.” He had designed the statue, he said, “because people have forgotten what the Statue of Liberty stands for.” The statue was not really Trump, he insisted: “I want to alert people to the rise of populism and it would be difficult to find a bigger populist in this world than Donald Trump.” It was burned down in January 2020. The mayor of the town, deploring the arson, commented: “This is an attack against art and tolerance…. against Europe’s fundamental values.”

    There is something arbitrary about this group of stories — others could have been chosen, without any loss of coherence in the picture of contemporary artistic freedom. There was the campaign against Dana Schutz’s painting of Emmett Till at the Whitney Museum, against Jeanine Cummins’ novel American Dirt, against Woody Allen’s film deal with Amazon, which was cancelled, and against his memoir, which was cancelled by one publisher but published by another. There was the decision by the National Gallery of Art and three other major museums to delay until at least 2024 the Philip Guston retrospective planned for 2020, so that “additional perspectives and voices [can] shape how we present Guston’s work” (museum-speak for “we will submit our proposals to a panel of censors”). And though they are all recent stories, the larger narrative is not altogether new. In 1999, Mayor Rudolph Giuliani took exception to certain works in an exhibition at the Brooklyn Museum, notably Chris Ofili’s painting The Holy Virgin Mary. The mayor relied on a newspaper report: the Virgin was “splattered” with elephant dung, the painting was offensive to Catholics, the museum must cancel the show. (The museum offered to segregate some pictures and withdraw Ofili’s, but the mayor responded by withholding funds and terminating the museum’s lease. The museum injuncted him; the city appealed; it then dropped the appeal. The Jewish Orthodox Agudath Israel intervened on the mayor’s side.) In 2004, the Dutch filmmaker Theo van Gogh was shot dead in Amsterdam by Mohammed Bouyeri, a 26-year-old Dutch-born Muslim who objected to the film Submission that Van Gogh had made earlier that year, with Ayaan Hirsi Ali, about violence against women in Islamic societies; the assassin left a note to Hirsi Ali pinned by a knife to the dead man’s chest. And in 2005, in Denmark, there occurred the cartoons affair. In response to an article about a writer’s difficulty in finding an illustrator to work on a book about Mohammed, the newspaper Jyllands-Posten published twelve editorial cartoons, most of them depicting him. There was no immediate reaction. The Egyptian newspaper Al Fagr republished them, with no objection. Five days later, and thereafter, there were protests by fax, email, and phone; cyber-attacks, death threats, demonstrations, boycotts and calls to boycott, a summons to a “day of rage,” the withdrawal of ambassadors, the burning of the Danish flag and of effigies of Danish politicians, the exploding of car bombs, appeals to the United Nations, and deaths — about 250 dead in total and more than 800 injured. A decade later, when similar cartoons were published in the French satirical magazine Charlie Hebdo, its staff was massacred in its offices in Paris.

    There is more. Behind each story, there stand others — behind the Allen stories, for example, there are the Polanski story and the Matzneff story. And behind those, some more foundational stories. In 1989, the Ayatollah Khomeini issued his fatwa against Salman Rushdie and his novel The Satanic Verses, which had already been burned in Muslim protests; there followed riots and murders, and the writer went into hiding for years. (The threat to his life subsists.) Also in 1989, the Indian playwright and theater director Safdar Hashmi was murdered in a town near Delhi by supporters of the Indian National Congress Party; the mob beat him with iron rods and police batons, taking their time, unimpeded. In the United States in those years, there occurred, among other depredations against literature and the visual arts, the cancelling of the radio broadcast of Allen Ginsberg’s poem Howl; the campaign against Martin Scorsese’s film The Last Temptation of Christ; the political, legal, and legislative battles over Robert Mapplethorpe’s The Perfect Moment and Andres Serrano’s Piss Christ, and over Dread Scott’s What is the Proper Way to Display the United States Flag?; the dismantling of Richard Serra’s site-specific Tilted Arc; and the campaign against Bret Easton Ellis’ novel American Psycho. There were bombings, boycotts, legislation, administrative action, the ripping up on the Senate floor of a copy of Piss Christ by a senator protesting the work on behalf of “the religious community.”

    Not all these stories have the same weight, of course. But taken together these episodes suggest that new terms of engagement have been established, across political and ideological lines, in the reception of works of art. The risks associated with the literary and artistic vocation have risen. New fears, sometimes mortal fears, now deform the creative decisions of writers and artists. Literature and the visual arts have become subject to a terrible and deeply illiberal cautiousness. (As a Danish imam warned the publisher of the cartoons, “When you see what happened in Holland and then still print the cartoons, that’s quite stupid.”) The interferences with what Joseph Brodsky called literature’s natural existence have grown brutal, overt, proud. We have witnessed the emergence of something akin to a new censorship conjuncture.

    There are ironies and complications. This new intolerance of, and inhibition upon, literature and the visual arts has arisen in the very era when the major ideological competitor to liberalism collapsed, and with it a censorship model against which liberal democracies measured their own expressive freedom. Or, more precisely and ironically, in the era when the fall of the Berlin Wall and the massacre in Tiananmen Square occurred within months of each other — the former exemplifying the collapse of tyranny, the latter its reassertion. When China conceived the ambition to become the major economic competitor of the capitalist liberal democracies, it also initiated a censorship model to which over time the greatest private corporations of these same liberal democracies would defer. Since artworks are also products that sell in markets — since filmmakers need producers and distributors, and writers need publishers and booksellers, and artists need galleries and agents — they are implicated in, and thus both enabled and constrained by, relations of trade and the capitalist relations of production. Corporations will both accommodate censoring forces and be their own censors. As their respective histories with the Chinese market show, the technology corporations tend to put commercial interests before expressive freedoms. And that is another irony: this assault on art took place even as the Internet, and then the World Wide Web, was invented, with its exhilarating promises of unconfined liberty. But the new technology was soon discovered to have many uses. As Rushdie remarked in Joseph Anton, his memoir of his persecution, if Google had existed in 1989 the attack on him would have spread so swiftly and so widely that he would not have stood a chance.

    And all the while a new era of illiberalism in Western politics was coming into being, for many reasons with which we are now wrestling. The year 1989 marked the moment when liberalism’s agon ceased to be with communism and reverted instead to versions of its former rivals: communitarianism, nationalism, xenophobia, and religious politics. New illiberal actors and newly invigorated illiberal communities asserted themselves in Western societies, as civil society groups came to an understanding of a new kind of political activity. So if one were to ask when art’s new troubles began, one could answer that they began in and around that single complex historical moment known as 1989. And these contemporary art censorship stories differ from older arts censorship stories in significant ways.

    II

    All these stories are taken from the everyday life of liberal democracies, or more or less liberal democracies. In not one of these stories does an official successfully interdict an artwork. There are no obscenity suits among them. With just one exception, there are no philistine judges, grandstanding prosecutors, or meek publishers in the dock. Customs officials are not active here, policing borders to keep out seditious material. There are no regulators, reviewing texts in advance of publication or performance. So how indeed are they censorship stories at all? We must reformulate our understanding of censorship, if we are to understand the censorship of our times.

    “Censorship” today does not operate as a veto. It operates as a cost. The question for the writer or the artist is not, Can I get this past the censor? It is instead, Am I prepared to meet the burden, the consequences, of publication and exhibition — the abuse, the personal and professional danger, the ostracism, the fusillades of digital contempt? These costs, heterogeneous in everything but their uniform ugliness, contribute to the creation of an atmosphere. It is the atmosphere in which we now live. The scandalizing work of art may survive, but few dare follow.

    Censorship today, in its specificity, must be grasped by reference to these profiles: the censoring actors, the censoring actions, and the censored. With respect to the censoring actors, we note, with pre-1989 times available as a contrast, that there has taken place a transfer of censoring energy from the state to civil society. In the West, certainly, we do not see arrests, raids, municipal and central government actions such as the defunding or closure of galleries, prosecutions and lawsuits, or legislation. Insofar as the state plays a part, it tends to act as a neutral spectator (in its executive function) or as a positive restraint on censorship (in its judicial function). In respect of civil society, however, there has occurred a corresponding empowerment of associations, activists, confessional groups, self-identified minority communities, and private corporations. The censors among the activists are driven by the conviction that justice will be advanced by the suppression of the artwork. Their interventions have a self-dramatizing, vigilante quality. Artworks are wished out of existence as an exercise of virtue. The groups are very diverse: “stay-at-home moms” and “military veterans” (disparaged by “liberal Hollywood”), policemen (disparaged by rapper record labels), social justice warriors, and so on. Their censorings do not comprise acts of a sovereign authority; they have a random, unpredictable, qualified character, reflecting fundamental social and confessional divisions. As for the corporations, when they are not the instrument of activists (Christian fundamentalists, say), their responses to activists, foreign governments, and so on tend towards the placatory.

    Correspondingly, with respect to censoring actions, we find a comparable miscellany of public and private (when not criminal) initiatives in place of administrative and judicial acts of the state. The activists, right and left, have available an extensive repertory of tactics: demonstrations, boycotts, public statements, digital denunciations, petitions, lethal violence, serious violence, and threats of violence, property destruction, disruptions and intimidations, mass meetings, marches, protester-confrontations, pickets, newspaper campaigns. As for corporations, the tactics, again, have become familiar: refusals to contract, and terminations of employment, publishing, and broadcasting contracts already concluded; editing books and films in accordance with the requirements of state authorities in overseas markets.

    In all these instances, the wrong kind of attention is paid to an artwork — hostile, disparaging, dismissive. There is no respect for the claims of art; there is no respect for art’s integrity; there is no respect for artmaking. Art is regarded as nothing more than a commodity, a political statement, an insult or a defamation, a tendentious misrepresentation. If it is acknowledged as art, it is mere art — someone’s self-indulgence, wrongly secured against the superior interests of the censoring actors. All these actions are intended to frighten and burden the artist. And so artists and writers increasingly, and in subtle ways, become self-censoring — and thereupon burden other artists and writers with their own silent example. Self-censorship is now the dominant form of censorship. It is a complex phenomenon and hard to assess — how does one measure an absence? But recall the Jewel of Medina affair of 2008, the novel about one of the Prophet Mohammed’s wives that was withdrawn by Random House because it was “inflammatory.” Who now would risk such an enterprise? Instead we are, with rare exceptions, living in an age of safe art — most conforming to popular understandings of the inoffensive (or of “protest”), a few naughtily transgressive, but either way without bite.

    As for the censored: what we have described as the given problem of censorship — the heterogeneity of civil society censoring actors; the retreat of the state from censoring activity; the collapse of the Soviet Union as the primary adversary of a liberal order; the emergence of China as a powerful, invasive, artworld-deforming censor; the absence of any rule-governed censorship — has meant, among other things, that the pre-1989 defenses against censorship, such as they were, no longer work. They were deployed in earlier, more forensic times, when the state, the then principal censoring actor, was open to limited reasoned challenge, and when civil society actors were subject to counter-pressure, and were embarrassable. Essential values were shared; appeals could be made to common interests; facts were still agreed upon.

    *

    Art now attracts considerable censoring energy. There is no other discourse which figures in so many distinct censorship contexts. It attracts the greatest number of justifications for censorship. We may identify them: the national justification — art, tied up with the prestige of a nation, cannot be allowed to damage that prestige; the governing-class justification — artworks must not be allowed to generate inter-group conflict; the religious justification — artworks must not blaspheme, or cause offense to believers; the capitalist justification — artworks must not alienate consumers, or otherwise damage the corporation’s commercial interests.

    Yet the properties of art that trouble censors are precisely the properties that define art. An attack on a work of art is thus always an attack on art itself. What is it about artworks that gets them into so much trouble? We begin with the powerful effect that works of art have on us. We value the works that have these effects — but they also disturb us, and the trouble that art gets into derives from the trouble that art causes. The arts operate out of a radical openness. Everything is a subject for art and literature; everything can be shown; whatever can be imagined can be described. As the literary critic Terence Cave observed, fiction demands the right to go anywhere, to do anything that is humanly imaginable.

    Artworks are playful, mischievous; they perplex, and are elusive, constitutively slippery, and therefore by their nature provocative. Art serves no one’s agenda. It is its own project; it has its own ends. This has an erotic aspect: playfulness has its own force, its own drive. Art preys upon the vulnerabilities of intellectual systems, especially those that demand uniformity and regimentation. Art is disrespectful and artists are antinomian. The artist responds to demands of fidelity with Non serviam: I will not serve. He or she is consecrated to a resolute secularity and an instinct to transgress boundaries: the writer makes the sacred merely legendary, the painter turns icons into portraits. (The religious artist does not altogether escape this effect.) It makes sense to say, “I am a Millian” or “I am a Marxist,” but it does not make sense (or not the same sense) to say, “I am a Flaubertian” or “I am a Joycean.” The opinions that may be mined are typically amenable to contradictory interpretations — they invite contradictory interpretations. And let us not overlook the obvious: parody and satire, comedy and farce, are aesthetic modes. Laughter lives inside literature.

    Identity politics tends to be fought out on the field of culture because identity is among art’s subjects. Art confers weight and depth upon identity; and so it is no wonder that identity groups now constitute themselves in part through their capacity for censoriousness. Race politics, gender politics: art has a salient place in them, as do art controversies, in which the various communities pursue cultural grievances by denying legitimacy to certain symbolic expressions. Identity warfare is attracted to art in much the same way that class warfare is attracted to factories. Politics in our day has taken a notably cultural turn, and so art has become a special focus of controversy. Of course, low politics also plays a role in these outrages against art — the Ayatollah’s fatwa was a power-play against Saudi hegemony, and Giuliani’s protest against a sacrilegious painting was a means of distracting Catholics from his pro-choice record. But the problem cannot be reduced to such politics alone.

    Unlike artists, art cannot be manipulated. Specifically, works of art are immunized against fake news, because they are all openly fabricated. Novels are openly fictional: that is their integrity. The artist is the last truth-teller. As already fictional accounts, artworks cannot be subverted by “alternative facts,” and as forms of existence with a distinctively long reach, and a distinctive durability, they are more difficult to “scream into silence” (Ben Nimmo’s phrase for the phenomenon described by Tim Wu as “reverse censorship,” a pathology of internet inundation). But this is hardly to say that works of art — and their makers — are not vulnerable. Artworks are accessible: books can be burned, canvases can be ripped, sculptures can be pulled down. They are also susceptible to supervision — by, among others, pre-publication “sensitivity readers.” One measure of censorship’s recent advance is the phenomenon of “publishable then, but not publishable now,” and “teachable then, but not teachable now,” and “screenable then, but not screenable now.” The essayist Meghan Daum relates that when she asked a professor of modern literature whether he still taught Lolita, he replied, “It’s just not worth the risk.” This widespread attitude is of course an attack on an essential aspect of art’s existence — its life beyond the moment of its creation.

    III

    To whom should we look for the defense of art?

    Not the state. Of course, the state should provide effective protection for its citizens who are writers and artists. But the state cannot be art’s ally, in part because of its neutrality and in part because of its partisan tendencies. Even in those states which have a tradition of government patronage of the arts, the state must not take sides on aesthetic or cultural questions. Art criticism is not one of the functions of government, and the history of art under tyrannies, secular and religious, amply shows why not. Moreover, the state, or more specifically government, has its own interests that will most certainly interfere in the free and self-determined development of art and literature: its desire for civil peace, which may cause it to intervene in cultural controversy; its privileging of religious freedom, as defined by the confessional communities themselves; its desire for the soft power that art of a certain kind gives; its majoritarian prejudices; and so on.

    What is more, the arguments for state involvement in the arts usually exclude too much art, preferring instead national projects with social and economic benefits, which are usually inimical to art’s spirit. Whatever the individual artist’s debts and responsibilities to her society, as an artist she works as an individual, not a member, not a citizen. It has often, and correctly, been said that the social responsibility of the writer is to write well. When the conditions of artistic freedom are present, the artist represents only her own imagination and intellect. John Frohnmayer, the chairman of the National Endowment for the Arts during the culture wars of the late 1980s and early 1990s, misstepped when he wrote: “We must reaffirm our desire as a country to be a leader in the realm of ideas and of the spirit.” That is not an ambition that any writer or artist should endorse.

    Not the right. Simply stated, there is no decent theory of free speech (let alone free art speech) that has come from the illiberal right in any of its various, and often contradictory, reactionary and conservative versions. We will not find a defense of free intellectual and artistic speech in the counter-Enlightenment, or in the illiberal reaction to the French Revolution, or in the conservative and reactionary movements of the late nineteenth and early-to-mid twentieth centuries. The very notion of free speech is problematic to those traditions. They promote authority’s speech over dissenting speech. They reject the Kantian injunction, sapere aude, dare to know; they reject its associated politics, the freedom to make public use of one’s reason. They esteem reason’s estrangement — prejudice — in all its social forms: superstition, hierarchy, deference, custom.

    In the United States, to be sure, the situation is different. There is, after all, the First Amendment. Conservative articulations of freedom of speech are frequent and well-established. But if one subtracts from their positions what has been borrowed from the liberal order and what is merely self-interested (it is my speech I want heard), is there anything that remains upon which the arts may rely for protection? Let us disaggregate. There are the increasingly noisy and prominent activists of the alt-right, the Trumpists, the neo-Confederates, the militia groups at the Charlottesville “Unite the Right Free Speech March,” and the like. In the matter of free speech they are the merest and most discreditable of opportunists: we should not look to the champions of statues of Confederate generals to protect free speech. Then there are the publicists and the pundits, the Fox commentators, the Breitbart journalists, and the like. They are part borrowers, part opportunists. We should not look for a renewal of free speech thinking to the authors of The New Thought Police: Inside the Left’s Assault on Free Speech and Free Minds; Bullies: How the Left’s Culture of Fear and Intimidation Silences Americans; The Silencing: How the Left is Killing Free Speech; End of Discussion: How the Left’s Outrage Industry Shuts Down Debate, Manipulates Voters, and Makes America Less Free (and Fun); Triggered: How the Left Thrives on Hate and Wants to Silence Us, and so on. Their defenses of free speech altogether lack integrity; they are merely ideological (and often paranoid) in their polemics.

    And then there are the lawyers, the right-wing academics, think tanks, and lobby groups, the administrators, legislators and judges, and the corporations. The widely noticed “turn” of the political right towards the First Amendment has led only to its redefinition in the interests of conservative grievances and objectives: to the disadvantage of liberal causes (anti-discrimination measures, exercise of abortion rights free of harassment, university “speech codes,” and so on); to the disadvantage of trade unions (compulsory deduction of fees enlists employees in causes they may not support); to the benefit of for-profit corporations (conferring on “commercial speech” the high level of protection enjoyed by “political speech”); to the general benefit of right-wing political campaigns (disproportionately benefited by the striking down of campaign finance law in the name of corporate — or “associational” — free speech); and to the benefit of gun-rights activists (advancing Second Amendment interests with First Amendment arguments). So, again: part borrowers, part opportunists. These three prominent currents of American conservatism, united by their self-pity and their pseudo-constitutionalism, have nothing to contribute to a climate of cultural and artistic freedom. In the matter of a principled free speech doctrine, we can expect nothing from the right.

    Not the left. There is no decent theory of free speech, let alone free art speech, that has come from the left. (Rosa Luxemburg is an exception.) There are only leftist critiques of liberal doctrine, external and immanent. In the external critique, liberal rights are mere bourgeois rights; they are a fraud, of instrumental value to one class, worthless to the other class. This criticism was pioneered by Marx, and successive generations of leftists have regularly rediscovered it in their own writing. A recent example is P.E. Moskowitz’s book The Case Against Free Speech, in which we read that “the First Amendment is nearly irrelevant, except as a propaganda tool … free speech has never really existed.” In the immanent critique, liberal rights are recognized but must be dramatically enlarged, even if they put the greater liberal undertaking in jeopardy; certainly, received liberal thinking about free speech is too tender to commercial interests, while weakening the interests of non-hegemonic groups (including artists and creative writers). Free speech requires campaign finance laws (to enable effective diversity of expressed opinion), restrictions on speech that inhibits speech, and so on.

    While liberals may safely dismiss the external critique, they are obliged to engage conscientiously with the immanent critique. The elements of greatest relevance to free art speech relate to two discourses deprecated by the immanent critique. One is “hate speech,” the other is “appropriation speech.” It is frequently argued that minority groups characterized or addressed in a “hateful” way should not have their objections defeated by any free speech “trump.” Jeremy Waldron has given the most compelling (not least because it is also the most tentative) liberal critique of hate speech. He understands hate speech in terms of “expression scrawled on the walls, smeared on a leaflet, festooned on a banner, spat out onto the Internet or illuminated by the glare of a burning cross.” What then of literature and the visual arts? Here he is somewhat casual, writing in passing of “an offensive image of Jesus, like Andres Serrano’s Piss Christ.” Regarding “appropriation speech”: here the censor arrives on the scene as a territorialist, and addresses the over-bold artist: “This art, this subject, this style, etc. is mine. Stay in your lane. You cannot know my situation; you lack epistemic authority. You strain for authenticity, but it will always elude you.” This cultural nativism owes an unacknowledged debt to Herderian values and counter-Enlightenment ideas: the spiritual harmony of the group, the irreducible individuality of cultures, the risks of contamination and theft, and so on — in many ways a rather unfortunate provenance.

    Sometimes hate speech and appropriation speech combine: “In your mouth, this is hate speech.” Sometimes, the one is treated as an instance of the other: “Appropriation speech is hate speech.” Though this hybrid is at least as old as Lamentations (“ani manginatam,” “I am their song,” the author writes of his vanquishers), it is largely a post-1989 phenomenon. Against it, the literary artist, the visual artist, is likely to respond with Goethe: “Only by making the riches of others our own do we bring anything great into the world.” Notwithstanding all this, however, and the broader switching of sides with the right on free speech (which is often overstated), the left remains an occasional ally.

    Not the confessional communities. Religions are constitutively, even if not centrally, coercive systems. Within those systems of conformity, there are censorship sub-systems, protective of divinity and its claims, of institutions and clergy, of practices and dogmas. The master prohibition of these sub-systems relates to blasphemy. Religions are coercive of their own members, and in many cases also of non-members. Whether or not they hold political power, and no religion has been averse to it, they hold communal and social and cultural power. They certainly do not respect artistic autonomy, though they have permitted great artists to flourish in the doctrinal spaces in which they were commissioned to work. There is no decent theory of free speech that has come from any of the major religions. Certainly not from the monotheisms: they take ownership of speech. It is sacred both in its origins (“In the beginning was the Word”) and in its most elevated uses (Scripture, worship). Its lesser and other uses are denigrated or proscribed. Historically speaking, freedom of speech developed as a revolt against ecclesiastical authority.

    Religions are invested in art, and they control it when they can — both their own art and the art of non-members. They subordinate the artist to confessional and institutional purposes. Christianity does so the most — its aesthetics are theological: just as God the Father is incarnated in God the Son, so God the Son is incarnated in the Icon, writes the art historian and philosopher Thierry de Duve. The Christian work of art, though it may be breathtakingly beautiful, affirms the theological and historical truth of the Christian story. The model religious artist is the Biblical artisan Bezalel, and the model religious artwork is his sumptuous construction of the Tabernacle in the desert. “Bezalel” means, in Hebrew, “in God’s shadow.” The general stance of the church towards art may be termed Bezalelian. “Artists avoid idolizing the arts,” writes a contemporary Bezalelian, “by resisting any temptation to isolation and instead living in the Christian community, where worship is given to God alone.”

    Religion has too many red lines; it is too used to being in charge; it cleaves to the non-negotiable (“the Bible is our guide”); it must have the last word. And when the drive to subordinate art is denied, when the desired orthodoxy is frustrated or broken, a strong sense of grievance is generated, and this in turn leads repeatedly to scandalized protests — to the burning of books and the destruction of artworks. In a word, to iconoclasm, in its old and strict sense, as the doctrinally justified destruction of art with heterodox meanings, or the use of force in the name of religious intolerance.

    To be sure, confessional communities are ardent in defense of Bezalelian artists — of wedding photographers who refuse to photograph, and bakers who refuse to make cakes for, same-sex marriages. And there is some truth in the argument that religion and art have common adversaries in the everyday materialism of consumerist societies, and could make common cause against everyday philistinism and banality. The history of the association of religion with beauty is long and marvelous. But in the matter of securing artistic freedoms, the confessional communities are simply not reliable. Certainly they have not been allies in recent times.

    Not writers and artists. Though they are anti-censorship by vocation; though they named censorship (“Podsnappery,” “Mrs. Grundy”); though much of the best anti-censorship writing in modern times came from them (Wilde, Orwell, Kundera, Sinyavsky), advocacy is for writers and artists an unfair distraction and burden. It takes them away from artmaking. In 1885, the novelist George Moore, in Literature at Nurse, wrote: “My only regret is that a higher name than mine has not undertaken to wave the flag of liberalism.” Called upon to defend their work, artists get understandably irritated: “I don’t feel as though I have to defend it,” answered Ofili regarding The Holy Virgin Mary. “The people who are attacking this painting are attacking their own interpretation, not mine.” Moreover, their work is often opaque to them. It always holds more meanings than they know, than they designed. Byron cheerfully admitted as much: “Some have accused me of a strange design / Against the creed and morals of the land, / And trace it in this poem every line: / I don’t pretend that I quite understand / My own meaning when I would be very fine…” And artists are often poor advocates in their own cause. They too readily concede the principle of censorship; they pursue vendettas, and they grandstand; they turn political; they contradict themselves; they advance bad arguments, which sometimes they mix up with better ones; they misrepresent their own work. What is more, they frequently undermine in their art the defenses that are commonly deployed on their behalf.

    “But every artist has his faults,” Maupassant once said to Turgenev. “It is enough to be an artist.” In this censoring moment, that should be the beginning of wisdom.

    IV

    This leaves the liberals. Will they rise to the defense of literature and the visual arts? Freedom of speech, after all, is integral to a liberal society. As a historical matter, free speech is liberalism’s signature doctrine. It is embraced by all the major liberal thinkers; it is incorporated into all the international legal instruments that comprise the liberal order. Execrations of censorship are to be found everywhere in canonical liberal discourse — in Milton, in Jefferson, in Mill, in Hobhouse, in William James. Censorship stultifies the mind, they all affirm. It discourages learning, lowers self-respect, weakens our grasp on the truth and hinders the discovery of truth. Liberals typically figure prominently among the champions of oppressed authors and banned books; they tend to recoil, with a certain reflex of contempt, when in the presence of affronted readers or minatory censors.

    But there is a problem. Liberalism has traditionally cast a cold eye on literature and the visual arts, and has been peculiarly unmoved by their vulnerability. Literary and artistic questions have not been pressing for liberals, in the matter of free speech. We may even speak of a failure within liberalism to value literature and the visual arts, or to value them in a way that translates into a defense of them within a broader defense of free speech.

    To begin with, there is an historical circumstance that helps to explain this peculiar neglect. The defense of free speech in the liberal tradition is significantly tied up with the political virtue of toleration of religious dissent. This is reflected, for example, in the First Amendment to the American Constitution: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press …” The free exercise of religion requires the free exercise of speech. Liberalism was tied at its inception to the defense of confessional dissent. Starting from a position in which loyalty to the state requires loyalty to its ecclesiastical institutions (in Protestant states) or to the ecclesiastical institutions favored by it (in the case of Roman Catholic states), liberals asked: Can the state accommodate citizens who wish to give their loyalty to it, but not to its ecclesiastical institutions? They gave several reasons for their affirmative answer. Tolerance is itself a theological matter. It derives from a respect for the individual conscience. It is not just a defense of theological dissent; it is itself an act of theological dissent. But none of this, of course, has anything to do with the welcoming of art, of artists, of artmaking. What was applied to religious works, practices, beliefs, and collectives was not applied to literary works, practices, or collectives. No question of tolerance in respect of the creative writer or artist arose for liberalism, within its own historical trajectory. (Indeed, when illiberal elements sought to exercise a censoring influence over art, they were often accommodated by liberals.)

    This is not to say that liberal arguments for free speech are limited to religion. But if we look at those arguments, we search in vain for literature and the visual arts. Instead we are instructed, quite correctly, that the validity of a proposition cannot be determined without exposing it to challenge, and that a silenced opinion may be true, and that a false opinion may yet contain a portion of the truth, and that true opinions may become mere prejudices if we are never forced to defend them — and so all opinions must be permitted. We are also told, again correctly, that free speech is the precondition for controlling abuses and corruptions of state power, since it empowers citizens to act upon the government, and impedes the freedom of governments to act on citizens. It reverses the flow of power: governments do not limit citizens; citizens limit governments. And also that free speech is the precondition of deliberative democracy: autonomous citizens cannot act autonomously, that is, weigh the arguments for various courses of action, if they are denied access to relevant facts and arguments. The promoting of public discussion requires a vigorous, generous free speech regime. The liberal tradition also includes, particularly in Humboldt and Mill, the ideal of self-realization, which broaches the large realm of free communication and free culture.

    But where does art figure in all this? Almost nowhere. Alexander Meiklejohn, the American philosopher and educator who wrote authoritatively about freedom of speech, did observe that “the people need novels and dramas and paintings and poems, because they will be called upon to vote” — a defense of the arts, but not in their integrity, a utilitarian defense. (He denied that the people needed movies, which are engaged in the “enslavement of our minds and wills.”) We can instead trace a liberal indifference, and in some cases even a liberal hostility, toward literature and the visual arts. How many liberals would have endorsed Schiller’s declaration that “if man is ever to solve the problem of politics, he will have to approach it through the problem of the aesthetic, because it is only through Beauty that Man makes his way to Freedom”? The great liberal thinkers have not found artworks to be useful texts to think with. Indeed, the liberal complaint that the literary sensibility has a reactionary character dates back to the French Revolution, its adversaries and its partisans. Writers such as Paine and Cobbett directed some of their most venomous attacks against a literary imagination whose origin they saw in a morally bankrupt, libertine, aristocratic culture. The confrontation thus framed, the decades that followed merely deepened it, with creative writers fully returning fire. Poets and novelists made nineteenth-century liberalism their declared enemy (Baudelaire, Dostoyevsky); twentieth-century liberalism was modernism’s declared enemy (Joyce is the honored exception); reactionary politics and avant-garde art are taken to be, in M.H. Abrams’ phrase, mutually implicative. This ignores, of course, the enlistment of the arts in the modern revolutions; but liberals are not revolutionaries.

    It is therefore little wonder that when one surveys the modern intellectual history of liberalism, there are very few liberal thinkers for whom, in the elaboration of a theory of free speech, literature and art figured. I count two, both of them outside the Anglo-American tradition: Benjamin Constant and Alexis de Tocqueville. Here is Constant, in a direct affirmation of inclusiveness: “For forty years I have defended the same principle — freedom in all things: In religion, in philosophy, in literature, in industry, and in politics.” Constant defended this freedom against “the majority,” which in his view had the right to compel respect for public order and to prohibit expression of opinion which harmed others (by provoking physical violence or obstructing contrary opinions) but not to otherwise restrict expression. Constant was himself a man of letters, a novelist of the poignancies of love — a Romantic, who brings to mind Victor Hugo’s description of Romanticism as “liberalism in literature.”

    As for Tocqueville: in Democracy in America he wrote about democracy’s inhibiting effects on fresh and vigorous thought. “There is a general distaste for accepting any man’s word as proof of anything. So each man is narrowly shut in himself, and from there, judges the world.” This does not lead to debate. Each man mistrusts all others, but he is also no better than others. Who then to trust? “General sentiment,” by which Tocqueville means the tyrant he most fears in an open society: “public opinion.” He famously observed that “I know of no country where there is less independence of mind and true freedom of discussion than in America.” But then he went on to offer a brief account of the significance of literature in the growth of democratic sentiment, and a longer account of the type of literature that a democratic society might foster. Literature, he believed, is a counter to despotic tendencies. This is not literature passing as political theory; this is literature in its aesthetic integrity.

    Constant and Tocqueville — but not Mill. This is surprising, since it was literature — the French writer Marmontel in particular — that saved Mill from his nervous breakdown and alerted him to the emotional limitations of utilitarianism. And yet it is Mill’s name that we must give to liberalism’s defeat in its first major test in respect of arts censorship. It was in 1858 that he completed On Liberty, one of the very scriptures of modern liberalism — but which, in this context, must be remembered as the great work on freedom of expression in which the philosopher failed to address three major setbacks to artistic freedom that happened even as he was writing it: the trial for obscenity (“an outrage to public morality and religion”) of Flaubert’s Madame Bovary, the trial for obscenity (“an insult to public decency”) of Baudelaire’s Les Fleurs du Mal, and the passage in Parliament of the Obscene Publications Act, which allowed the British state to seize and destroy works of art and literature without even giving their makers a right to be heard. It must also be added that in this failing Mill had successors in the weak response of liberals to the attack on Rushdie: not only were they few in number, but their defenses of the novelist rarely included defenses of the novel, of the dignity of his aesthetic project, of the autonomy of art and its right to blaspheme. The same blindness to art and its rights disfigured many liberal interventions in the American controversies of the late 1980s and early 1990s. They attacked Jesse Helms and company for many good reasons; just not this one.

    *

    We have discovered a problem. Even liberals are not good on literature and the arts, and this matters now more than ever before. How might things improve? We could attempt to give liberals reasons why they should take literature and the visual arts seriously. We might make the case for a liberal literature — the case advanced finely by Martha Nussbaum in her discussion of The Princess Casamassima, which she reads as contending for “liberalism as the guiding principle in politics,” taken by her to include “a demand for artist’s freedom of expression.” But what about works of art that contend for a conservative politics? No, the case for artistic freedom must be made only on the grounds of art as such. Writers and artists will not find relief from their troubles unless art itself, aesthetic expression as such, is explicitly inducted into the class of protected free speech.

    There are many reasons to do so. I will give only some. Art is a human good. An attack on literature and art is an attack on capacities and practices that constitute human beings as human and allow us to flourish. When we attack writers and artists, we attack ourselves. We are species-constituted by our artmaking and art-experiencing capacities; we realize ourselves by our artmaking and art-experiencing practices. The arts aid mental development and social harmony; they offer representations of a transfigured world. Art contributes to our understanding of ourselves and of the world; art makes it easier for us to live peaceably together. That is to say, it makes us more transparent to ourselves, and it makes the world more transparent, as well as less threatening and more beautiful. Artworks are goods whose desirability cannot adequately be expressed in individual terms — that is to say, they are “public” or “communal” goods.

    We must recognize (and value) the form of existence of the writer and the artist. People who pursue the literary and artistic life are pursuing an estimable life, and the fruits of their pursuit, their literary and art works, should be secure. They have a “plan,” in the liberal sense of the word; in more heroic and Tocquevillian terms, they seek to forge their own destiny. They certainly pursue a conception of the good life. That is, to make use of a distinction drawn by Jeremy Waldron, they are to be held to account not by reference to what they have done, but rather by reference to what in general they are doing. Free speech occupies a special place in this “plan.” It is the precondition to the artistic vocation. None of this has anything to do with the seeking of privileges. That some will pursue this plan in a degraded manner is not to any point. The pornographer stands to the art world as the fundamentalist stands to the religious world. Each is reductive, blinkered, unthinking — but it would be an inconsistency to grant toleration to the one and deny it to the other.

    The makers of art (and the audiences for art) merit recognition as a distinct group. Artists are not best imagined as individuals under contract; they should be recognized as members of their own communities, with their own practices and institutions. Artmaking is the characteristic activity of art-communities. And if artmakers are in their own way a group, then they, and their art, merit the protective attention that identity and religious groups and their products typically receive in liberal societies. Indeed, the liberal state should take positive steps to ensure that artmaking flourishes when threatened by confessional or other “identity” groups. In certain respects, the art community is the ideal community, and a model for all given communities; the free speech that it needs is the free speech that we would all need for our ideal existence.

    The art community is many communities. None is coercive. All are time-bound: specific formations do not last. They have no transcendental quality. They are fully secular. They are self-constituting: they do not require myths of origin. They are non-exclusive. They are unboundaried; there are no impassable barriers to entry. They are open to the world; they address the world; their solicitations are gentle and may always be refused. Literary and art communities are communities for anti-communitarians. They can never be a menace to society, in the sense that fanatical communities, or fanatical members of other communities, are a menace.

    Art is a liberal good: to defend literature today is to defend liberalism, not as an ideology or a political doctrine but by modelling the benefits of its freedoms. How do we name the members of a liberal society? One way is to call them citizen-readers. Among art’s forms and kinds, it is the novel — with its many standpoints, its diversity of human types, its provisionality, its interest in ambiguity and complexity — that comprises the distinctive art form of a liberal democratic society. To make war on the novel really is to make war on liberal democracy.

    Literature and the visual arts have so many things in common with liberal societies. They are both committed to a certain process of making explicit. “The liberal insistence,” writes Waldron, “[is] that all social arrangements are subject to critical scrutiny by individuals, and that men and women reveal and exercise their highest powers as free agents when they engage in this sort of scrutiny of the arrangements under which they are to live.” He goes on, “society should be a transparent order, in the sense that its workings and principles should be well-known and available for public apprehension and scrutiny.” Does this not describe the work of the writer? And they are both reflexive: for the liberal, identities should be treated as a matter for continuous exploration, receiving at best only conditional and contingent statement. And they both tend to the agonistic. By the agonistic, I mean interests or goods in irresolvable conflict — a conflict that cannot be settled and cannot be won. There can be no resolved triumph of one over the other. Each side’s understanding of the other is bound up with its self-understanding; neither recognizes itself in the account given of it by the other. Liberal societies exist to accommodate agonistic conflicts, and art exists to explore them. It also has its own agons — with religion, with philosophy, with science, with history. The work of art, said Calvino, is a battleground.

    Both liberal societies and the arts are committed to a flourishing civil society. Precisely because literature, in its difference from other writing, solves no problems and saves no souls, it represents a commitment to the structural openness of a wholly secular space, one which is not programmatic, not driving towards any final, settled state in which uniformity rules. The artwork, like the open society, is promiscuous in the invitation that it extends. It is available to all; all may enjoy it; all may interpret it; all may judge it. Both liberal societies and the arts, in sum, have the same necessary condition. That condition is freedom. Illiberal societies prescribe a literature and a visual art that serve both as a diversion (“bread and circuses”) and as an instrument of legitimation (“soft power”). But liberal societies need the existence of a free literature and art. Works of art are liberal public goods.

    From time to time, and in our time almost daily, events occur that prompt the question: Is liberalism equal to the challenge? I do not believe that the censorship of literature and the visual arts is the worst evil in our world, but it is a bad thing, and there is too much of it around. In these censoring times, liberals should strive to give to aesthetic expression an honored place in their theory of free speech.

    A New Politics, A New Economics

    The major political phenomenon of the past decade has been a popular revolt against the economic arrangements that took form at the end of the twentieth century. The revolt is global. It takes both left- and right-wing forms, and often presents itself as overtly anti-immigrant or otherwise ethnonationalist, but the undercurrent of deep economic dissatisfaction is always there. Inequality in the developed world has been rising steadily for forty years now. The aftermath of the financial crisis of 2008 activated the politics of economic populism: in the United States, the rise of Bernie Sanders, Elizabeth Warren, and other politicians on the economic left, plus the Tea Party movement and to some extent Donald Trump and his acolytes who rail against globalization, Wall Street, and the big technology companies. In Europe, there is Brexit, new nativist parties (even in Scandinavia), and the Five Star movement in Italy, among other examples. What all of these have in common is that they took the political establishment utterly by surprise. And all of them regard the establishment, and any consensus that it claims to represent, with contempt.

    The dynamic of this moment brings to mind the politics of the early twentieth century. During the nineteenth century, succeeding waves of the industrial revolution created (along with enormous and highly visible wealth) a great deal of displacement, exploitation, and want, which at first manifested itself in radical rebellions — in the United States they took the forms of agricultural populism and labor unrest. This was followed by a series of experiments in translating economic discontent into corrective government policy. Then as now, popular sentiment and electoral politics came first, and the details of governance came later. Most of the leading intellectuals of the Progressive Era were deeply uncomfortable with populism and socialism. The young Walter Lippmann, in Drift and Mastery, called William Jennings Bryan, three-time presidential nominee of the Democratic Party, “the true Don Quixote of our politics.” But Lippmann and his colleagues shared the view that private and institutional wealth had become more powerful than the state and that the imbalance had to be righted, so they set about devising alternate solutions. We are now in the early stages of a similar period of forging a new political economy for this still young century. It is going to be a large, long-running, and not very orderly task, but those who don’t take it seriously are going to find themselves swept away.

    It shouldn’t be necessary, but it probably is, to stipulate that economies are organized by governments, not produced naturally through the operations of market forces. National economies do not fall into a simple binary of capitalist or not; each one is set up distinctively. Government rules determine how banks and financial markets are regulated, how powerful labor unions are, how international trade works, how corporations are governed, and how battles for advantage between industries are adjudicated. These arrangements have a profound effect on people’s lives. The current economic discontent is a revolt against a designed system that took shape with the general assent of elite liberal and conservative intellectuals, many of whom thought it sounded like a good idea but were too closely focused on other issues to pay close attention to the details. To begin the discussion about a new system requires first developing a clearer understanding of the origins of the current one.

    In an essay in 1964 called “What Happened to the Antitrust Movement?,” Richard Hofstadter noted that for half a century, roughly from 1890 to 1940, the organization of the economy was the primary preoccupation of liberal politics. Hofstadter meant antitrust to be understood as a synecdoche for a broader concern with the response to industrialism in general and the rise of the big corporation in particular. He was not mourning liberalism’s shift in focus; instead, he was typical of midcentury liberal intellectuals in thinking that the economic problems that had preoccupied the previous generation or two had been solved. And that view of the postwar decades still resonates even all these years later, in the economically dissatisfied political present. During last year’s presidential campaign, Donald Trump’s “Make America Great Again” and Joe Biden’s “Build Back Better,” both backward-looking slogans, shared the embedded assumption that at some time in the past, roughly when Hofstadter was writing, the American economy worked for most people in a way that it doesn’t now. But was that really true? And if it was, what went wrong?

    Most people would probably say that the economy really was better back in the mid-1960s — that it had earned, through its stellar performance, the conventional view that it was working well — and that what changed was globalization: in particular the rise of the United States’ defeated opponents in the Second World War, Japan and Germany, and previously unimaginable advances in communications and data-processing technology, and the empowerment of Saudi Arabia and other oil-producing Arab countries. But if that is what most people think, it highlights a problem we have now in addressing political economy, which is a belief that economic changes are produced by vast, irresistible, and inevitable historic forces, rather than by changes in political arrangements. That is a momentous mistake. A more specific account of the political origins of the mid-century economy, and of what blew it apart, is a necessary precondition for deciding what to do now.

    In the presidential election of 1912, Theodore Roosevelt ran on a program he called “The New Nationalism,” and Woodrow Wilson on “The New Freedom,” with the third major candidate, William Howard Taft, having a less defined position. This was the heart of the period when economic arrangements were the major topic of presidential politics. (The perennial Socialist candidate, Eugene Debs, got his highest-ever total, 6 percent of the vote, in 1912.) Advised by Lippmann and other Progressive intellectuals, Roosevelt proposed a much bigger and more powerful federal government that would be able to tame the new corporations that seemed to have taken over the country. Wilson, advised by Louis Brandeis, called for a restoration of the economic primacy of smaller businesses, in part by breaking up big ones. It is clear that Hofstadter’s sympathies, as he looked back on this great debate, were on Roosevelt’s side; he considered Wilson’s position to be sentimental, impractical, and backward-looking, in much the way that Lippmann had thought of Bryan’s economic inclinations as quixotic. Wilson won the election, but Roosevelt probably won the argument, at least among intellectuals. (Politicians, because they represent geographical districts, have a built-in incentive to be suspicious of economic and political centralization.) The years immediately after the election of 1912 saw the advent of the Federal Reserve, the income tax, and the Federal Trade Commission — early manifestations of the idea that the national government should take responsibility for the conduct of the American economy.

    The argument between Roosevelt and Wilson never entirely went away. During the New Deal, when the economic role of the federal government grew beyond Theodore Roosevelt’s wildest dreams, there were constant intramural debates within economic liberalism, between centralizers such as Adolf Berle, the highly influential Brain Truster without portfolio, and decentralizers such as Thurman Arnold, the head of the antitrust division of the Justice Department. Despite major defeats, notably the Supreme Court’s striking down of the National Industrial Recovery Act in 1935, the centralizers generally had the better of it, especially after the American entry into the Second World War, when the federal government essentially took over industrial production and also set wages and prices, with an evidently happy result.

    After the war, Berle and his younger allies, John Kenneth Galbraith among them, celebrated the taming of the once menacing industrial corporation, thanks to the forceful and long-running intervention of government. Big corporations remained economically dominant, but because they were now answerable to a higher authority, they no longer ran roughshod. It is important to note that these were not the benign, socially responsible corporations one hears touted today — they were forced to be socially responsible, by government legal and regulatory decree. The liberal debate about corporations in the postwar years was primarily sociological and cultural, over whether they had eroded the American character by engendering a pervasive “conformity” — not over whether they exploited workers or dominated government. The economy was growing in ways that — in sharp contrast to today’s economy — conferred benefits at all income levels. As Hofstadter put it, “The existence and the workings of the corporations are largely accepted, and in the main they are assumed to be fundamentally benign.” Only conservatives, he asserted, with their resistance to modernity, failed to accept the reality of corporate dominance.

    Partly because the main economic problems seemed at that point to have been solved, and partly because mainstream midcentury liberal thought was almost unimaginably unaware of national problems such as race, women’s rights, and the environment that demanded urgent attention, most liberals turned their energies toward those neglected non-economic topics. Hofstadter wrote that antitrust “has ceased to be an ideology and has become a technique, interesting chiefly to a small elite of lawyers and economists.” But that glosses over a crucial element in the development of economic liberalism.

    Keynesian economics, which was in its infancy during the heyday of the New Deal, had become so prestigious by the 1960s that it was the conventional way of thinking about government’s role in addressing economic problems — not just among economists, but among anyone who had ever taken an undergraduate economics course. For Keynesians, the most potent economic tools at government’s disposal were adjusting the money supply, tax rates, and overall government spending — not directly controlling the economic activities of corporations, through antitrust, regulation, and other means. (Adolf Berle used to boast that half the industries in America were regulated by federal agencies, and it was inevitable that the other half would be soon.) So the kind of government economic role advocated by a long line of liberal intellectuals, even as they squabbled over the details, fell out of the conversation.

    It is always easy to see the vulnerabilities of a regime in retrospect. The mid-twentieth-century economic order depended on the corporation to provide a range of social benefits — good wages and salaries, employment security, pensions, health care, social services, and a measure of personal identity — that in most other developed nations would likely have come from government, or the church, or a stable local community. The American political system didn’t seem willing to expand the New Deal into a full-dress social democracy, and corporations were available to perform these quasi-state functions — but that meant they were bearing a lot of weight. They did not command the loyalty of those whom they did not enfold in their warm embrace, so they had a limited number of political allies.

    Even more important, the corporation-based social order rested on the assumption of the corporations’ economic invulnerability. Corporations had to be able to afford the social burdens being imposed on them by government. What could cut into the economic resources that this would require? Three possibilities come to mind: a demand by shareholders that they get a higher return; a weakening of customer loyalty; or competition from other businesses. Adolf Berle’s classic work (with Gardiner Means), The Modern Corporation and Private Property, which appeared in 1932, declared that corporations’ shareholders, their supposed owners, had no power because they were so widely scattered: how could the hundreds of thousands of individual owners of stock in AT&T force management to do anything? After the Second World War, Berle only increased his estimate of the power and stability of the largest corporations, and of the irrelevance of their shareholders. So that was one potential threat assumed away. Galbraith agreed, and made the claim of corporate immortality even more capacious by observing that corporations were also invulnerable to fluctuations in consumer taste, because advertising had become so effective. There went another threat. And much of the rest of the world was still flat on its back after the Second World War, which took away the threat of competition, at least from abroad. Berle and others regularly predicted the demise of Wall Street — heavily constrained by regulation since the advent of the New Deal — as a force in the American economy, because big corporations, ever larger and more powerful, would have so much capital of their own that they would no longer need access to the financial markets. Another common claim in that era was that innovation would, and could only, come from large corporations, because only they had the resources to operate substantial research divisions.

     

    The corporate social order, taken for granted by many millions of people who lived within it, and not particularly appreciated by political thinkers on the left or the right, began to come apart spectacularly in the 1980s — which was also, not coincidentally, when the rise in inequality began. The forcing mechanism for this was the “shareholder revolution” — a great reorienting of the corporation’s priorities toward increasing its asset value in the financial markets (and therefore its shareholders’ wealth), and away from the welfare of its employees or of society. Most people credit Milton Friedman with launching the shareholder revolution, specifically with an article in the New York Times in 1970 called “The Social Responsibility of Business Is to Increase Its Profits.” This suggested an ideal for corporations that was almost precisely opposite to Adolf Berle’s, but it didn’t propose specific techniques for achieving it. The true chief theoretician of the shareholder revolution was Michael C. Jensen, a University of Chicago-trained conservative economist, who neatly reversed Berle’s life’s work by making the re-empowerment of the shareholder his own life’s work. 

    Jensen proposed such mechanisms as putting a corporation under the control of a single purchaser, at least temporarily, instead of a widely dispersed body of small stockholders (that’s the private equity business), and paying chief executives primarily in stock options rather than salary, so that they would do whatever it took to increase their companies’ share prices. Such measures would permit the corporation to attend to its new sole purpose. Jensen ceaselessly promoted these and related ideas through the 1970s, 1980s, and 1990s, with highly influential publications (he is the co-author of one of the most cited academic papers of all time), his popular teaching at Harvard Business School (whose graduates shifted from being corporate employees to corporate dismantlers), and public appearances before Congressional committees and elsewhere. This coincided with a great wave of mergers, acquisitions, and buyouts that remade corporate America in ways that stripped out the social and political functions that had been imposed on it since the New Deal.

    Since his work had large political as well as economic implications, Jensen may stand as the most under-recognized public intellectual of the late twentieth century. But his influence, like that of anyone whose ideas have consequences, was substantially a matter of context. He arrived on the scene at a time when the kinds of institutional arrangements on which the midcentury political economy rested had fallen deeply out of fashion. The large economic disruptions of the final quarter of the twentieth century, when they are not attributed to inevitable market forces, are often laid at the feet of an organized corporate-conservative effort to remake the political economy, beginning, perhaps, with the future Supreme Court Justice Lewis Powell’s famous memo to the U.S. Chamber of Commerce in 1971 suggesting the building of a new conservative infrastructure of think tanks, publications, and campus leadership training institutes. But this misses a couple of important elements. One is the tension between corporations and finance — that is, between Main Street and Wall Street. When a company like IBM or General Electric dropped its de facto guarantee of lifetime employment and its company-paid defined benefit pensions, this was “corporate” only in the sense that corporations were now being run for Wall Street investors, not in the sense of benefiting Organization Man-style corporate employees.

    Liberalism, too, was changing, and many of these economic rearrangements happened with liberal (or at least elite liberal) assent. For one of many possible examples, consider that the crusade against the airline-regulating Civil Aeronautics Board, now of blessed memory, which had to approve every route and every fare (and one of whose creators was Adolf Berle), was led by Senator Ted Kennedy, with another future Supreme Court Justice, Stephen Breyer, as his chief advisor. It had the enthusiastic support of Alfred Kahn, the liberal economist who was Jimmy Carter’s appointee as the euthanasiast chairman of the CAB. (Ralph Nader, then probably the leading liberal activist in Washington, was another participant in this crusade.) There was little or no liberal opposition to the supersizing of Wall Street, which mirrored the downsizing of the industrial corporation; the shareholder revolution would not have been possible without dozens of regulatory changes that enabled it, which didn’t attract much notice because at that moment economic deregulation was seen as an uncontroversial good cause. Much of the newly emerging economic Brahmin class was populated by elite liberals: graduates of Ivy League universities who worked at McKinsey or Goldman Sachs or Google, proudly and profitably “disrupting” the old economy for a living. People at such companies became an important part of the funding base of the Democratic Party, playing the role that political machines and unions had previously played. The old instinct that the way to solve problems is by making corporatist bargains among government, labor, and business had faded away. A fluid, fast, transaction-oriented society, which proposed instead to solve problems by dismantling institutional arrangements and putting more innovative, efficient ones in their place, was now the ideal.

    I don’t want to sound facilely dismissive of these ideas. I was entranced by them when I was young. In those days one still saw people who had served in the New Deal strolling through downtown Washington — Tommy Corcoran, Ben Cohen, Joe Rauh. They appeared to me not as honored participants in a supremely successful political and economic order, but as ghosts, men who had outlived their times. “Neoliberal” had not yet become a dirty word. Books proposing to save liberalism by jettisoning its traditional formations, such as Theodore Lowi’s The End of Liberalism and Mancur Olson’s The Rise and Decline of Nations, were mesmerizing. Liberal heterodoxy was in the air. Why couldn’t liberalism off-load all those clunky appurtenances of its past, the labor unions and the interest groups and the government agencies, and just solve problems? Why did we have to defend to the death vast, wasteful, expensive programs such as Social Security and Medicare? Why couldn’t we be less political, more efficient, smarter, more attuned to real needs and less to powerful constituencies? Didn’t the sluggish economy need the kind of jump-start that deregulation and a general embrace of markets could provide?

    Maybe the Civil Aeronautics Board had indeed outlived its usefulness. The problem was that this broad antinomian logic was applied everywhere. With hardly a peep except from self-interested industry groups, the United States ended broadcast regulation, ushering in the age of hot-blooded talk radio and cable news. It set up the Internet to be an unregulated information platform that enriched a handful of immensely wealthy and powerful companies and made no effort to distinguish between truth and falsity. It declined to regulate the derivatives markets that brought down the global economy in 2008. In all those cases, policies that sounded good by the standards of the newly dominant form of economic liberalism wound up having old-fashioned libertarian effects that should have been predictable: more inequality, greater concentration of wealth and power, more disruption of social and economic arrangements that had been comfortable and familiar for many millions of people. The flaws in the new system were not immediately evident to its designers, because they were prospering. But many of the less well educated, more provincially located, and less securely employed eventually made their vehement dissent known through their voting behavior. That is where we are now.

    People get to choose how to involve themselves in politics, as participants and as voters. It would be wildly unrealistic to demand that everyone’s politics be “about” some topic that seems preeminent to you, or that their politics align with an outsider’s balance-sheet determination of their interests. If you are reading this, it’s likely that Donald Trump cut your taxes. Did you vote for him? Or did you vote because of longstanding party loyalty, or your values, or the way the candidates struck you, or what you think the American government should stand for at home and abroad? It is especially foolhardy to imagine that politics can be about economics rather than, say, race, or gender, or religion, or culture — or that it can be rigorously empirical, based on meticulous scientific determinations of the truth. Still, because democratic politics is meant to determine the activities of the state, and much of what the state does is allocate resources, in the end economics runs through just about everything in politics, including matters that do not present themselves as economic.

    Racism would not command the public attention it does if blacks and whites were economically indistinguishable, and most of the proposed remedies for racism entail big changes in how governments get and spend their money. Nativism may express itself as hatred of the other, but it takes root among people who see immigrants as competitors for jobs and government benefits. The bitter controversies over the pandemic have been powered by the highly different ways it has affected people’s health and employment depending on where they stand in the class system. So, even when politics is not obviously about economics, it is still about economics. To address the deep unfairness of the current economic order requires political solutions, but they have to be political solutions that meet people where they are — that do not seem distanced and abstract. That will be the only way to build popular support strong enough to enact them.

    The fundamental test of the American political economy ought to be whether it can offer ordinary people the plausible promise of a decent life, with a realistic hope of economic progress and their basic needs met: health care, a good education, protection from want, security in old age. The country has failed that test for a generation. Until it succeeds economically and socially, it will not function well politically. And to function well politically requires addressing an enormous economic problem, which can come across as dry and statistical, in ways that feel immediate and palpable enough to inspire passionate engagement.

    I am proposing a great remaking of the political economy as a primary task over the next generation. At this moment the most useful next step in that project is not to produce a specific policy agenda, but instead to outline an approach to politics that could create widespread popular support for the larger project. In recent years the gap between voters and technically oriented policymakers who are genuinely concerned about inequality has been very wide — wide enough for pure grievance to take up the political space that ought to be devoted to fixing the problem. I will suggest three guiding principles for how to proceed.

    Work through institutions. Consequential human activity takes place through institutions. It has been an especially self-destructive element of recent thought to exaggerate the disadvantages of “bureaucracy” and other aspects of institutional life and to overestimate how much can be accomplished without them. This turn has coincided with the severe deterioration of the traditional bulwark institutions of American liberalism, such as labor unions and churches. Media and messaging meant to influence public opinion, organizing campaigns conducted only on social media — these are the snack foods of politics, far less effective over the long term than building institutions that have more conventional functions like structured meetings, ongoing rituals, and planned campaigns aimed at specific government policy outcomes.

    It is a familiar irony that the opponents of an inclusive economy have often used anti-institutional rhetoric while building up powerful institutions of their own. During the twenty-first century, we have seen a great consolidation of one economic sector after another, always made possible by favorable political arrangements, which only become more favorable as the sector gains more economic, and therefore political, power. To curb the power of big tech, big finance, big pharma, and big agriculture will require countervailing institutions. Institutions (which are not the same thing as communities) are necessary to achieve change, and also to instantiate change. Awakening consciences and changing minds are noble and necessary, but such advances lack staying power unless they lead to the creation of consequential new laws and institutions.

    Address inequality upstream, not downstream. It is deeply ingrained in our economic thinking that the solution to inequality is redistribution. That way, in theory, a society can have the best of both worlds: the efficiency, flexibility, and growth associated with unimpeded markets, plus the corrections to markets’ inequities that only the state can provide. The master tool for redistribution is a progressive income tax system, but there are plenty of more specific tools that address economic injustice in the same spirit: unemployment benefits for people who lost their jobs, food stamps for the hungry, retraining for people whose workplace moved abroad. All of these instruments have in common that they offer a remedy after something bad has happened to people, rather than trying to prevent something bad from happening to them in the first place.

    A decade ago the political scientist Jacob Hacker suggested “pre-distribution” instead of redistribution as a model. In this way of thinking, the aim is to throw some sand in the gears of pure market function, so that it cannot so easily disrupt people’s lives. Strong labor laws are a good example: they boost workers’ pay and benefits and make it more difficult to fire them, which is far more dignity-promoting than the Silicon Valley model of economic justice, with no unions, a gig economy, and the cold solace of a universal basic income for those who experience misfortune. Another is restrictions on absolute free trade and outsourcing of employment. Another is making it more difficult for private equity companies to load expenses onto the companies they acquire, which puts the acquired companies under irresistible pressure to break whatever compact they had with their employees.

    Most working people are focused on the particular place where they live and the particular company where they work. A politician’s signal that she understands this and will try her best to keep those arrangements in place will be far more meaningful than a promise to pursue abatements after people’s lives have been pulled apart. Economic policymakers for years have regarded policies with this goal as the province of petty rent-seeking politicians, the kind who created the Smoot-Hawley tariff back in 1930: all they can accomplish is to create a static, declining society; real economic policy has to be redistributionist and Keynesian. It is a longstanding part of conservative lore that liberals scored a landmark and unfair victory when they torpedoed the Supreme Court nomination of Robert Bork in 1987 — but during the borking of Bork, his liberal opponents barely mentioned what was by far his most influential belief, which was that economic efficiency and consumer benefit were the only proper concerns for government as it regulated companies’ economic activities. They barely mentioned it because they had accepted it. That same year, the New York Times published a lead editorial titled “The Right Minimum Wage: $0.00.” (On the day this essay is going to press, the Times’ lead editorial is titled “Let’s Talk About Higher Wages.”) The economic program on which Joe Biden successfully ran for President, heavily emphasizing saving jobs and keeping small businesses open, was by far the most pre-distributionist by a Democratic candidate in decades. The tide is only just beginning to turn, and the Democrats’ relatively new economic constituencies are not going to be pushing the Biden administration to reinvent the party’s notion of an ideal political economy.

    Decentralize power. In 1909, in The Promise of American Life, which is still as good a framing device for twentieth-century American liberalism as one can find, Herbert Croly proposed that the country tack away from the political tradition of Thomas Jefferson and toward the tradition of Alexander Hamilton. In the present, it is necessary to be reminded of what Croly meant by that: to his mind, Jefferson was not primarily a plantation slaveholder, but an advocate for farmers, artisans, and other smallholders, and for localized government, and Hamilton was not primarily an immigrant who took his shot, but an advocate for centralized and nationalized government, and the father of the American financial system. For Progressives such as Croly, it was axiomatic that the world had become far too complex for a Jeffersonian approach to work. Like Theodore Roosevelt a few years later, Croly believed that the national government had to become bigger and more powerful — and also to employ technical, depoliticized expertise that would be beyond the capabilities of local governments. This way of thinking about government has an irresistibly powerful face validity for members of the category of people who would staff its upper ranks. Think about the coronavirus: wouldn’t you want trained public health professionals to have been in charge nationally, rather than governors of highly variable quality?

    Yet Croly’s position is a temptation to be avoided, for a number of reasons. Expertise is not, pace the insistence of the social-media mob and Fox News, merely a pretext for the exercise of power. Experts have both knowledge in their domains, and an obligation to set aside their pure, unruly human instincts and attempt to approach the world more dispassionately. They marshal evidence. They answer, rather than insult or stereotype, people who do not agree with them. That they operate with some degree of honor doesn’t make them infallible or supra-human, of course. Like everybody else, experts live in their own enclosed worlds, and they often operate on distinctive, non-universal, and not fully conscious assumptions that nobody they encounter ever challenges. Technocracy is not a guarantee of truth or wisdom. No matter how smart and epistemologically sophisticated they are, experts miss things. Over the past few decades, the list has been long: the collapse of the Soviet Union; the 2008 financial crisis; the dramatic rise and political empowerment of evangelical religion; the rise of populism. The problem with centralized, elite expert rule is not only that it creates an inviting target, but also that it requires a check on its power, a system built to incorporate alternative views. To paraphrase James Madison, expertise must be made to counteract expertise; and, in a democracy, experts must be prepared to respect and honor what the great majority of citizens who aren’t experts think.

    It is impossible to separate economic and political power in the way that the Progressives envisioned, and their present-day heirs still do. Great economic power, of the kind that the major technology and financial companies have today, requires favorable political arrangements; in return, it uses its economic power to enhance its political power. The gentle treatment that big finance and big tech have gotten from government, including from Democratic administrations, is closely related to their role as major political funders and employers of past and future high government officials. The federal government is no longer capable of functioning as a countervailing force to all elements of economic plutocracy at all times: a Democratic administration may be able to stand up to Koch Industries, but not to Google or Goldman Sachs.

    A far better vision for liberals should be of a pluralistic society that does not assume that one major element will be so automatically good that it should be super-empowered. Super-empowerment may be the ill that ails us the most. Over the past few decades, inequality has increased substantially not just for individuals, but for institutions. The top five banks control a higher percentage of assets than they ever have in American history. The gap between the richest universities and the struggling mass is greater. The great metropolitan newspapers of the late twentieth century — the Los Angeles Times and the Philadelphia Inquirer and the Chicago Tribune and so on — aren’t great anymore. Book publishing is in the hands of the “big four” houses. Five big companies dominate the technology business. If all these arrangements are working nicely for you personally, you should not take too much comfort from that. Think about what it would feel like if people you find abhorrent had control of these institutions — it is a much better guide than thinking about the system you would want when the good guys, by your lights, are in charge.

    Politics is the arena that allowed these inequalities to flourish, and politics will be how they can get corrected. You should think in particular about what kind of political system you would want, if the bad guys were winning. You would want checks on the power of the President and on the more politically insulated parts of the federal government, such as the Supreme Court and the Federal Reserve. You would want good state and local governments to have room to do what the national government can’t do or won’t do. You would want to prevent economic royalty, individual or corporate, from being able to control political outcomes. You would want Congress to have more power than the President, and the House of Representatives to have more power than the Senate. You would want minority groups to be organized enough to be able to impress their distinctive point of view on a majority that ignores it. In other words, squabbling, bargaining, self-interest, partisanship, and “gridlock” would be signs of political health, not dysfunction. Influence would come from the sustained effort it takes to be effective through democratic means, not from finding workarounds to open, participatory politics.

    That these are ways of structuring politics, not of assuring the victory of one side or of arriving at a policy, ought not detract from their urgency. Politics should make people feel heard and attended to. It should address pressing problems successfully. Politics manifestly is not doing those things now. If the way it is framed and conducted does not change fundamentally, democratic politics, which is to say, democratic society, will not be able to function properly. Tasks that are essential to powerful interests will get accomplished, but not tasks to which they are indifferent, even if they affect the welfare of vast numbers of people. Building a new politics will take a long time, because there is a lot to undo.

    On Playing Beethoven: Marginalia

    Interpretation? Some musicians have little patience for this word, while on the other side there is a recent surge of musicologists who strive to do it justice by elucidating its essence, its development, and its historical peculiarities. After a lengthy period of purely structural reasoning about musical works, topics such as psychology, character, and atmosphere are being considered again. Every tiny portamento or cercar la nota throughout the history of bel canto is being unearthed. Recapitulations are scrutinized with the help of the stopwatch in order to find out whether, why, and by how much they may exceed the scope of the exposition.

    The anti-interpreters consider all this to be a waste of time. All they ask for is a reliable edition of the score. The rest will be provided by their own genius. Here I would like to interpose and remind the reader that to decipher a score precisely and sympathetically is a much more demanding task than most musicians realize, and a more important one as well. Among the composers who had the skill to put on paper distinctly what they imagined, Beethoven is an outstanding example. Do not register his markings with one eye only: that will not provide you with the full picture. I am thinking of his dynamic indications in particular — Beethoven was well aware of where his crescendi and diminuendi should start or end. The metronome markings are another matter. The unhesitating adherence to Beethoven’s metronome figures even in the most dubious cases (Op. 106, Ninth Symphony) has resulted in performances that hardly leave any space for warmth, dolce, cantabile, for — in the words of the prescription in his Missa Solemnis — “from the heart — may it reach out to the heart” (von Herzen möge es wieder zu Herzen gehen). They also leave no room for Beethoven’s humor.

    While, in the past, it was the cliché of Beethoven the hero and the titan that was harmful to an appreciation of the variety of his music, the danger now comes from the predilection for breakneck speeds and virtuoso feats. Tempi are forced on the music instead of derived from it. My own experience has taught me to trust Beethoven’s markings — if not the metronome indications — almost completely, and to consider them important hints about tempo and atmosphere.

    The terms from largo to prestissimo that Beethoven uses to indicate tempo and character seem to me frequently more suggestive than metronome prescriptions. Listening to some contemporary performances, I find that the majority of allegros sound like presto possibile. The diversity of the tempi gets lost. The third movement of the Hammerklavier Sonata, called Adagio sostenuto, turns into an andante con moto. While the speed of the fugue (crotchet = 144) is technically feasible, it prevents the listener from taking in the harmonic proceedings. (For many pianists, playing too fast may come easier than slightly reining in the tempo.)

    Another bone of contention is the metronome’s unshakeable steadiness. There are musicians who do not permit themselves or their pupils to use a metronome because it purportedly contradicts the natural flexibility of feeling. Obviously music should breathe, and it presupposes, not unlike our spine and pulse, a certain amount of elasticity. Yet this does not hold true for all music: not only jazz and pop, but also a considerable part of twentieth-century music, would, without a rigorous tempo, be senseless. And there is another beneficial function of the metronome: it prevents progressive speeding up. Many young musicians are unaware of what they are doing to the tempo while practicing, and there are virtuosi who consider it their privilege to accelerate the pace while playing fast notes — a habit no orchestra or chamber ensemble could get away with.

    I cannot acquiesce in the widespread assumption that a soloist may indulge in all conceivable liberties, even the most outlandish ones, because he or she is neither a member of an ensemble nor the helpless prisoner of an orchestra. Quite a few soloists seem to adhere to the belief that only soloistic independence will issue in true music-making that emanates from their innermost interior, unfettered by the strait-jacket of ensemble playing. Any pianist who is about to play a Beethoven sonata should listen to a good performance of a Beethoven quartet — by, say, the Busch Quartet — in advance.

    And there is more to learn from the best conductors, singers, and orchestras than from all-too-soloistic soloists.

    Do you know the story of the eminent pianist who early on in his career was accused by a critic of playing semiquavers as if counting peas — with the result that, from then on, rhythmic steadfastness evaporated from his playing? Many years of appearing with orchestras and dealing with string quartets have confirmed my ideal of a rhythmic control that, in solo music, should never stray too far from ensemble playing. After all, the greatest piano composers — excepting Chopin and, in their young years, Schumann and Liszt — have all been ensemble composers as well, if not primarily. It seems highly unlikely that a composer should harbor two distinctly different concepts of rhythm and tempo, one for soloists, another for ensemble players. “Freedom” of playing should be confined to cadenzas, recitatives, and sections of an improvisatory nature. It goes without saying that Beethoven’s scores are neither entirely complete nor apt to be put into practice by a computer. To prepare the onset of a new idea, to give sufficient time to a transition, to underline the weight of an ending: these were self-evident matters that the performance of tonal music implied.

    Compared to the younger and short-lived Schubert, Beethoven had more time and opportunity to hear his own works performed, and to react to the performances. His hearing trouble was probably not so severe that it would have prevented him from perceiving certain tones and nuances. The Schuppanzigh Quartet, an institution that had already been associated with Haydn, accompanied his string quartet production to its very end. This was the first professional quartet in performance history, and it seems to have been available to Beethoven consistently. When Schuppanzigh stayed away from Vienna for a number of years, Beethoven halted his composition of string quartets, only to take it up again when Schuppanzigh returned. His quartet in E-flat Op. 127 was premiered within the series of “classical” chamber music concerts that Schuppanzigh inaugurated. This performance, however, turned out to be inadequate, and in due course several other performances with different players were organized to give connoisseurs the chance of getting better acquainted with such novel music. (The fact that this was feasible may have been due to the unparalleled triumph of Beethoven’s patriotic creations Wellington’s Victory and “Der glorreiche Augenblick,” or “The Glorious Moment,” which marked the peak of his popularity as well as the low point of his compositional output.)

    The profusion and the distinctiveness of Beethoven’s markings in the late string quartets did not result entirely from imagining them — they were connected to performance practice as well. Only in his fugues do we find a lack of detailed instructions. In these passages the players have to intervene and provide additional dynamic information, unless they are intent on drowning the listener of Beethoven’s “Grosse Fuge” in long stretches of fortissimo.

    Schuppanzigh’s concert series were mainly geared towards string quartets (regularly those of Haydn, Mozart and Beethoven), but they also included quintets, nonets and (“to divert the ladies”) piano trios. Solo piano works were hardly performed in public until Liszt invented the piano recital in London in 1840. Like Beethoven’s late quartets, his late piano sonatas became too difficult to be executed by domestic players. In order to tackle works such as the Sonata Op. 101 you had to be as proficient as Dorothea Ertmann, Beethoven’s much-admired pupil and friend to whom the sonata is dedicated. Works such as Op. 106 and Op. 111 were deemed unplayable. Only in the second half of the nineteenth century did they start to seep into musical consciousness, thanks mainly to the advocacy of Hans von Bülow.

    In spite of the commitment of performers such as Bülow, Artur Schnabel, Edwin Fischer, and Rudolf Serkin, the appreciation of the Diabelli Variations took considerably longer. Only recently has this magnum opus turned into a parade horse of many pianists as well as an endurance test for audiences that have now learned to sit through, and even relish, a work that is fifty-five minutes long and almost entirely in the key of C major. Among the reasons for this delay was the mythological misconception of the late Beethoven as “a loner with his God,” when in fact the profane was no less available to him than the sublime, the musical past no less than the musical present and future. In the Diabelli Variations, virtuosity and introspection, humor and gracefulness, cohabit under one roof.

    According to his assistant Schindler, Beethoven conceived these variations while “in a rosy mood,” and humor (“the sublime in reverse,” according to Jean Paul) reigns over wide stretches of the work. Wilhelm von Lenz, who took piano lessons from Liszt and became the author of the first detailed appreciation of all of Beethoven’s works, calls Beethoven “the most thoroughly initiated high priest of humor.” Conveying humor in music had been one of Haydn’s great achievements, and Beethoven linked up with it. Of course, the performer of humorous music should never appear to be forcing the comical. In the Diabelli Variations, the wit ought to become apparent, as it were, by itself, while the enraptured and enigmatic pieces provide the depth of perspective.

    Beethoven had a predilection for placing the ridiculous next to the sublime. The bottomless introspection of Variation XX is followed by a piece in which a maniac and a moaner alternate. After concluding his work on the Sonata Op. 111, his last piano sonata, Beethoven turned to finishing his Diabelli Variations, the theme of which is motivically related to the Sonata’s Arietta. Once more, the sublime and the “sublime in reverse” face one another.

    It has been claimed that Beethoven’s late style narrows down into the subjective and esoteric. What I find in it, however, is expansion and synthesis. Opposites are forced together, refinement meets bluntness, the public is paired with the private, roughness stands next to childlike lyricism. Does the inclusion of the Diabelli Variations in the wider repertory suggest that, these days, we have learned to listen to Beethoven’s late music with open ears? What we can take for granted is that no amount of familiarity with these pieces is going to erase their tinge of mystery.

    The George Floyd Uprising

    I

    Overnight mass conversions to the cause of African American rights are a rare phenomenon in America, and, even so, a recurrent phenomenon, and ultimately a world-changing phenomenon. The classic instance took place in 1854 in Boston. An escaped slave from Virginia named Anthony Burns was arrested and held by United States marshals, who prepared to send him back into bondage in Virginia, in accordance with the Fugitive Slave Act and the policies of the Franklin Pierce administration. And a good many white people in Boston and environs were surprised to discover themselves erupting in violent rage, as if in mass reversion to the hot-headed instincts of their ancestors at the glorious Tea Party of 1773. Respectable worthies with three names found themselves storming the courthouse. Amos Adams Lawrence, America’s wealthiest mill owner, famously remarked, “We went to bed one night old-fashioned, conservative, Compromise Whigs & waked up stark mad Abolitionists.” John Greenleaf Whittier experienced a physical revulsion:

    I felt a sense of bitter loss, —
    Shame, tearless grief, and stifling wrath,
    And loathing fear, as if my path
    A serpent stretched across.

    Henry David Thoreau delivered a lecture a few weeks later under the scathing title, “Slavery in Massachusetts,” in support of blowing up the law: “The law will never make men free; it is men who have got to make the law free.” And in upstate New York, the businessman John Brown, taking the fateful next step, declared that “Anthony Burns must be released, or I will die in the attempt,” which sounded the note of death. Burns was not released. John Brown went to Bleeding Kansas, where the note of death produced the Pottawatomie Massacre in 1856, and thence to Harper’s Ferry and everything that followed.  

    A second instance took place in March 1965, this time in response to a police attack on John Lewis and a voting-rights march in Alabama. The event was televised. Everyone saw it. And the furor it aroused was sufficiently intense to ensure that, in our own day, the photo image of young Lewis getting beaten, though it is somewhat blurry, has emerged as a representative image of the civil-rights revolution. It was Lyndon Johnson, and not any of the business moguls or the poets, who articulated the response. Johnson delivered a speech to Congress a few days later in which, apart from calling for the Voting Rights Act to be passed, he made it clear that he himself was not entirely the same man as before. “We shall overcome,” said the president, as if, having gone to bed a mere supporter of the civil rights cause, he had waked up marching in the street and singing the anthem. He went further yet. In a speech at Howard University, he defined the goal, too: “not just equality as a right and a theory, but equality as a fact, and equality as a result,” which inched his toe further into social democratic terrain than any American presidential toe has ever ventured.  

    And, a week after the Voting Rights Act duly passed, the violent note of the 1960s, already audible, began to resound a little more loudly in the Watts district of Los Angeles, prefiguring still more to come over the next years — violence in the ghettos, and among the police, and among the white supremacists, and eventually on the radical left as well. All of which ought to suggest that, in the late spring of 2020, we saw and perhaps participated in yet another version of the same rare and powerful phenomenon: an overnight conversion to the cause of African American rights, sparked by a single, shocking, and visible instance of dreadful oppression, with massive, complicated, and, on a smaller scale, sometimes violent consequences. 

    During the several months that followed the killing of George Floyd, which occurred on May 25, 2020, close to eight thousand Black Lives Matter demonstrations are reported to have taken place in the United States, in more than two thousand locales in every part of the country. Many of those demonstrations must have drawn just a handful of people. Then again, a protest parading under my own windows in Brooklyn in early June filled eight lanes and took half an hour to pass by, and, far from being unusual, was followed by similar marches from time to time, week after week, eventually dwindling in size, then swelling up again, and never disappearing, not for several months. It is reasonable to assume that, nationwide in America, several million people took part in those demonstrations. These were the largest anti-racist demonstrations in the history of the United States, and they were echoed by still other Black Lives Matter demonstrations in a variety of other countries, which made them the largest such event in the history of the world. The scale of the phenomenon makes clear that, whatever the precise size of the crowds, enormous numbers of participants had to be people who, like Amos Adams Lawrence, went to bed as quiet citizens and waked up transformed into militants of the cause, ready to paint their own placards (a disdain for printed placards or anything else bespeaking the dead hand of top-down obedience was a style of the movement) and carry them through the streets, chanting “Black lives matter!” and other, scrappier slogans (“Why are you in riot gear? / I don’t see no riot here!”) that, until yesterday, would never have been theirs. This has been, in short, a major event not just globally, but intimately and individually, one marcher at a time. The intimate and individual aspect has made itself visible, too, in the wave of professional groups and institutions of many sorts that have announced campaigns of their own to break up the segregated aspect (or worse) of institutional life in America — protests and campaigns in any number of business organizations and academic and cultural institutions, unto Kappa Alpha, the Robert E. Lee-revering college fraternity. And, in conformity with the historical pattern, the undertow of violence and destruction has likewise made itself visible, some of it a low-level political violence on the radical left, some of it in prolonged versions too (which is a fairly novel development); some of it a violence on the radical right, the ominous posturing with guns in public, the wave of right-wing car-rammings, the terrorist plots in Michigan, and some murders; and some of it outbreaks of looting, not on the urbicidal scale of the 1960s, but epidemically spread across the country, hotspot to hotspot.

    The furors of 1854, 1965, and 2020 arose in response to particular circumstances, and a glance at the circumstances makes it possible to identify more precisely the intimate and even invisible nature of the mass conversions. The circumstances in 1854 amounted to a political betrayal. The mainstream of the political class had managed for a quarter of a century to persuade the antislavery public in large parts of the North that it was possible to be antislavery and conciliatory to the slave states at the same time, in the expectation that somehow things were going to work out. Instead, the Kansas-Nebraska Act of 1854, by enabling further triumphs of the slave system, demonstrated that nothing was working out. People who thought of themselves as patient and moderate reformers concluded that they had been played. And, with the arrest of a fugitive slave in antislavery’s principal city, the patient and moderate reformers felt personally implicated, too. They erupted in wrath on behalf of Anthony Burns, who was in front of them, and on behalf of the American slaves as a whole, who were mostly far away. They erupted on behalf of America and the principles of the American Revolution, which they understood to be identical to the antislavery cause (as expressed by Walt Whitman, still another enragé, in his poem on the Burns affair, “A Boston Ballad”). But they erupted also on their own behalf, one person at a time. They were earnest Christians who discovered, to their horror, that they had allowed themselves to be duped by smooth-talking politicians into acceding for a quarter of a century, through association with the abomination of slavery, to their own moral degradation or damnation. 

    The “stifling wrath” (Whittier’s phrase) was different in 1965, but not entirely so. Opinion in large parts of the country had come around in favor of civil rights, timidly perhaps, but with a feeling of moral righteousness. The philosophical battle against segregation and invidious discrimination seemed to have been won, as shown by Johnson’s success, a year earlier, in pushing through the Civil Rights Act. Under those circumstances, to see on television the state troopers of the rejected Old South descend upon the demonstrators in Selma, quite as if the country had not, in fact, already made a national decision — to see the troopers assault young John Lewis and other people well-known and respected for their noble agitations — to see, in short, the unreconstructed bigots display yet again, unfazed, the same stupid, brutal arrogance that had just gone down to defeat — to see this was — well, it did not feel like a betrayal exactly, but neither did it feel like a simple political setback. It felt like a national insult. It was an outrage to everyone who had waked up singing “We Shall Overcome.” It was an outrage to the murdered President Kennedy. Then again, to some people the spectacle signified the futility of political action and self-restraint and, in that fashion, it opened the gates of limitless rage. 

The political origins of the mass response to the killing of George Floyd are likewise identifiable, though I will confess that, if you had asked me a day before it started to predict the future of radical reform in America, I would have identified a different set of origins, and I would have extrapolated a different outcome. The origins that did lead to the uprising had everything to do with Black Lives Matter as an organization, and not just as a vague movement. Everyone will recall that, in 2013, a Florida vigilante named George Zimmerman was acquitted of the murder of a black teenager named Trayvon Martin, and the acquittal led to furious demonstrations in Florida, California, and New York. A politically savvy young black woman in San Francisco named Alicia Garza posted stirring responses to the incident on Facebook, which included the phrase “black lives matter,” simply as a heartbroken thought and not as a slogan, and which were reposted by others using #blacklivesmatter. Garza and a couple of her Californian friends, Patrisse Cullors and Opal Tometi, converted their hashtag into a series of social media pages and thus into a committee of sorts.

    Garza was a professional community organizer in San Francisco, and, as she makes plain in her account of these events, The Purpose of Power: How We Come Together When We Fall Apart, she and the little committee did know how to respond to unpredicted events. The next year, when the police in Ferguson, Missouri, shot to death Michael Brown, a spontaneous local uprising broke out, which was the unpredicted event. Garza and her group made their way to Ferguson, and, by scientifically applying their time-tested skills, helped convert the spontaneous uprising into an organized protest. Similar protests broke out in other cities. The Black Lives Matter movement was launched — a decentralized movement animated by a sharply defined outrage over state violence against blacks, with encouragement and assistance from Garza and her circle, “fanning the flames of discontent,” as the Wobblies used to say, and then from other people, too, who mounted rival and schismatic claims to have founded the movement. 

    In New York City, the marches, beginning in 2014, were large and feisty — marches of young people, sometimes mostly white, sometimes multihued, with flames fanned by the New York Police Department, whose uniformed members managed to choke to death Eric Garner, guilty of the peaceable crime of selling bootleg cigarettes. I did a little marching myself, whenever an attractive cohort was passing by. Some of these marches were, in fact, attractive. Then again, some of them seemed to be youth adventures, a little daffy in their anti-police fervor. I kept expecting to discover, at the rear of one march or another, a graduate-student delegation wheeling an antique caboose loaded with dogmas of the university left, barely updated from the identity politics of the 1970s and 1980s, or shrewdly refitted for the anti-Zionist cause. And, to be sure, Angela Davis, who spent the 1970s and 1980s trying to attach the black cause in America to the larger cause of the Soviet Union, came out with a book in 2016 called Freedom Is a Constant Struggle: Ferguson, Palestine, and the Foundations of a Movement, trying to merge, on intersectionalist grounds, Black Lives Matter in Missouri to the Palestinian struggle against Israel. 

As it happens, the anti-Zionists had some success in commandeering an umbrella group of various organizations, the Movement for Black Lives, which arose in response to the upsurge of Black Lives Matter demonstrations. But the anti-Zionists had no success, or only fleeting successes, in commandeering Black Lives Matter itself. Nor did the partisans of any other cause or organization manage to commandeer the movement. Alicia Garza makes clear in The Purpose of Power that, in regard to the maneuverings and ideological extravagances of sundry factions of the radical left, she is not a naïf, and she and her friends have known how to preserve the integrity of their cause. Still, she is not without occasional extravagances of her own. In her picture of African American history, she deems the “iconic trio” of freedom fighters to be Martin Luther King, Malcolm X, and, of all people, Huey Newton, the leader of the Black Panther Party in the 1960s and 1970s, “the Supreme Servant of the People” — though Garza’s San Francisco Bay Area is filled with any number of older people who surely remember the Supreme Servant more sourly.

An occasional ideological extravagance need not get in the way, however, of a well-run organizing project. In San Francisco, a black neighborhood found itself suddenly deprived of school buses, and, as Garza describes, she and her colleagues efficiently mobilized the community, even if that meant involving the followers of Louis Farrakhan, of whom she appears to be not too fond. And lo, bus service resumed. Mobilizing a few neighborhoods around police violence is not any different. Still, the ideological impulses are sometimes hard to repress. From Garza’s standpoint, the overriding necessity during the presidential campaign of 2016 was to denounce the Democratic Party for its evident failings. Militants of Black Lives Matter duly made dramatic interventions in the campaign — at one of Bernie Sanders’ events, in order to denounce Bernie for failing to give black issues a proper consideration; and at an event of Hillary Clinton’s, in order to denounce Hillary for her own related inadequacies. But those were less than useful interventions. They seemed likely only to dampen popular black enthusiasm for the Democratic Party, precisely at a moment when the cause of anti-Trumpism depended on black enthusiasm — which led me to suppose, back in 2016, that Black Lives Matter was bound to remain a marginal movement, brilliantly capable of promoting its single issue, but incapable of maneuvering successfully on the larger landscape.

The leftwing upsurges that, in my too fanciful imagination, seemed better attuned to the age were Occupy Wall Street, which got underway in 2011, and Sanders’ 2016 and 2020 presidential campaigns. Occupy mostly evaded the dismal fate that skeptical observers predicted for it (namely, a degeneration into mayhem, Portland-style); and the Sanders campaigns only partly indulged, and mostly evaded, their own most dismal possibility (namely, a degeneration into full-tilt Jeremy Corbynism). Instead, the two movements gathered up large portions of the American radical left and led them out of the political wilderness into the social mainstream — in the case of Occupy, by transforming the anti-Main Street hippie counterculture into a species of hippie populism, 1890s-style, with a Main-Street slogan about “the ninety-nine per cent”; and, in the case of Bernie’s campaigns, by convincing large portions of the protest left to lighten up on identity politics, to return to an almost forgotten working-class orientation of long ago, and to go into electoral politics. Those were historic developments, and, in my calculation, they were bound to encourage the more practical Democrats to make their own slide leftward into a renewed appreciation for the equality-of-results idea that Lyndon Johnson had tried to get at. And then, with the pandemic, a leftward slide began to look like common sense, without any need to call itself any kind of slide at all. In the early spring of 2020, that was the radical development I expected to see — a dramatic renewal of the unnamed social-democratic cause. Not an insurrection in the streets, but something larger.

    Instead, there was an insurrection in the streets. The insurrection owed nothing at all to nostalgias for the 1890s or Eugene V. Debs or LBJ. It was an antiracist uprising. What can explain this?

    The video of George Floyd explains it. Six or seven years of skillful agitations by the Black Lives Matter movement had made everyone aware of the general problem of police killings of black men, one killing after another, not in massacres, but in a grisly series. The agitations had made everyone aware of the furious resentment this was arousing in black communities everywhere. But Black Lives Matter had also tried to make the argument that police killings represent a larger underlying cruelty in American life, something built into the foundations of society. And, until that moment, the agitations had not been able to overcome a couple of widely shared objections to that last and most radical of contentions.

    There was the objection that, however ghastly the series of killings had proved to be, the series did not constitute a unified wave, and nothing in particular was responsible for it. Ijeoma Oluo is a journalist in Seattle, whose book So You Want to Talk About Race is one of several new popular tracts on these themes. And she puts it this way: 

    In this individualist nation we like to believe that systemic racism doesn’t exist. We like to believe that if there are racist cops, they are individual bad eggs acting on their own. And with this belief, we are forced to prove that each individual encounter with the police is definitively racist or it is tossed out completely as mere coincidence. And so, instead of a system imbued with the racism and oppression of greater society, instead of a system plagued by unchecked implicit bias, inadequate training, lack of accountability, racist quotas, cultural insensitivity, lack of diversity, and lack of transparency — we are told we have a collection of individuals doing their best to serve and protect outside of a few bad apples acting completely on their own, and there’s nothing we can do about it other than address those bad apples once it’s been thoroughly proven that the officer in question is indeed a bad apple.

The second objection was the opposite of the first. It conceded Ijeoma Oluo’s points about police departments. But it went on to argue that, contrary to her contention, the failings of police work are, in fact, widely understood, and a campaign to address the failings is well underway. Perhaps the campaign has not advanced very far in the retrograde America that still flies the Confederate flag, but in other parts of the country, in the enlightened zones, where cities are liberal, and mayors likewise, and police chiefs are reform-minded, the campaign to modernize the police has been sincere, or mostly, and it has been social-scientifically sophisticated, and it has taken aim at racial biases. And if problems persist, these may amount to a failure of communication — the failure to conduct the kind of face-to-face conversations among reasonable people that President Obama promoted at the White House by having a beer with Professor Henry Louis Gates, Jr., and the police officer who had treated Gates as a burglar on his own doorstep. Minor problems, then — problems calling for articulate presentations of up-to-date civic values from liberal politicians and reform leaders.

    But the video was devastating to the first objection. And it was devastating to the second. The video shows a peaceful day on the sidewalks of enlightened Minneapolis. George Floyd is on the ground, restrained, surrounded by police officers, and Officer Derek Chauvin plants a confident knee on his neck. The officer looks calm, self-assured, and professional. Three other cops hover behind him, and they, too, seem reasonably calm, the group of them maintaining what appears to be the military discipline of a well-ordered police unit. Apart from Chauvin’s knee, nothing alarming appears to be taking place. No gunshots ring in the distance, no commotion rises from the street, no shouts against the police or anyone else — nothing that might panic the cops or enrage them or throw them into confusion. And, in that setting, the video shows the outcome. Floyd moans that he cannot breathe. Someone on the sidewalk tries to tell the oblivious Officer Chauvin that something is wrong. And, for the many millions of people who watched the video, the shocking quality was double or triple. 

    If even a firecracker had gone off in the distance, the viewers could have concluded that Officer Chauvin was overcome with fear, and his actions might be understandable, though a more skillful cop would have known how to keep his cool. Or, if only Officer Chauvin had looked wild-eyed and upset, the viewers could have concluded that here was a madman. But, no. Chauvin and the other cops, maintaining their unit discipline, plainly show that all was well, from their standpoint. The four of them make no effort to prevent the people on the sidewalk from observing the event. No one seems embarrassed. These are cops who appear to believe themselves to be operating by the book. 

    And yet, how can they believe such a thing? Everyone who watched that video was obliged to come up with an explanation. The obvious one was that, in Minneapolis, the four police officers do not look like rule-breaking rogues because they are not, in fact, breaking rules — not in their own minds, anyway. Yes, they may be going against the advice proffered by their reform-minded department chief and their hapless mayor, the bloodless liberal. But they are conforming to the real-life professional standards of their fellow officers, which are the standards upheld by the police unions everywhere, which are, in turn, the standards upheld by large parts of the country, unto the most national of politicians. “Please don’t be too nice,” said the president of the United States to the police officers of Long Island, New York, in July 2017, with specific advice to take people under arrest and bang their heads as they are shoved into police vehicles. Why, then, should the four cops in Minneapolis have considered themselves rogues? That was the revelation in the video of George Floyd’s death. 

    And a large public drew large conclusions. To draw momentous conclusions from a single video shot on the sidewalks of Minneapolis might seem excessive. Yet that is how it is with the historic moments of overnight political conversion. There were four million slaves in 1854, but the arrest of a single one proved to be the incendiary event. In the case of George Floyd, the single video sufficed for a substantial public to conclude that, over the years, the public had been lied to about the complexities of policing; had been lied to about bad apples in uniform; had been lied to about the need for patience and the slow workings of the law. The public had been lied to by conservatives, who had denied the existence of a systemic racism; and had been lied to by liberals, who had insisted that systemic racism was being systematically addressed. Or worse, a large public concluded that it had been lied to about the state of social progress generally in America, in regard to race — not just in regard to policing, but in regard to practically everything, one institution after another. Still worse, a great many people concluded, in the American style, or perhaps the Protestant style, that, upon consideration, they themselves had been terribly complicit, and, in allowing themselves to be deceived by the police and the conservatives and the liberals, they had abandoned the black protesters, and they had allowed the police violence and the larger pattern of racial oppression to persist. Those were solemn conclusions, and they were arrived at in the most solemn of fashions, by gazing at a man as he passes from life to death. 

    So masses of people marched in the streets to rectify the social wrong. But they marched also to rectify the wrong nature of their own relation to society. This of course raises the question of what would be the right nature — which happens to be the topic of the new and extraordinarily popular literature of American antiracism. 

    II

    The literary work that shaped the mass conversion to anti-racism in 1854 was Uncle Tom’s Cabin, by Harriet Beecher Stowe, from 1852 — which was much despised by James Baldwin a century later for its demeaning portrait of the very people it was meant to support. The book that, more than any other, shaped the mass conversion in 1965 was Dark Ghetto, a sociological study from that same year, by Kenneth B. Clark — which was much despised at the time by Albert Murray, the author of The Omni-Americans, for what he, too, took to be a demeaning portrait of the very people it was meant to support. The book that, more than any other, has shaped the mass conversion of our own moment is Between the World and Me, by Ta-Nehisi Coates, from 2015 — which was written in homage to Baldwin, and yet is bound to make us wonder what Murray would have thought, if he had lived another few years. 

Between the World and Me has shaped events because, in a stroke of genius, Coates came up with the three main and heartrending tropes of the modern crisis behind the antiracist uprising — to wit, “the talk”; the killing by the police of a young black man; and the young man’s inconsolable mother. The form of the book is a frank and emotional letter from Coates to his young son, which amounts to “the talk,” advising the son on the realities of black life in a hostile white America. The killing that takes place is of an admirable young black man from Coates’ social circle at college. The inconsolable mother is the young man’s mother, whom Coates goes to visit. In laying out these elements, Coates has supplied a vocabulary for speaking about the realities of modern police violence against blacks, which is a language of family life: an intimate language, Baldwinesque and not sociological, a language of family grit and grief.

    Then again, he speaks insistently and emotionally but also somewhat abstractly about the black body and its vulnerability — not the beauty of the black body, but, instead, its mortifications, considered historically. These are the physical horrors of slavery long ago, conceived as horrors of an ever-present era, as experienced by himself as a young boy growing up in the rough neighborhoods of Baltimore, or as a child subjected to what appear to have been his father’s disciplinary beatings. This aspect of the book, the contemplation of the body and its mortifications, amounts, in effect, to a theory of America. Or rather, it amounts to a counter-theory, offered in opposition to the doctrine that he describes as the capital-D “Dream.” The Dream, as he lays it out, is the American idea that is celebrated by white people at Memorial Day barbecues. Coates never specifies the fundamentals of the idea, but plainly he means the notion that, in its simple-minded version, regards America as an already perfect expression of the democratic ideal, a few marginal failings aside. Or he means the notion that, in a more sophisticated way, regards 1776 as the American origin, and regards America’s history as the never-ending struggle, ever-progressive and ever-victorious, a few setbacks aside, to bring 1776 to full fruition. A theory of history, in short.

    His counter-theory, by contrast, postulates that, from the very start, America has been built on the plundering of the black body, and the plundering has never come to an end. This is an expressive idea. It scatters the dark shadow of the past over every terrible thing that happens in the present, which is never wrong to do, if the proportions are guarded. Yet Coates adopts an odd posture toward his own idea, such that, in one way or another, he ends up miniaturizing certain parts of his story. When he conjures the Dream, the precise scene that he brings to life is of little blond boys playing with toy trucks and baseball cards at the Memorial Day barbecue, as if this were the spectacle that arouses his resentment. When he conjures his own adult experience with the historic mortifications, he describes a disagreeable altercation on an escalator on the Upper West Side of Manhattan, where a white lady treats him and his toddler son in a tone of haughty disdain, and is seconded by a white man, and the temperature rises — as if this were the legacy of the horrors of long ago.

The incident on the escalator constitutes a climax of sorts in Between the World and Me — the moment when Coates himself, together with his toddler, has to confront the reality of American racism. And yet the incident is inherently ambiguous. He gives us no precise reason to share his assumption that the woman and the man are angry at him on a racist basis — an observation made by Thomas Chatterton Williams in his discussion of the scene in his own book, Self-Portrait in Black and White. Williams even wonders whether Coates’ anger at the lady’s haughtiness might not have alarmed the lady and the man, with misunderstandings of every kind likely to have resulted — an easy thing to imagine in a town like New York, where sidewalk incidents happen all the time, and whites presume their own liberal innocence, and blacks do not, and correct interpretations are not always obvious. The ambiguity of the scene amounts to yet another miniaturization. The miniaturized portraits are, of course, deliberate. They allow Coates to express the contained anger of a man who, in other circumstances, would be reliably sweet-tempered.

He does present himself as a loving man — as a father, of course (which confers a genuine tenderness on the book), but also in regard to African American life as a whole. And yet something about this, too, his love for black America, ends up miniaturized. His principal narrative of African America is a portrait of Howard University from his own school-days, presented as an idyllic place, intellectually stimulating, pleasant, socially marvelous, affection-inspiring and filled with family meaning, too, given that his father, the Black Panther, had worked there as a research librarian — an ideal school, in sum, designed to generate graduates such as himself, therefore a splendid achievement of black America. But the argument that he makes about the ever-present universe of American slavery and the eternal vulnerability of the black body makes it seem as if, over the centuries, black America has achieved nothing at all, outside of music, perhaps, to which he devotes a handful of words. It is a picture of the black helplessness that racist whites like to imagine, supine and eternally defeated. This was Albert Murray’s objection to the black protest literature of the 1960s, with its emphasis on victimhood — the literature that was unable to see or acknowledge that, in the face of everything, black America has contributed from the very start to what Coates disparages as the Dream, or what Murray extols as the Omni-America, which is the mulatto civilization that, in spite of every racial mythology, has always been white, black, and American Indian all at once.

I do not mean to suggest that Coates’ bitterness is inauthentic. Frank B. Wilderson III is twenty years older than Coates and, with his degrees from Dartmouth, Columbia, and Berkeley, is today the chair of the African-American Studies department at the University of California, Irvine. His recent book, Afropessimism, conjures a similar landscape of anger and bitterness, as if in confirmation of Coates, except in a version that is far more volcanic, or perhaps hysterical. Coates during his college years in the 1990s was, as he explains, an adept of Malcolm X, but then outgrew the exotic trappings of Malcolm’s doctrine, without rejecting the influence entirely. Wilderson, during his own youth in the 1970s, was a “revolutionary communist,” in an acutely intellectual, Third Worldist fashion. He was an admirer of the Black Liberation Army, which was the guerrilla tendency that emerged from Eldridge Cleaver’s faction of the Black Panthers on the West Coast (and from City College in New York). The great inspiring global example of revolutionary resistance, in Wilderson’s eyes, was the Popular Front for the Liberation of Palestine, given its uncompromising struggle against the Zionist state — which, being a man of ideologies, he imagined (and evidently still imagines) to be a white European settler colony. And the Black Liberation Army, in his view, was the PFLP’s American counterpart.

    Revolutionary communism left him feeling betrayed, however, or perhaps singed — damaged and enraged not by his black comrades in the United States, but by everyone else: by the whites of the revolutionary communist movement (namely, the Weather Underground, who gave up the struggle and returned to their lives of white privilege), and even more so by the non-blacks “of color.” He felt especially betrayed by the Palestinians. He was horrified to discover that a Palestinian friend in his hometown of Minneapolis, who despised Israelis, reserved a particular contempt for Israel’s Ethiopian Jews. And, in despair at the notion that even Palestinians, the vanguard of the worldwide vanguard, might be racist against blacks, Wilderson turned away from revolutionary Marxism, and he distilled his objections and complaints into a doctrine of his own — it is a doctrine, though a very peculiar one — which he calls Afropessimism. 

    The doctrine is a racialized species of post-Marxism. Wilderson thinks that, instead of the world being riven by Marx’s economic class conflict, or by the imperialist versus anti-imperialist conflict of Marxism in its Third Worldist version, it is riven by the conflict between the non-blacks and the blacks. The non-blacks regard themselves as the capital-H Human race, and they do so by seeing in the blacks a sub-human race of slaves. And the non-blacks cannot give up this belief because, if they did so, they would lose their concept of themselves as the Human race. Nor is there any solution to this problem, apart from the “end of the world,” or an apocalypse. The idea is fundamentally a variant of certain twentieth-century theories about the Jews — e.g., Freud’s notion that hatred of the Jews supplies the necessary, though unstated, foundation for the Christian concept of universal love. Freud’s theory is not especially expressive, though. Wilderson’s theory expresses. It vents. But the venting is not meant to serve a constructive purpose. Wilderson tells us that he studied under Edward Said at Columbia University, and he was greatly influenced. He admired Said’s resolute refusal to accept the existence of a Jewish state in any form. But Said’s revolutionary aspiration, in conformity with the Popular Front for the Liberation of Palestine, was to replace the Jewish state with something else. Wilderson’s Afropessimism entertains no such aspirations. It is “a looter’s creed,” in his candid phrase — meaning, a lashing out, intellectually violent, without any sort of positive application. Positive applications are inconceivable because the non-black hatred of blacks is unreformable.

    Still, he does intend Afropessimism to be a demystifier, and in this regard his doctrine seems to me distinctly useful. The doctrine beams a clarifying light on the reigning dogma on the American left just now, which is intersectionalism — a dogma that is invoked by one author after another in the antiracist literature, with expressions of gratitude for how illuminating it is, and how comforting it is. Intersectionalism is a version of the belief, rooted in Marx, that a single all-encompassing oppression underlies the sufferings of the world. Marx considered the all-encompassing oppression to be capitalism. But intersectionalism considers the all-encompassing oppression to be bigotry and its consequences — the bigotry that takes a hundred forms, which are racism, misogyny, homophobia, and so forth, splintering into ever smaller subsets. Intersectionalism considers that various subsets of the all-encompassing oppression, being aspects of the larger thing, can be usefully measured and weighed in relation to one another. And the measuring and weighing should allow the victims of the many different oppressions to recognize one another, to identify with one another, and to establish the universal solidarity of the oppressed that can bring about a better world.  

But Wilderson’s Afropessimism argues that, on the contrary, the oppression of blacks is not, in fact, a variation of some larger terrible thing. And it is not comparable to other oppressions. The oppression of blacks has special qualities of its own, different from all other oppressions. He puts this hyperbolically, as is his wont, by describing the bigotry against blacks as the “essential” oppression, and not just in the United States — though it ought to be obvious that, whether it is put hyperbolically or not, the oppression of blacks throughout American history does have, in fact, special qualities. On this point he is right. He is committed to his hyperbole, however, and it leads to an added turn in his argument. He contemplates, as an exercise in abstract analysis, the situation of a black man who rapes a white woman. In his view, the black man ought to be regarded as more oppressed than his own victim. The man may have more force, but he has less power. He is the victim of the “essential” oppression, and she is not, which makes his victimhood deeper. Wilderson’s purpose in laying out this argument is to shock us into recognizing how profound black oppression is.

Only, the argument leads me to a different recognition. I would think that, if black oppression cannot be likened to other oppressions — if a special quality renders the black oppression unique — the whole logic of intersectionalism collapses. For if the black oppression is sui generis, why shouldn’t other oppressions likewise be regarded as sui generis? The oppression experienced by the victims of rape, for instance — why shouldn’t that, too, be regarded as sui generis? Why not say that many kinds of oppression are genuinely terrible, and there is no point in trying to establish a system for comparing and ranking the horrible things that people undergo? There might even be a virtue in declining to compare and rank one oppression with another. A main result of comparing and ranking the various oppressions is, after all, to flatten the individual experience of each, which softens the terribleness of the oppression — an especially misguided thing to do in regard to the racial history of the United States.

It may be a mistake to argue with Frank Wilderson III too much. He is a brilliant man with a literary gift that is only somewhat undone by a graduate-school enthusiasm for critical theory. But, at the same time, a cloud of mental instability or imbalance drifts across his book. He explains in his opening pages that his shock at discovering a casual anti-black racism among Palestinians induced in him a serious nervous breakdown, and he appears never to have fully recovered. He describes the sinister persecution that he believes he and his lover underwent at the hands of the FBI, and his account hints at paranoia. Then, too, it is striking how insistently he goes about miniaturizing his own picture of the racism against blacks that he believes to be inherent in the whole of civilization. The great traumatic experience of Wilderson’s childhood appears to have been the moment when the mother of a white friend persisted in asking him, “How does it feel to be a Negro?”

He is traumatized by the poor reception of his incendiary ideas at an academic conference in Berlin, not just among the straight white males whose essence it is to be oppressive, but among the women and non-whites whose intersectional essences ought to have instilled in them a solidarity with his oppressed-of-the-oppressed outlook. Especially traumatic for him is a Chinese woman at the scholarly conference, who, in spite of being multi-intersectionally oppressed, fails to see the persuasive force of his ideas. Then, too, a fight that turns nasty with a white woman in the upstairs apartment back in Minneapolis seems to him a reversion to the social relations of slavery times. The man has no skin. Every slight is a return to the Middle Passage. His book resembles Ta-Nehisi Coates’ in this respect yet again, except with a pop-eyed excess. The shadow of slavery times darkens even his private domestic satisfactions. He appears to regard his white wife as, in some manner, his slave master, though he seems not to hold this against her. It is positively a relief to learn from his book that, during his career as a communist revolutionary, he went to South Africa to participate in the revolution (by smuggling weapons, while working as a human-rights activist for Amnesty International and Human Rights Watch), but had to flee the country because he was put on a list of “ultra-leftists” to be “neutralized” by the circle around Nelson Mandela himself — a level-headed person, at last!

    But it is dismaying also to notice that, for all his efforts to identify anti-black racism and to rail against it, the whole effect of Wilderson’s Afropessimism is to achieve something disagreeably paradoxical. He means to make a forward leap beyond Marx, and he ends up making a backward leap to the era, a generation before Marx, when Hegel felt entitled to write the black race out of capital-H History. Hegel believed that black Africa, where slavery was practiced, existed outside of the workings of historical development that functioned everywhere else — outside of the human struggles that make for civilization and progress. Hegel was, of course, hopelessly ignorant of black life. Wilderson is not, and, even so, he has talked himself into reproducing the error. Wilderson, too, believes that blacks live outside of History. It is because blacks have never ceased to be the slaves that Hegel imagined them permanently to be. Wilderson explains: “for the Slave, historical ‘time’ is not possible.” Here is the meaning of the bitterness that Wilderson expresses wildly, and that Coates expresses not wildly. It is more than a denial of the black achievement in America, along the lines that exasperated Murray half a century ago. It is a denial, in effect, of tragedy, which exists only where there is choice, which is to say, where there is history. It is an embrace of the merely pitiful, where there is no choice, but only suffering — an embrace of the pitiful in, at least, the realm of rhetoric, where it is poignant (these are literary men), but lifeless.  

Ibram X. Kendi appears, at first glance, to offer a more satisfactory way of thinking in his two books on American racism, Stamped from the Beginning: The Definitive History of Racist Ideas in America, which runs almost six hundred pages, as befits its topic, and the much shorter How to Be an Antiracist, which distills his argument (and does so in the autobiographical vein that characterizes all of the current books on American racism). Kendi does believe in history. He thinks of the history of racism as a dialectical development instead of a single despairing story of non-progress, as in Wilderson’s despairing rejection of historical time, or a single story of ever-victorious progress, as in the naive celebration of the sunny American “Dream.” He observes that racist ideas have a history, and so do antiracist ideas, and the two sets of ideas have been in complicated conflict for centuries. He also observes that black people can be racist and white people can be antiracist. He cites the example of the antislavery American white Quakers of the eighteenth century. He is the anti-Wilderson: he knows that the history of ideas about race and the history of races are not the same.

    His fundamental approach is, in short, admirably subtle. Still, he feels the allure of simplifying definitions. Thus: “A racist idea is any idea that suggests one racial group is inferior or superior to another racial group in any way.” And, with this formula established, he sets up a structure of possible ideas about blacks in America, which turn out to be three. These are: (a) the “segregationist” idea, which holds that blacks are hopelessly inferior; (b) the “assimilationist” idea, which holds that blacks do exhibit an inferiority in some regard, but, by assimilating to white culture, can overcome it; and (c) the “antiracist” idea, which holds that no racial group is either superior or inferior to any other “in any way.” His definitions establish what he calls the “duality of racist and antiracist.” And with his definitions, three-part divisions, and dualities in hand, he goes roaming across the American centuries, seeking to label each new person or doctrine either as a species of racist, whether “segregationist” or “assimilationist,” or else as a forthright “antiracist.”

In How to Be an Antiracist, he recalls a high school speech-contest oration that he delivered to a mostly black audience in Virginia twenty years ago, criticizing in a spirit of uplift various aspects of African-American life — which, at the time, seemed to him a great triumph of his young life. In retrospect, though, sharpened by his analytic duality of racist and antiracist, he reflects that, in criticizing African Americans, his high-school self had fallen into the “assimilationist” trap. He had ended up fortifying the white belief in black inferiority — which is to say he had therefore delivered a racist speech! Is he fair to himself in arriving at such a harsh and humiliating judgment? In those days he attended Stonewall Jackson High School in Manassas, and, though he does not dwell on how horrible such a name is, it is easy to concede that, under the shadow of the old Confederacy, a speech criticizing any aspect whatsoever of black life might, in fact, seem humiliating to recall. On the other hand, if every commentary on racial themes is going to be summoned to a high-school tribunal of racist-versus-antiracist, the spirit of nuance, which is inseparable from the spirit of truth, might have a hard time surviving.

Kendi turns from his own mortifying student oration to the writings of W.E.B. Du Bois. He recalls Du Bois’ famous “double consciousness” in The Souls of Black Folk, which reflected a desire “to be both a Negro and an American.” In Kendi’s reasoning, an “American” must be white. But this can only mean, as per his definitions, that W.E.B. Du Bois was — the conclusion is unavoidable — a racist, in the “assimilationist” version. Du Bois was a black man who wished no longer to be entirely black. Or worse, Du Bois wanted to rescue the African Americans as a whole from their “relic of barbarism” — a racist phrase, in Kendi’s estimation — by having the African Americans assimilate into the white majority culture. Du Bois’ intention, in short, was to inflict his own racism on everyone else. Such is the ruling of the high-school tribunal.

    It is an analytical disaster. The real Du Bois was, to the contrary, a master of complexity, who understood that complexity was the black fate in America. Du Bois did not want to become white, nor did he want to usher the black population as a whole into whiteness. He wanted black Americans to claim what was theirs, which was the reality of being black and, at the same time, the reality of being American, a very great thing, which was likewise theirs. He knew that personal identity is not a stable or biological fact: it is a fluidity, created by struggle and amalgamation, which is the meaning, rooted in Hegel’s Phenomenology of Mind, of “double consciousness.” A man compromised by “assimilationist” impulses? No, one of the most eloquent and profound enemies of racism that America has ever produced. 

Kendi is confident of his dualities and definitions. He is profligate with them, in dialectical pairings: “Cultural racist: One who is creating a cultural standard and imposing a cultural hierarchy among racial groups.” Versus: “Cultural antiracist: One who is rejecting cultural standards and equalizing cultural differences among racial groups.” And, with his motor running, one distinguished head after another falls beneath his blade. He recalls Jesse Jackson’s condemnation, back in the 1990s, of the campaign to teach what was called Ebonics, or black dialect, to black students. “It’s teaching down to our children,” said Jackson, which strikes Kendi as another example of “assimilationist” error. But Kendi does not seem to recognize who Jesse Jackson is. In his prime, Jesse Jackson was arguably the greatest political orator in America — the greatest not necessarily in what he said, which ran the gamut over the years, but in the magnificent way he said it. And the grandeurs of Jackson’s oratorical technique rested on the grandeurs of the black church ministry, which rest, in turn, on the heritage of the English language at its most majestic, which means the seventeenth century and the King James Bible. In condemning the promotion of Ebonics, Jackson was not attacking black culture. He was seeking to protect black culture at its loftiest, as represented by his own virtuosity at the pulpit and the podium — or so it seems to me.

But then, Kendi does not like the hierarchical implications of a word like “loftiest.” Naturally he disapproves of the critics of hip hop. He singles out John McWhorter, who has seen in hip hop “the stereotypes that long hindered blacks,” but he must also have in mind critics like the late Stanley Crouch, who condemned hip hop on a larger basis, in order to defend the musical apotheosis that Crouch identified with Duke Ellington — condemned hip hop, that is, in order to defend the loftiness of black culture in yet another realm. In this fashion, Kendi’s dualities of racist and antiracist turn full circle, and Ibram X. Kendi, the scourge of racism, ends up, on one page or another, the scourge of entire zones — philosophy, oratory, jazz — of black America’s greatest achievements.

    His ostensible purpose is to help good-hearted people rectify their thinking. It is a self-improvement project, addressed to earnest readers who wish to purge their imaginations of racist thoughts, in favor of antiracist thoughts. This sort of self-improvement is, of course, a fad of the moment. An early example was Race Talk and the Conspiracy of Silence: Understanding and Facilitating Difficult Dialogues on Race, by the psychologist Derald Wing Sue, from 2015, a serious book with its share of genuine insights into microaggressions and other features of the awkward conversations that Americans do have on topics of race. White Fragility: Why It’s So Hard for White People to Talk About Racism, by Robin DiAngelo, a diversity coach, is perhaps the best-known of these books — a slightly alarming book because its reliance on identity-politics analyses has the look of the right-wing race theoreticians of a century ago, except in a well-intentioned version. Ijeoma Oluo’s So You Want to Talk About Race, with its breezy air, is the most charming of the new books, though perhaps not on every page. But Kendi’s version is the most ambitious, and the most curious. 

    He does not actually believe in the possibilities of personal rectification — not, at least, as a product of education or moral suasion. In Stamped from the Beginning, he observes that “sacrifice, uplift, persuasion and education have not eradicated and will not eradicate racist ideas, let alone racist policies.” The battle of ideas does not mean a thing, and racists will not give up their racism. The people in power in the United States have an interest in maintaining racism, and they will not give it up. “Power will never self-sacrifice away from its self-interest. Power cannot be persuaded away from its self-interest. Power cannot be educated away from its self-interest.” Instead, the antiracists must force the people in power to take the right steps. But mostly the antiracists must find their own way, in his phrase, of “seizing power.” The phrase pleases Kendi. “Protesting against racist power and succeeding can never be mistaken for seizing power,” he says. “Any effective solution to eradicating American racism” — he means any effective method for eradicating it — “must involve Americans committed to antiracist policies seizing and maintaining power over institutions, neighborhoods, countries, states, nations — the world.” And then, having seized power, the antiracists will be able to impose their ideas on the powerless.

    This attitude toward the seizure of power is known, in the old-fashioned leftwing vocabulary, as putschism. But as everyone has lately been able to see, there is nothing old-fashioned about it. The manifesto that was signed not long ago by hundreds of scholars at Princeton University, calling for the university administration to ferret out racist ideas among the professors, was accepted, and the university announced its intention to set up an official mechanism for investigating and suppressing professorial error. Can this really be so? It is so, and not just at Princeton. The controversies over “cancel culture” are controversies, ultimately, over the putschist instinct of crowds who regard themselves as antiracist (or as progressive in some other way) and wish to dispense with the inconveniences of argument and persuasion, in favor of getting some disfavored person fired or otherwise shut up. And the controversies have spread from the universities to the arts organizations and the press. I would think that anyone who admires Kendi’s argument for seizing power could only be delighted by the successful staffers’ campaign at the New York Times to fire its eminently liberal op-ed editor, whose error was to adhere to the Times tradition of publishing contrarian right-wing op-eds from time to time — though other people may suppose that putsches in the newsroom and in the universities amount to one more destructive undertow in the larger constructive antiracist wave.  

A difficulty with putschism, in any case, has always been that putsch begets putsch, and the hard-liners will eventually set out to overthrow their wimpier comrades, and the reactionaries will set out to overthrow the lot of them; and truth will not be advanced. But apart from the disagreeable impracticality of the putschist proposal, what strikes me is the inadequacy of Kendi’s rhetoric to express the immensity and the depth of the American racial situation. It is a dialectical rhetoric, but not an expressive one. It amounts to a college Bolshevism, when what is required is — well, I don’t know what is required, except to remark that, when you read Du Bois, you do get a sense of the immensity and the tragedy, and the inner nature of the struggle, and the depth of the yearnings.

    Isabel Wilkerson’s alternative to this kind of thinking, presented in Caste: The Origins of Our Discontents, manages to be lucid and poetic at the same time, perhaps not in every passage, but often enough over the course of her few hundred pages. She wishes to speak principally about social structures, and not as much about ideas. Only, instead of looking at economic classes, which is what people typically think of when they think about social structures, she speaks about social castes, as in India. The caste system in traditional Indian society is a rigid and ancient social structure, which divided and still divides the population into inherited classes, whose members work at certain occupations and not others, and perhaps dress in certain ways, or are physically distinct, or have distinctive names, and are forever stuck in the eternity of their caste status. 

There was a vogue in the 1930s and 1940s for social scientists to venture into the scary old American Deep South and, by applying surreptitiously the techniques of anthropology, to look for social structures of that kind in Jim Crow America. Isabel Wilkerson is fascinated by those people — by the anthropologist Allison Davis especially, a pioneering black scholar, to whom she devotes a few enthusiastic pages in her book. She is taken with Davis’ insights and those of his colleagues. She sets out to update the insights to our own era. And, in doing so, she comes up with a marvelous insight, though it takes her until her fourth chapter to lay it out. A caste system, as she describes it, is defined by its antiquity. It resembles a theater play that has been running for a long time, with actors who have inherited their roles and wear the costumes of their predecessors. “The people in these roles,” she explains, “are not the characters they play, but they have played the roles long enough to incorporate the roles into their very being.” They have grown accustomed to the distribution of parts in their play — accustomed to seeing who plays the lead, who plays the hero, who are the supporting actors, who plays the comic sidekick, and who constitute the “undifferentiated chorus.” The play and the roles are matters of habit, but they take them to be matters of reality.

    In a social system of that sort, custom and conformity are ultimately the animating forces. But then, in the American instance, if custom and conformity are the animating forces, there might not be much point in analyzing too deeply the ideas that people entertain, or think they entertain. And it might not be necessary to go rifling through a philosopher’s papers, looking for unsuspected error. Nor should it be necessary to set up language committees to promote new vocabularies and ban the old ones, in the belief that language-engineering will solve the social problems of past and present. That is Isabel Wilkerson’s major insight. She prefers to make social observations.

She glances at India in search of perspective on caste structures and customs, and, although Indian civilization differs in every possible way from American civilization, she is struck by the American parallels — by the visible similarities between the African-American caste status in the United States, at the disdained or reviled bottom of American society, and the status of the lowest caste in India, the Dalits, or untouchables, at the disdained or reviled bottom of Indian society. She does seem to be onto something, too. She tells us that, in India, Dalit leaders and intellectuals have been struck by the same parallels, and they have recognized the far-away African-Americans as their own counterparts, and have felt an instinctive and sympathetic curiosity. And then, seeking to deepen her perspective, Wilkerson examines a third instance of what she believes to be a caste structure, which was the situation of the Jews under the Nazis in Germany.

This seems to me only partly a good idea. There is no question that, in traditional Christian Europe, as well as in the traditional Muslim world, the Jews occupied the position of a marginalized or subordinate caste, with mandated clothing, sundry restrictions, humiliations, and worse. Traditionalism, however, was not the Nazi idea. Still, it is true that, on their way to achieving their non-traditional goal, the Nazis did establish a caste system of sorts, if only as a transitional state, with the Jews subjected to the old ghetto oppressions in an exaggerated form. And some of those measures drew overtly on the Jim Crow precedent in America. Wilkerson reminds us that, in preparation for establishing the Nuremberg Laws for Jews in Germany in 1935, the Nazi leaders undertook a study of American racial laws, the laws against miscegenation, the laws on blood purity, and so forth. And with the American example before them, the Nazis established their Law for the Protection of German Blood and German Honor and their larger code. She tells us that, in regard to blood purity, the Nazis even felt that America, with its “one drop” mania, had gone too far! — which is not news, but is bound to horrify us, even so.

    But she also draws another benefit from making the Nazi comparison, which has to do with the tenor and the intensity of her exposition. The Nazi comparison introduces a note from abroad, and the foreign note allows her to speak a little more freely than do some of the other commentators on the American scene. The foreign note, in this instance, is an uncontested symbol of political evil, and, having invoked it, she feels no need to miniaturize her American conclusions, and no need to introduce into them an aspect of childhood traumas. She does not draw a veil of critical theory over her presentation. Michel Foucault’s focus on the body appears to enter into her thinking not at all. Nor does she feel it necessary to toy with mental imbalances and nihilist gestures. Nor does she look for ways to shock anyone, beyond what is inherent to her topic.

    She points at the Nazis, and at the American champions of Jim Crow — points at the medical doctors in Germany, and at their medical counterparts in America, who, in the grip of their respective doctrines, felt free to conduct monstrous scientific experiments on victims from the designated inferior race. And any impulse that she may have felt to inhibit her expression or resort to euphemism or indirection disappears at once. In short chapters, one after another, she paints scenes — American scenes, not German ones — of mobs murdering and disfiguring their victims, of policemen coolly executing men accused of hardly anything, of a young boy murdered because of a love-note sent to a girl from the higher caste. She paints tiny quotidian scenes of minor cruelty as well — the black Little Leaguer who is prevented from joining his white teammates in a joyous festivity, or, then again, the Negro League career of Satchel Paige, perhaps baseball’s greatest pitcher, who watched his prime years go by without being able to display his skill in the Major Leagues. She does not twist her anger at these things into something understated, or into something crazy. Nor does she redirect her anger at secondary targets — at the white American resistance to discussing these things, or the lack of communication, or the lack of sympathy. Silence and the unspoken are not her principal themes. 

    Her theme is horror, the thing itself — the murdered victims dangling from the trees. Still, she does get around to addressing the phenomenon of denial and complacency and complicity, and, when she does so, her analytical framework allows her to be quietly ferocious. She reminds us that, apart from leading the Confederate troops in their war against the American republic, Robert E. Lee was a man who personally ordered the torture of his own slaves. He was a grotesque. She tells us that, even so, there were, as of 2017, some two hundred thirty memorials to Robert E. Lee in the United States. To underscore her point, she describes in a flat reportorial tone a public hearing in New Orleans on the matter of what to do about a statue of Lee, at which a retired Marine Corps officer spoke: “He stood up and said that Erwin Rommel was a great general, but there are no statues of Rommel in Germany. ‘They are ashamed,’ he said. ‘The question is, why aren’t we?’” — which is Isabel Wilkerson’s manner of staring her readers in the eye.  

It would be possible to go through Caste and pick it apart, from the standpoint of social theory. But social theory is not really her theme, even if the anthropologists of the 1930s are her heroes and their concept of social caste drives her book forward. Mostly the work is an artful scrapbook of various perspectives on the black oppression in America, divided into short sections — on the idea of caste, on the Indian social system, on Indian scholars she has met, on her visits to Germany, on Nazi legal codes, on the horrors of lynching, and still more horrors of lynching, on the severity of Jim Crow laws, on the pattern of police murders of blacks, and, then again, on her own experiences. She recounts any number of vexing or infuriating encounters that she has had with people at airports or restaurants, the DEA agents who decide that she is suspicious, the waiter who manages not to serve her table, together with vexing experiences that other black people have had — a distinguished black man mistaken for a bicycle messenger in his own apartment building, a student from Nigeria, whose language is English, praised for being able to speak it.

    Certain of these incidents may seem ambiguous, and yet they do add up, such that, even if one or two of the incidents might be viewed in a kinder light by someone else, the pattern is hard to deny. The meaning of the pattern becomes identifiable, too, given the historical scenes that she has described. And yet, although she has every desire to register and express her own fury, and no desire to tamp it down, she has also no desire to drown in it. She looks for reassuring signs of a liberating potential, and she finds them here and there —  in the moral progress of the Germans and their reckoning with civic monuments. Barack Obama’s presidency strikes her as a not insignificant step forward. As for what came after Obama — well, she concludes the main text of her book with a sentimental anecdote about a surly MAGA-hatted white plumber, unhappy at having to work for a black lady in her leaky basement, who softens up after a while, which suggests the possibility of progress, in spite of everything. 

    I suppose that hard-bitten readers will figure that Wilkerson goes too far in clinging to some kind of optimism for poor old America. But then, I figure that I have some acquaintance with the potential readership for her book and the several other books that I have just discussed, if only because the readership spent several months in the spring and summer of 2020 marching around my own neighborhood. I can imagine that each of those books is bound to appeal to some of those militant readers, and to disappoint the others. Ta-Nehisi Coates will always be a popular favorite, if only because of his intimate voice, which has an attractive tone regardless of what he happens to be saying. Then again, in the course of the uprising, a carload of gangsters profited from the mayhem to break into a liquor store around the corner from my building and to carry away what they could. And those particular people, if they happen to be book readers, which is entirely possible, may look on Coates with a cold eye, given how lachrymose and virtuous he insists on being. They also won’t care for Alicia Garza’s California life-story and organizers’ tips in The Purpose of Power, and they are certainly not going to see anything of interest in the cheerful suggestions to white people in Ijeoma Oluo’s So You Want to Talk About Race. The gangsters might like Frank Wilderson III’s Afropessimism, though. Heartily I recommend it to them. Still other people, large numbers of them, will prefer the scholarly dialectics and historical research of Ibram X. Kendi. 

    And yet, I suspect that among the book-reading protesters, the largest number will prefer, as I do, Isabel Wilkerson and her Caste — prefer it because of her emotional honesty and directness, and because of her anger, which somehow ends up angrier than everyone else’s among the writers, and, then again, because it is refreshing to find someone with a greater interest in the shape of society than in the marks of interior belief, and still again, because of her streak of optimism. I cannot prove it, but, in my own perception, directness, anger, and a streak of optimism were the main qualities that marched in the streets during those months — even if some people were adrift in academic leftism, and other people were looters, and still others rejoiced in singing, “Jesus is the answer / for all the world today.” The protesters chanted only a handful of slogans, which testified to the discipline that mostly dominated those enormous marches. Sometimes — not often — they chanted “George! Floyd!” — which was the most moving chant of all: the note of death, which underlay the vast national event. But mostly the protesters chanted “black lives matter” — which was and is a formidable slogan: an angry slogan, plaintive, unanswerable. And somehow “black lives matter” is a slogan flecked with a reform spirit of democratic hopefulness, not exactly as in 1854, and not exactly as in 1965, and yet, given the different circumstances, pretty much as in those other eras, in conformity with the invisible geological structures of the American civilization.

    Without

    It is a warm winter mid-afternoon.
    We must understand what happened is
    happening. The colossus stands before us with its signature
    pre-emptivity. It glints. It illustrates.
    At my feet the shadow of the winter-dead bushes wave
    their windburnt stalks. Their leaves
    cast gem-cut ex-
    foliations on the patio-stone—bushfulls of shadow
    blossoming—& different-sized
    heads—& in them leaves, flowers, shoots, burgeonings—
    though when I look up again from their grey chop & slip
    what is this winterdead bush
    to me. This is how something happens but what.
    Inside, the toddlers bend over and tap. They cannot yet
    walk or talk. They sit on the floor one in the high chair. They
    wait. They tap but make no sound. The screen they peer
    down into waiting is
    too slow. The trick
    won’t ever happen
    fast enough. They are waiting for their faces to
    dissolve, to be replaced by the
    quick game.
    If you speak to them, they don’t look up.
    The story doesn’t happen fast enough.
    The winterdead heads move in a sudden breeze.
    The wilderness grows almost giddy with alternatives
    on the cold patio. I stand barefoot in it.
    I always do this as it
    always does this.
    It lies on me. Scribbles a summer-scrawl. I watch my
    naked feet take on the shadow-blossoming without a trace
    of feeling. It feels
    good. As long as I see it it feels
    like years, invasions, legends—a thing with something at its heart—
    it moves the way the living move absent of will—
    the wind will define what is happening here—I call
    a name out—just to check—
    at the one wearing the purple jumpsuit
    with the small blue elephant
    stitched into
    it. The young
    of the elephant starve because the matriarch
    is killed before it can be passed on—where water is, where safe passage,
    how
    to forage, how remember, how mourn. But I
    was talking about the logo.
    If you try to rebuild the world you will go crazy.
    Come outside,
    come out take off your shoes.
    What did you do when the world was ending.
    Before the collapse.
    In the lull.
    They look down into the screen. I can hear
    a towhee make two notes then stop. Can hear, further off,
    a woodpecker search the hollow. Tap tap. A silence
    which goes in way too deep
    filling this valley
    I think.
    I had not heard it till
    a minute ago.
    Tap tap. Seeking the emptiness. What breeds in it. The festering.
    The nourishment.
    The whole valley echoes. Tap.
    And a single-engine plane now, like a blender.
    When it goes by the sky is much smoother.
    And the brook running through when wind dies down. There it is.

    We Refused

    amputation. Above the
    knee. You
    r so cold. Winter
    light moves up

    your neck to yr
    lips. For the duration of
    this song to u
    mother the cold

    light moves from yr
    lips to yr new
    permanently
    shut eyes. You

    can’t rave any
    more, slapping
    fury over the countdown of
    minutes, u can’t force yr

    quip in. The hills
    where the sun’s heading
    maintain their dead
    rest. No wind. No rain. The new

    wrong temps in-
    filtrate the too-dry
    grove, each stiffly curling silvery
    leaf—all up

    the slopes. All gleams
    momentarily.
    Each weed at
    the foot adds its

    quick rill of
    shimmering. Then off
    it goes. The in-
    candescent touches it, then

    off it goes.
    All afternoon day will do
    this. Touching,
    taking each thing up—no

    acceleration.
    Dry. Cold. Here
    mother is when it reaches
    yr eyes, the instant when it

    covers yr
    lids, curved to catch all
    brilliance, nothing
    wasted, carved, firm,

    while whatever

    is behind them,

    mind-light, goes.

    Maybe it will
    rain again
    the glittering says,
    but until then I

    will imitate the
    sheen of
    nourishment, of plenty, it says, I
    will be yr water,

    yr rivulet of

    likewater—while I, I, out here,

    bless you with
    this gorgeous
    uselessness
    mother, this turning

    of the planet onto
    yr eyes that refuse
    the visible now & ever
    again….

    We kept u
    as long as we
    cld whole.
    I have no idea

    what this realm is

    but it is ours,

    and as long as u
    are stuck in
    appearance I
    wish for the

    wind-glitter
    to come each day once
    to where you lie
    and wash you

    clean. Losing
    information yr gleaming
    shut lids light
    the end of the whole

    of this day

    again. Let it

    happen again.

    The Story of Dalal

    When the mighty men came back from faraway places, they were strangers in their own homes. They were catered to and kept in the dark. At some point the fathers had to be brought in, implicated if you will, in the deeds of their sons and their daughters, but until that day dawned, until a daughter’s transgressions became too public a matter to be ignored, or a son’s ways could no longer be indulged, the men were pampered and left ignorant. In the dark hours, when a reckoning could no longer be avoided, when the code of the place had been stretched to the breaking point, the women had to do things of great cruelty. It was their burden, their task.

    “She is the sister of men” was the highest compliment paid a woman who had to keep the world intact. To the women fell the task of smuggling diamonds from Sierra Leone because the skilled man of affairs who insisted that the high officials of the customs office were in his back pocket had gotten himself deported out of the country. The women were the ones who kept the constituents of a member of Parliament from finally having it out with him. They were the ones who prepared their sons for the duel and who stiffened their backs, reminded them of the hidden defects and capricious ways of their fathers. And it was their responsibility, of course, to keep the daughters in line. It was but a short distance from the daughter’s conduct, after all, to the mother herself. Better grieve for a daughter than play havoc with the order of things. This is the way things were understood here.

    It happened among us that a woman of radiant strength had to “do something” about one of our daughters. The daughter’s indiscretions had become too much to bear.  The pompous and dangerous head of the household had signaled that his patience was running out. The sturdy woman would do the task that was hers to do. Dalal was taken to her father’s village for burial. The young woman, it was announced, had committed suicide. But it was commonly known that her mother had struck. It had about it an air of inevitability. Dalal had rejected all offers of help and punctured all the pretenses of her people’s code. She had taken a step into a world she could not understand, and she had not known where to draw the line. The evasions and the consolations of the old world, the world of her mother and her aunts, were denied her, but the new ways were not yet internalized by the young woman, who had just begun to see the world on the other side of the prohibitions.

    Dalal had been given the best of what a generation on the make thought their children should be given. Parents who toiled in Africa made possible boarding schools, a new prosperity, a new freedom, less encumbered and burdened by inherited ways of seeing and encountering things. The fears of the old world, the need to “walk by the wall” and to “kiss the hand that you cannot confront,” the fear of the unknown and of the alien, the need to placate and to conceal — from all this the young woman seemed released. The limits that had defined the world of her mother and her aunts had irretrievably collapsed, and with their collapse it was hard to distinguish the permissible from the impermissible.

    Dalal had ventured into the world on the other side of the divide; she was the first of her kin to venture beyond the line of the familiar sounds and customs. She developed a sudden and total disdain for the ways of her elders, for their tales, for their dire warnings. They, in turn, were unable to explain how the young woman should juggle the two worlds on the margins of which she had been placed. There came a time when she began to complain about the women from the village, the grandmothers and great-aunts who came visiting and who stayed at her home. She complained about their tattoos, about their wrinkled and toothless faces, about their prayers and the ablutions that preceded them. Above all, she complained of the smell that clung to the old women: she believed that they came with a special smell. And so she recoiled when they approached her and wanted to kiss her and wish her a life of honor and rectitude in the home of a decent God-fearing man. Yes, Dalal, if you go about doing what is asked of you, if you follow the straight path, if you remain untarnished and your reputation remains unblemished, happiness will come your way, and you will go from the home of your father to the home of your husband, an honored woman in whose reputation and whose conduct your father and brothers can take pride. No other man could humble your family by having his way with you. No ill-wisher could point to you whenever men and women sought to devour the reputations of others.

    A relative of Dalal prided herself on the fact that she had been the first to detect early signs of trouble. The world here came in very small ways and expressions. The unwashed relatives from the village noted that Dalal did not invite relatives and friends to join a meal in the way that such invitations should be extended. Dalal would only offer a single invitation. And when the guest insisted that he or she had just eaten, she always took them at their word and left them to eye the food. In the protocol of the villagers you had to extend endless invitations and drag the guest to the table. Then you watched the guests who had “just eaten” stuff themselves with abandon. But the sophisticated young woman who had broken with her world would not play the game.

    Nor would she willingly join, it was noted in retrospect, her mother and her mother’s friends and guests when she was called to do so. In those sessions, young women learned the ways of their elders and the ways of the world. When she  was forced to participate, Dalal was never fully there. She would not engage in the sonorous language and its clichés, she would not play along. When a visiting friend of her mother told her that Dalal and her son Shawki would make an ideal couple, Dalal had no qualms about saying that Shawki was a buffoon, that she had no interest in him whatsoever, that she would not be traded over coffee between two women from an obsolete generation.

    A strange kind of honesty made Dalal see the hypocrisies of her elders’ world. She began to view their deeds with new eyes, and gradually she began to judge. And because she did, she made her elders self-conscious. In her presence, her tough mother and aunts would at times squirm, and animated discussions would often come to an end whenever she walked in.

    But Dalal knew many things that they thought had eluded her. She tired of hearing pieties that were betrayed in daily practice. She had seen through the falsities of her elders. A few years before the trouble began, while still a young girl, Dalal had been used as an alibi for many indiscretions by the older women in her life. She recalled the record of each of the virtuous women who later came to lecture her about her own behavior. She laughed at the pretensions of the cuckolded husbands who knew perfectly well what was going on but preferred to look the other way.

    Dalal had seen her pretentious paternal uncle Abu Hassan pass himself off as a man of the world, proudly displaying his women, letting the word out that he had finally seduced the voluptuous Leila and beaten out the competition. She then set this alongside what she knew of Abu Hassan’s wife. Fair-skinned and vain, sure of her beauty and more sure of the prerogatives of her new money, Abu Hassan’s wife exercised her own options as well. Two or three young men were in the wings, and it was rumored that they were being kept and provided for by the lady herself. Abu Hassan, Dalal knew, was both a rooster and a cuckold. In his own code, of course, he was a hunter and victorious. And in the pronouncements of his wife, the lady was queen in her house, a virtuous woman, cleaner than the ways of the cynical city.

    Dalal’s angle of vision enabled her to see the whole thing. Thus, when the virtuous woman said that she had spotted Dalal coming out of one of the furnished apartments on Hamra Street, Dalal recited what she knew of the other woman’s comings and goings. When given a chance to deny what she had been charged with, Dalal refused. She declined to participate in the charade and the theater that was Lebanese honor. Early marriage suggested itself as a remedy. A man, it was believed, could rein in this kind of passion. Dalal would have her own home, shoulder new responsibilities, and the storm would blow over. She could then begin to make her own discreet trips to the tailor and offer the excuses and the evasions of other women of honor and responsibility. A smug official of her father’s generation was the man recruited to cap the volcano. Dalal’s mother insisted that the man was Dalal’s own choice, that it was an affair of the heart.

    A respectable dowry was given to the unlikely couple. That was what money made in Africa was supposed to do — schools for the boys, dowries for the girls. All prayed that the young woman’s story was over. The determined mother had pulled it off. Dalal had walked from the home of her father to the home of her husband.

    But the hopes turned out to be short-lived. As the young woman explained it, surely she deserved something other than what she got. The man in her life was a man of reasonable distinction. He had studied on his own and risen in the bureaucracy. But like her parents, Ali was a squatter in Beirut. He had about him the kind of clumsiness that Dalal’s generation was so fond of deriding and so quick to see in a man’s speech, in the kind of tie he wore, in the way he shook hands. Ali was doomed in the young woman’s eyes: he spoke the Shia dialect of the south. His French was not refined enough. His pronunciation amused the young woman who had learned French properly. That mighty badge of distinction, the French “r,” never tripped off his tongue the way it should have.

    This was a world of mimic men. A dominant culture from afar, its acquisition and its display, its word and its jokes, were what set people apart from one another, what gave some of them a claim to power and self-worth. French pronunciation gave away the origin of men and women, the “age” of money in a particular household: new money spoke French in one way, old money in quite another way. Boys who learned it under the husk tree — or was it the oak tree? — as Ali proudly proclaimed to have done, had no chance of passing themselves off as sophisticated men of a very demanding place.

    The young Tolstoy, who grew up in a culture that borrowed the trappings and the language of France for its court and its gentry and its salons, divided the social world into two principal categories: comme il faut and comme il ne faut pas. Tolstoy’s comme il faut consisted “first and foremost in having an excellent knowledge of the French tongue, especially pronunciation. Anyone who spoke French with a bad accent at once aroused my dislike. ‘Why do you try to talk like us when you do not know how?’ I mentally inquired with biting ironies.”

    Dalal’s husband was definitely comme il ne faut pas. He knew nothing of the ups and downs of the relationship between Jacques Charrier and Brigitte Bardot. He was not familiar with the songs of Charles Aznavour and Sacha Distel. He told what for his wife and her companions were dreadfully boring stories about his triumphs in the bureaucracy, how this or that political boss needed his help and his patronage, how he had clashed with the minister and how the cabinet member had backed down because of his own superior knowledge and judgment. And he endlessly recited the familiar tale of how he had come into Beirut a quarter-century ago, how he had studied by the light of a kerosene lamp, how he had been one of the very first Shia boys from his world to graduate from the Lebanese University, how vain city boys taunted him about his village and his past, about the idiom and the twang of the countryside.

    The man of position had achieved all he could have hoped to achieve. But none of it mattered to the irreverent young woman by his side. That kind of tale would have filled the heart of a woman a generation older than Dalal with great pride. A different woman — denied, or spared, the world that Dalal had now seen — would have viewed his triumph as hers, and that of her kin. But this was not the kind of man to cut an acceptable figure in the mind of a young woman who had grown up on a diet of French novels and films, who was courted by young men who had nothing to do other than sharpen their skills for the hunt: those effeminate young men with shirts unbuttoned to their navels, those dandies with their gold chains, with their melodramatic and insinuating puffs on their Gitanes, with new cars purchased by fathers who tackled hell in remote places, were surely no match for the sturdy qualities of Ali, with his yellow socks and bad French.

    A real man, the sober official insisted, should not be compared to such flimsy material. But this flimsy material was the new world, the world to which his treasured young wife belonged. Ali could not take the chic young woman back to where he and her father had come from, to the village where women still dried and saved cow-dung for fuel, where children used the bones of dead animals for toys. How was he to communicate his world, and its wounds and its limits, to someone who had not known it? How was he to tell Dalal of his cruel and terrifying father, who humiliated him at every turn, and of the schemes of his stepmother, and of the distance he had to cover, forever on the run, unable to take anything for granted or to believe that he had anything to fall back upon? His family had thrown him into a mighty storm, and he had been denied even the possibility of a graceful, quiet failure.

    As the young woman picked at the filet mignon that was delivered to her doorstep, he very much wanted to tell her, while knowing full well how much of a bore he would be, of the white bundle that used to come to him while away at school, of the few scrawny potatoes in it, of the endless diet of lentils, of the few thin loaves of peasant bread. Child, Ali wanted to scream, and he often did, where have you been and what have you seen? You were spared such terrors and such needs. Ali’s generation had ploughed and had sown, and Dalal was the harvest. Ali’s generation, the generation of Dalal’s father, had never bothered to inquire as to the ends of such striving and such toil. With a hellish world to their backs, they had kept on the run. And now the journey had culminated in signé shirts and blouses, in spoiled daughters and sons, in endless trips to reputable tailors, in dining rooms whose décor was declared obsolete soon after it had been lavishly purchased and proudly displayed.

    The net that entangled women older than Dalal failed to entangle her. She was too far gone to submit and to accept. Hard as her husband and her mother would try to keep her within the boundaries, the young woman had become brazenly independent. She put very little if any effort and time into covering her tracks. The furious beatings administered by her mother and her husband were to no avail. On the morning after, she would plunge into it again, and ill-wishers would report her latest escapades. She was seen going into and coming out of this or that building, she had succumbed to the blandishments of yet another dandy who would proudly report his latest conquest. In the carefree city that outsiders loved so much for its freedom and its joie de vivre, the men and the women who lived there were suffocated and hemmed in by so many curious, watchful eyes. Even the trees had eyes here, wrote the sensitive novelist Hanan As-Shaykh.

    The gossips had seen it coming. The coroner’s and police reports about the terrifying day were met with the usual derision: the verdict of suicide, it was said, was secured by the payment of a large bribe. An ivory tusk, an expensive one of which Dalal’s family was proud, had exchanged hands and now adorned the coroner’s living room. The officials were men of this society, after all: they knew their world and what it drove men and women to do.

    When Dalal’s body was taken to her father’s village, her father and her husband were on hand to receive the condolences of those willing to treat it as a case of suicide. But the day belonged to her mother, the tower of strength, the victim and the killer, sure in her grief that she had done what she did for the sake of her other daughters, of her sons, of her home. The mother wailed, disheveled her hair, tore at her own clothes. She lined up all of Dalal’s shoes, all those elegant shoes that the young woman had bought with the new money, and she spoke to them, it was said, about the young woman who had departed at so tender an age. She wanted Dalal, her Dalal, back. The fancy shoes and the primitive code of honor: this country played them side by side.

    A new and intense piety overtook Dalal’s mother after the deed was done. A few years younger than my own mother, more exposed to the ways of the modern world, she would from then on accompany my mother to the holy shrine of Zaynab, the daughter of Imam Ali, in Damascus. When unable to do so, she would give my mother money and food to give to the poor who gather around the shrine, and to the keepers of the shrine. The “secret” was shared between the two women on one of those journeys to Damascus. My mother was of two minds. She abhorred the deed, but she respected the mighty woman and she knew what pressures and expectations had led her to do what she had done. Dalal, my mother said in defense of the woman, was a “piece of her mother’s liver”: nothing could be more precious than one’s own child. But for some time Dalal’s mother had been walking on eggshells. Dalal’s father, now a prosperous man, had become restless, and there was a danger that he would go beyond the common indiscretions, that one day a clever young huntress would lure him away from his family. A fallen daughter would serve as a convincing pretext and the honorable man would be released in his own eyes from a home that had disgraced him. Sadness and grief, my mother believed, were better than disgrace. Dalal’s mother had done a terrible duty, but decency required that those quick to judge should hold their tongues. Love, even maternal love, was a luxury here. It was given when it could be afforded, when men and women were not up against the wall, when others were not busy clawing away at their reputations, threatening them with exposure and shame, leading them into ditches where even “pieces of their liver” had to be inexorably removed.

    After the deed was done, Dalal’s mother was never as commanding as she was before, her face never quite as bright. She no longer sounded sure of herself. The tough woman who had survived hellish years in Africa, who had single-handedly built a fortune after her husband was deported out of Ghana, who had put aside enough money to aid her father and a pretentious brother who could never make ends meet, who was generous to the multitude of relatives and of stray men and women who walked to her door with a hard luck story, was transformed overnight.

    The letters that I wrote for her to friends and relatives in Africa, which previously had to be read back to her over and over again and repeatedly corrected before they met with her approval, became perfunctory. She trusted the writing, she said, there was no need to read them back. The tales she told in the letters to relatives and friends were no longer crisp and chatty. They had about them a matter-of-fact quality. One letter that she drafted to the overbearing husband, who was always in and out of the country, reported that all was well in the family, that she would see to it that all was well. This letter, and this section in particular, was read to her over and over again at her request. She wanted some hidden meaning to be transmitted, some knife perhaps to jump out of the pages, some sense of the cataclysmic deed that she had done — a reminder to the honorable man that it was she who had to keep the world intact, that he would never quite understand her sacrifice and her anguish.

    But the lines penned by the letter-writer fell short of what she wanted. Arabic, the language of cruel innuendo and hidden meanings and intricate alleyways, failed Dalal’s mother on that day. And with uncharacteristic sharpness, she told the attentive scribe that his style had deserted him, that he should be sure to plan a future that excluded a writing career. Yet she wanted the young man drafting the letters to stick around that day. She could bear no solitude. More than that: the drafter of the letters had been a friend of Dalal. The two of them had exchanged jibes and put-downs, they had tested one another about the latest fads, about books and movies. Dalal had insisted that the Arabic letters at which the young man excelled, which had brought him not only spending money but also access to the secrets of so many families here, gave him away as a product of the old culture, that the formal structure of the letters, the frequent invocation of Allah’s name and blessing and praise, confirmed the old mentality.

    Dalal and her friend who was good at Arabic letters had shared what they had shared. It was enough for Dalal’s mother that the young man stuck around that day. They both knew when they were speaking of the dead. They both knew the hidden language of lament and yearning. The mother very much wanted her daughter’s friend to know, without uttering a word about the entire matter, that it had all been a tragic act of last resort, that nothing else could have been done, that a mother’s grief exceeds the imagination of the closest, the kindest, the most outraged of friends.

    My family’s home was in the village of Arnoun, in the district of Nabatiyya. I was the son of Ali Ajami of Arnoun and Bahija Abdullah of Khiam, whose marriage soon ended in divorce. My mother had come to this ill-starred village when she married my father. She had come from a large clan, the Abdallahs, from Khiam, a town in the valley of Marju’un. Khiam was not far away. The distance could not have been more than fifteen kilometers. But that was far enough to make it seem like a distant land. Khiam was a place where children played next to running streams and women had time to tell exquisite and drawn-out tales. The men working Khiam’s fields retired to places in the shade; the exuberant women passing by gave more taunting and playful remarks than they received.

    Arnoun, at the foot of Beaufort, the Crusader castle, was a different kind of place, harsh and forbidding, surrounded by granite cliffs. There was to the place the feel of living in a quarry. Here the banter was less kind, and the men more sullen and brittle. The women struggling uphill to the wells, with jars on their heads, had little time or energy for chatter. There was a pond in Arnoun at the entrance of the village, near the grey mausoleum where my grandmother’s parents were buried, but the pond that drew in the rain always dried up in no time. It was by its cracked, wrinkled surface that I knew the pond.

    Hyenas stalked the place. But as they said in Arnoun, the sons of Adam were more frightening than the hyenas. At the edge of the village, beyond its scattered patches of tobacco, its few fig trees, was the wa’ar, the wilderness — rocky, thorny land without vegetation. The wa’ar was more than a place beyond the village. It was a point beyond censure. It was from the wa’ar, the wild heath beyond the village, that the hyenas turned up. The dreaded creature, it was said, could cast a spell on its victims. Stories were told of infants taken to the wa’ar who were never brought back. Daughters who dishonored their families were taken to the wa’ar. In the legend of the wa’ar, there was a rock where a shepherd had killed his sister who had gone astray. He had taken her there, slit her throat, and left her to die. For many years afterward Arnounis still swore they could identify the rock where the shepherd had done horror’s work.

    My beloved mother, I know of the hellish years that you spent in my father’s village, of the backbreaking toil. It hurts me to know what it was like, to think of how much you endured. I know that you spent a good deal of your life without a man’s protection and a man’s labor and a man’s support. I remember that I am a stepson, that my mother is not there to defend me against a heartless father. I know the tales of hurt you want me to remember. I live amid the tales. But all I want is for the tales to release me from their grip. If this is infidelity, so be it. I want to be your son, I shall always be so. But I do not wish to appropriate your sorrow and your defeat and make them mine. Surely in your galaxy of imams and their sayings, in your endless supply of parables and proverbs, there must exist the possibility of a life lived without man being hunter or prey.

    The Conservatives and The Court

    Earl Warren’s retirement in June 1969 ended his run as Chief Justice of the most progressive Supreme Court in American history. Richard Nixon appointed Warren Burger to replace Warren, and Republican presidents selected the next five Justices over the seventeen years that Burger presided as Chief Justice. And yet the Burger Court, while tacking a bit to the right, continued to embrace activist interpretive methodologies and to issue progressive decisions. The most famous example, but a typical one, was its decision in Roe v. Wade in 1973. There the Court discerned in the Fourteenth Amendment’s due process clause a “right to privacy” — a right that appears nowhere in that clause — that gave a pregnant woman the prerogative to abort a fetus until viability. The opinion was written by Harry Blackmun, a Nixon appointee, and joined by Burger and another Nixon appointee, Lewis Powell. In 1983 the title of a book by Vincent Blasi, a professor at Columbia Law School, summed up the state of affairs at the time: The Burger Court: The Counter-Revolution That Wasn’t.

    When I entered Yale Law School in the fall of 1986, the conservative legal movement born in reaction to the Warren and Burger Courts’ makeover of American life was in its infancy. In mid-September, the Senate confirmed William Rehnquist, a hard-conservative voice on the Court since 1972, to replace Burger as Chief Justice. That same day it voted 98-0 for Antonin Scalia to replace Rehnquist as an Associate Justice. Scalia was little known outside conservative circles, but he was famous in them for his attacks on jurists who departed from the text of statutes and the Constitution when interpreting them. The Federalist Society, the now-dominant conservative legal organization, had been founded a few years earlier but was still a fledgling force. Conservative ideas were not taken seriously in law schools or the legal culture at the time. Robert Bork, who had left Yale five years earlier, observed that his colleagues found his conservative text-based approach to constitutional interpretation “so passé that it would be intellectually stultifying to debate it.” 

    After Reagan nominated Scalia, Republican presidents chose seven of the next eleven Justices on the Court that is now headed by a George W. Bush nominee, Chief Justice John Roberts. Three of those Justices, Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett, were chosen by Donald Trump. And yet despite the fact that Republican presidents have appointed fifteen of nineteen Justices since Warren, and despite undoubted successes, many conservatives are still waiting for the counterrevolution. Roe has not been overruled. The Court has recently recognized new constitutional protections for gay rights, including a right to gay marriage. Affirmative action, another constitutional solecism for conservatives, still lives. And in June 2020, in a case called Bostock v. Clayton County, the Court, in an opinion by Gorsuch, ruled that the ban on “sex” discrimination in employment in the Civil Rights Act of 1964 made it unlawful to fire an individual merely for being homosexual or transgender. 

    Gorsuch reached this conclusion in reliance on “textualism” — the method of statutory interpretation championed by Scalia, and for decades a rallying cry of the legal right alongside originalism. Many conservatives were shocked that a Trump appointee invoked Scalia’s method to recognize categories of discrimination to which conservatives have long sought to deny legal recognition. It was especially shocking since textualism seemed to serve the very judicial activism in the recognition of novel rights that it was designed to foreclose. Bostock represents “the end of the conservative legal movement, or the conservative legal project, as we know it,” said Senator Josh Hawley, a Yale-trained lawyer and former Supreme Court litigator for conservative social causes, in a fiery speech on the floor of the Senate.

    Hawley was exaggerating for political effect. On issues other than the social conservative ones such as abortion and gay rights that he cares most about, the movement has been hugely successful in changing the legal culture and the composition of the federal judiciary, and in moving public law sharply to the right. And that was before Trump replaced the very liberal Ruth Bader Ginsburg with the youthful and very conservative Barrett, four months after Hawley spoke. The Court’s conservative majority is now larger, younger, and more conservative than it has been in a century, and maybe ever. And yet it remains unclear whether the Court will transform American life as the conservative legal movement hopes, and as progressives dread. 

    The conservative legal movement developed two methodological responses to the perceived excesses of the Warren and Burger Courts. Both purported to be value-neutral mechanisms that were designed to restrain judges.

    The main target of conservative legal jurisprudence was progressive interpretations of the Constitution. The Warren Court (1953-1969) recognized a right to marital privacy, including the right to use contraceptives, in the “penumbras” of the Bill of Rights; up-ended the settled understandings of the Fourth, Fifth, and Sixth Amendments to foster a defendant-friendly revolution in criminal procedure; issued many progressive rulings on race, most notably Brown v. Board of Education; practically eliminated prayer in school; and dramatically reorganized redistricting and apportionment rules governing elections under the guise, mainly, of equal protection of the law. The Burger Court (1969-1986) continued the progressive trend. It decided Roe, temporarily invalidated the death penalty, blessed affirmative action in education, and practically eliminated structural constitutional limits on congressional power. 

    Laurence Tribe of Harvard Law School, a progressive icon, captured conventional wisdom in the academy when he justified these and similar decisions on the ground that the Court’s job in constitutional interpretation is to discern “the contemporary content of freedom, fairness, and fraternity.” As Justice William Brennan, an intellectual leader of the Warren Court, explained, “The genius of the Constitution rests not in any static meaning it might have had in a world that is dead and gone, but in the adaptability of its great principles to cope with current problems and current needs.” The problem with these views, conservatives maintained, was that they had “almost nothing to do with the Constitution and [were] simply a cover for the Supreme Court’s enactment of the political agenda of the American left,” as Lino Graglia of the University of Texas put it. 

    Originalism was the right’s response. It maintained that Justices should aim to discern the original meaning of provisions of the Constitution (including the amendments) at the time they were adopted. Ideas akin to originalism had informed judicial theory and practice since the founding of the nation, but “originalism” became the organizing term and principle of conservative constitutional interpretation in the 1980s — due primarily to a series of speeches by Attorney General Edwin Meese that drew national news coverage and responses from two sitting Supreme Court Justices; to Scalia’s powerful writings on and off the Court; and to the left’s disparagement of originalism during Bork’s failed confirmation for a slot on the Supreme Court in 1987. 

    The basic argument for originalism was that the Constitution is a form of law that should be interpreted consistent with its fixed meaning when ratified. Any departure from that fixed meaning is an illegitimate and unconstitutional arrogation of power by the unelected judiciary. “The truth is that the judge who looks outside the Constitution always looks inside himself and nowhere else,” Bork maintained. Originalism, conservatives argued, promoted democratic decision-making by giving priority to the decisions of the polity that ratified the Constitution rather than the preferences of unelected judges. The theory also purported to ensure decisions “would not be tainted by ideological predilection,” as Meese put it, by restraining judges to application of neutral principles traceable to the Constitution itself. Originalism thus rested on two types of argument: a positivist claim about what counted as constitutional law, and a pragmatic institutional claim about securing judicial restraint. 

    The political and academic left subjected originalism to withering criticism because of its supposedly retrograde implications (which contributed to the sinking of the Bork nomination), and because originalism in its early guise was analytically deficient in a number of ways. Even Scalia acknowledged that originalism is “not without warts,” and he justified it partly on pragmatic grounds as a “lesser evil” to progressive constitutional interpretation. 

    But originalist judges and scholars developed more sophisticated and defensible accounts in response to the critics. And over the succeeding decades, as the number of conservative judges and scholars committed to the method grew, it became influential in constitutional interpretation. The method has many important variations, and it is not universally applied even by conservative judges. Yet there is no doubt that constitutional interpretation across the run of cases now focuses more on constitutional text and original meaning than it did during the Warren and Burger Courts. And in political debate, confirmation hearings, and the legal culture generally, originalism has had an even bigger impact.

    Originalism rose to legitimacy for many reasons. It appealed to ordinary intuitions about what lawyers are supposed to do. The widespread academic attacks on it gave it an implicit legitimation. Progressive scholars failed to generate an equally compelling and accessible justification for their preferred constitutional method, which is often called “living constitutionalism.” Scalia’s brilliantly crafted and forceful originalist opinions often won the argument even when he was in dissent. And a massive conservative juggernaut (about which more in a moment) successfully promoted the doctrine. 

    Perhaps the best evidence of originalism’s influence is its imitation by progressive scholars. Akhil Amar of Yale Law School deploys ingenious readings of constitutional text and structure, deeply informed by history, to reach a range of contrarian progressive conclusions about the Constitution, especially the Bill of Rights. Jack Balkin, also of Yale, is even closer to conservative originalism in relying on the original meaning, but he does so at a much higher level of abstraction that allows him to generate progressive interpretations. More generally, courts and scholars across the board now take constitutional history, and especially the history of the adoption of the Constitution and its subsequent amendments, much more seriously than before originalism’s ascendance. Originalism has not won over the courts in all constitutional cases — no legal or interpretive methodology has done that. But today it is a legitimate, widely practiced, and growing form of legal argumentation, a remarkable accomplishment since the 1980s.

    The second conservative focus was the Warren and Burger Courts’ progressive approach to interpreting statutes. This approach tended to de-emphasize the text of the statutes and to be guided instead by Congress’ aims in enacting the statute, as discerned, for example, in legislative reports, hearings or floor statements, and other forms of “legislative history.” The departures from statutory text almost always served progressive ends. 

    A classic instance came in 1979 in a case called United Steelworkers of America v. Weber, which ruled that affirmative action to favor black employees did not violate the Civil Rights Act’s ban on employment discrimination based on “race.” The Court rejected a “literal construction” of the statute because a ban on affirmative action was “not within its spirit, nor within the intention of its makers,” which was to promote employment among blacks. 

    Textualism, a cousin of originalism, was a response to cases such as Weber. Conservatives — most notably Scalia — argued that the singular “spirit” of a statute was practically impossible to discern, and that often-tendentious legislative reports written by staffers and speeches by individual members of Congress were not reliable guides to such intention. The role of the judge is to interpret the text of the statute — the only words subject to the constitutionally prescribed lawmaking process of bicameral approval and presentment to the president. Except in rare instances, judges who went beyond the text were usurping legislative authority. 

    This approach sought to ensure judicial restraint and promote democratic decision-making for reasons akin to originalism: by constricting the legitimate sources of interpretive meaning, it curtailed judges’ discretion to import their own values into the statute that Congress enacted, and helped to ensure that the people’s representatives, not unelected judges, made the law. And like originalism, this theory purported to be neutral about ends. The stated aim was not for judges to achieve particular conservative outcomes, but rather to follow the dictates of Congress in whatever direction that led. 

    Textualism, like originalism, has been subject to a fierce academic debate during the past few decades. In courts, it has proven even more consequential than originalism. “Scalia’s textualist campaign was tremendously influential,” noted Jonathan Siegel of George Washington University Law School. “He changed the way courts interpret statutes,” and his influence “is visible in virtually every Supreme Court opinion interpreting statutes today.” The Bostock decision about sexual orientation and transgender rights was basically a fight over the meaning of Scalia’s undoubtedly victorious textualist legacy. Not every jot and tittle of Scalia’s textualism governs in every Supreme Court statutory decision, but the Court’s approach to statutes now always begins and often ends with statutory text. Few if any methodological victories in the Court have ever been so complete.

    As this sketch makes plain, no one is more responsible for the rise of conservative legal thought than Antonin Scalia. His “interpretive theories, communicated in that distinctive, vivid prose, have transformed this country’s legal culture, the very ground of our legal debate,” as Justice Elena Kagan noted in an introductory essay in a volume about Scalia’s legal thought. “They have changed the way all of us (even those who part ways with him at one point or another) think and talk about the law.” And yet the Scalia revolution, as the modern conservative legal movement could aptly be called, did not take place in a vacuum. It was the fruit of a larger political movement that began meekly in the Nixon administration and then caught fire in the Reagan administration. The movement and its associated network had many nodes, but at its center was the Federalist Society. 

    The Federalist Society began as a response to the ideological one-sidedness of American law schools like the one I encountered at Yale in the 1980s. (The University of Chicago Law School was an exception to this one-sidedness; it had numerous prominent conservatives on its faculty in the 1980s, including Scalia, who helped the Federalist Society get off the ground.) In 1982, law students at Yale and Chicago convened a conference at Yale as a one-off counterpoint to “law schools and the legal profession [that] are currently strongly dominated by a form of orthodox liberal ideology.”

    The conference was a wild success, and demonstrated the appeal of a forum for conservatives to discuss their legal ideas. The students quickly organized, got funding from conservative donors, and began to open chapters in law schools and (for practicing lawyers) in cities around the nation. “Conservative law students alienated in their home institutions, desperate for a collective identity, and eager for collective activity provided a ripe opportunity for organizational entrepreneurship,” the political scientist Steven Teles remarks in his important study of the movement. Almost by accident, they tapped into and helped organize a larger conservative political demand for changes to the federal judiciary.

    The Federalist Society was and remains, at heart, a debating club. (I was briefly a member in the 1990s, and I informally supervise the local chapter at Harvard Law School.) Its founders believed that the best way to develop and spread conservative ideas was to host intellectual exchanges between conservatives and progressives. The emphasis on argument exemplified the intellectual seriousness of the group, and its confidence that the best way to legitimate its ideas was to see how they stood up to the ones that prevailed in the classroom and the bar. It also, as Teles aptly describes it, “made the organization open and attractive to outsiders, moderated factional conflict and insularity, and had a tendency to prevent the members’ ideas from becoming stale from a lack of challenge.” The main factional conflict then and now — and one that has flared up in recent years — is between the deregulatory libertarian wing that was most interested in judicial efforts to reduce the size of government and the social conservative wing that abhorred (and sought to stop and to reverse) judicial recognition of progressive rights such as abortion. 

    Yet the Federalist Society evolved into much more than a debating society. It quickly became a focal point for conservative networking for political appointments in the federal government and for clerkships in the federal judiciary. Conservative students thronged to its popular annual convention in Washington, D.C. to watch marquee debates and rub elbows with icons in the movement that the Federalist Society helped to form — Supreme Court Justices, prominent lower court judges, Attorneys General and other Cabinet Secretaries, Senators, and other famous lawyers. It is hard to think of a more important annual conservative gathering, except perhaps the Conservative Political Action Conference meeting. 

    Through these and related mechanisms the Federalist Society flourished in its influence — especially as its student members grew up and began to populate the federal bench through appointments in the George W. Bush administration and especially the Trump administration. It also grew in its attractiveness to young conservatives, especially as a mechanism to advance one’s career. There is no formal pipeline between membership in the Federalist Society and law clerk jobs or executive branch appointments. But membership signals a commitment to conservative legal principles to potential conservative employers and opens many informal channels to them. 

    Despite its prodigious impact on conservative networking, the Federalist Society has sought to maintain neutrality on legal issues and judicial politics. It accurately claims that it does “not lobby  for legislation, take policy positions, or sponsor or endorse nominees and candidates for public service.” Its only formal principles are the ones it announces at the outset of every gathering: “that the state exists to preserve freedom, that the separation of governmental powers is central to our Constitution, and that it is emphatically the province and duty of the judiciary to say what the law is, not what it should be.” 

    And yet its careful efforts at broad-mindedness and political detachment have not stopped the Federalist Society from growing more political over the years. At its annual conventions the organization has increasingly shown off its connections to, and influence over, the legal decisions of Republican administrations. And while it has always had senior Republican officials speak at its conferences, these speeches have grown to be less about judicial politics and more about just politics. In a self-consciously partisan speech at the annual convention in 2019, for example, Attorney General William Barr was interrupted with extended applause after he claimed that “the Left” is “engaged in a systematic shredding of norms and undermining the rule of law.” He added that “so-called progressives treat politics as their religion,” are on a “holy mission,” and are “willing to use any means necessary to gain momentary advantage in achieving their end, regardless of collateral consequences and the systemic implications.”

    The Federalist Society has also grown intellectually narrower and more homogenous. When I began teaching a quarter of a century ago, many conservative legal theories competed for supremacy among Federalist Society members. But in the last decade especially, originalism and textualism have risen to become the society’s (and the larger movement’s) orthodoxy. “Tonight I can report, a person can be both a committed originalist and textualist and be confirmed to the Supreme Court of the United States,” said Justice Gorsuch, seven months after he joined the Court, at the Federalist Society’s annual convention. Gorsuch received wild applause for this statement, which everyone in the room understood to be the core of what the Federalist Society is about. He also mocked the Federalist Society’s critics, thanked the crowd for its “support and prayers through that process,” and vowed to maintain its principles on the Court. Politico described the surprisingly political speech as a “victory lap at the Federalist Society dinner.”

    Gorsuch’s pledge of fealty underscored the Federalist Society’s astounding impact on the federal bench during the Trump presidency. “We’re going to have great judges, conservative, all picked by the Federalist Society,” said Trump in 2016. He followed through on that promise by turning over judicial selection to White House Counsel Donald McGahn, a committed Federalist Society member, and Leonard Leo, who for decades served in senior positions in the Society and who remains on its board. Leo took a leave of absence during the George W. Bush administration to help with judges (and was influential in the selection of John Roberts and Samuel Alito), and then did the same during the Trump administration, where he has had an even bigger impact. 

    The Federalist Society accurately maintains that Leo did this work in his personal capacity. But he was the public face of the Society even if he was formally disconnected from it when he was working for the White House, and he drew on his deep relationships with its members in that process. After Leo introduced Vice President Pence as “one of us” at a Federalist Society event in 2019, Pence at the outset of his speech stated that the Trump administration and the nation owe Leo — whom Pence identified as “the Vice-President of the Federalist Society” — a “debt of gratitude” for his “tireless work,” a reference to Leo’s judge-picking.

    Leo is merely exemplary of the deep and multifarious conduits between the Federalist Society and the Trump White House. The organization is so constitutive of the conservative legal movement, and has such a strenuous grip on its imagination, that it would have been enormously influential in Trump’s judicial selection even if Leo had not been there. And its influence has been historic. In one of the defining accomplishments of his presidency, Trump placed Gorsuch, Kavanaugh, and Barrett on the Supreme Court, and over two hundred judges on the lower courts. The vast majority of these judges are proud long-time members of the Federalist Society who had been nurtured by it and absorbed its values over the course of their careers. These judges are on the whole immensely well-credentialed and qualified — a tribute (among other things) to decades of Federalist Society-facilitated clerkships on the increasingly conservative Federalist Society-influenced Supreme Court.

    This success has invited controversy and pushback consonant with the high-stakes battle for control of an unelected judiciary that has steadily expanded its policy-making writ for a century. The Federalist Society that then-Harvard Law School Dean Elena Kagan said she “love[d]” in 2005 for its commitment to debate and its contributions to intellectual diversity is now widely despised on law school campuses and on the political left more generally. Federalist Society members at many law schools are today often shunned or put down in strident personal terms, inside and especially outside the classroom. They have gotten the message and speak up less often in class than a decade ago on issues such as affirmative action, gay and transgender rights, immigration, and criminal justice. With rare exceptions, top law schools have throughout my lifetime lacked intellectual diversity on a left-right axis in public law, but the attacks there on disfavored conservative positions have never been more open or vicious. The main impact of these attacks is to make law schools even less interesting intellectually, and to drive conservative students deeper into the Federalist Society cocoon.

    Outside of law school, the Federalist Society is often subject to stinging political reproach. Typical is a report in May 2020 by three Democratic Senators that described the Federalist Society as “the nerve center for a complex and massively funded GOP apparatus designed to rewrite the law to suit the narrow-minded political orthodoxy of the Federalist Society’s backers.” The Federalist Society is no more narrow-minded or political than the dominant legal establishment institutions it was created to challenge. If anything, it is less so, since it continues to operate more thoroughly in the world of ideas and argumentation than its rivals. But it is a political organization, and not just the debating society it holds itself out to be. This is so by default if not by design, since it is the intellectual nerve center of the enormously consequential fight for judicial dominance.

    In January 2020, the Judicial Conference of the United States’ Codes of Conduct Committee circulated a proposal to ban federal judges from being members of the Federalist Society or the American Constitution Society (ACS), the progressive organization founded in 2000 as “an explicit counterpart, and counterweight” to the Federalist Society. ACS never achieved nearly the influence of the Federalist Society, and the proposal clearly sought to hurt the latter. The ostensible reason for the proposed ban was that membership in these groups raises questions about judges’ impartiality. The Committee’s true aim was revealed when it declined to propose a ban on membership in the American Bar Association, a group that, unlike the Federalist Society, is heavily involved in legal advocacy — primarily for progressive causes. The proposal was dropped a few months later. But it, like the Senators’ report, is a sign of the Federalist Society’s enormous political success.

    The conservative legal movement’s original aim was to separate legal interpretation from personal values in the hope of quelling judicial activism. The rising influence of originalism and textualism, many on the right believe, accomplishes this. On this view, Gorsuch’s deployment of textualism to reach a progressive result in Bostock is evidence of success, not failure, since it shows that the methodology is value-neutral enough to produce outcomes contrary to a judge’s personal wishes. The same is true, for example, of Justice Scalia’s occasional originalist opinions that expand criminal defendant rights, and of Justice Thomas’ attacks on qualified immunity — a bête noire in progressive circles for shielding bad cops from liability — as lacking any basis in Congress’ textual commands. To many conservatives these examples illustrate the integrity of their principles. One rarely sees progressive Justices deploying their favored methodology to reach politically conservative results — especially since most lack constraining methodological commitments.

    But despite the packaging, conservative methodologies are not value neutral, and they have not always been deployed consistently or in value-neutral ways. Originalism as understood by most conservatives is oriented toward constitutional meaning in 1789 and the post-Civil War period (when the Reconstruction Amendments passed), and away from the progressive gloss put on constitutional provisions from the 1930s through the 1970s. The politically liberal results produced by originalism are the exception, not the rule. Bostock is also an exceptional instance of textualism, which on the whole leads to politically conservative results. One of many examples is the Court’s reversal of its prior tolerance of plaintiffs suing for relief under federal law absent explicit congressional authorization — a change that has dramatically curbed the scope of federal rights.

    The rise of originalism and textualism is one reason why the recognition of new constitutional and statutory rights has slowed in recent decades (the Court’s recognition of a robust Second Amendment right to bear arms is an exception), and why American public law generally has moved sharply to the right. On issues ranging from voting rights to structural federalism to free speech and religion to many issues of court access, the Court has curtailed or reversed Warren and Burger Court precedents, and not always through close adherence to originalism or textualism. The Court has also grown aggressively pro-business across a wide range of issues in ways that are often disconnected from judicial philosophy. 

    As conservatives’ power on the Court has grown, judicial restraint — the original justification for originalism and textualism — has diminished. Many conservatives now abjure the deference to democratic enactments that was once the hallmark of conservative legal philosophy, and argue for a more assertive stance to strike down modern state and federal laws based on distant understandings of constitutional meaning. They are also more inclined to reject progressive precedents that conflict with the originalist Constitution. Justice Thomas is a leading proponent of this view on the Supreme Court. As he explained in 2019 in Gamble v. United States: “When faced with a demonstrably erroneous precedent, my rule is simple: We should not follow it.” 

    In many contexts, however, conservative disrespect for precedent is not based on a return to original meaning. A good example is the conservative turn on the First Amendment. In 1971, Bork stated the traditional conservative position in a famous article that argued that the First Amendment should be narrowly construed to protect only political speech. When the Court, in 1976, recognized First Amendment protections for “commercial speech,” Rehnquist was the lone dissenter. Yet in recent decades conservatives have embraced the view that Bork and Rehnquist rejected. They have repurposed the First Amendment as a libertarian sword to strike down all manner of disfavored laws, ranging from business regulations to campaign finance restrictions.

    An extraordinary decision in this vein came in 2018, when the conservative majority overruled a four-decade-old precedent to rule that the First Amendment prohibited the state from forcing public sector workers to pay for union activity when they did not join the union — a long-standard labor practice. The majority, in an opinion by Justice Samuel Alito, barely glanced at the original understanding of the First Amendment. Justice Kagan in dissent charged it with “weaponizing the First Amendment, in a way that unleashes judges, now and in the future, to intervene in economic and regulatory policy.” It was a fair critique.

    But the main targets of conservative libertarian activism are the federal agencies that, with little concrete guidance from Congress, control policymaking in the United States. “The greatest threat to the rule of law in our modern society is the ever-expanding regulatory state, and the most effective bulwark against that threat is a strong judiciary,” Donald McGahn, the Trump White House judge-picker, told the Federalist Society in 2017. Conservative scholars and judges have in the last decade developed new arguments for achieving this end, including imaginative uses of the First Amendment. But none is more remarkable, or revealing, than their flip on an obscure but consequential doctrine about judicial deference to agency rulemaking. 

    At the dawn of the movement, in the 1980s, the then-very-progressive District of Columbia Circuit — the federal appellate court charged with reviewing most agency decisions — regularly invalidated Ronald Reagan’s deregulatory efforts. As a law professor, Scalia had criticized the D.C. Circuit for imposing its values on agencies in defiance of what Congress had prescribed. During his tenure on the D.C. Circuit from 1982 to 1986, Scalia witnessed this trend up close, viewed it as illegitimate, and deployed several tools to fight it. 

    The main one he settled on was the Chevron doctrine, which took its name from a Supreme Court case in 1984 about the scope of the Environmental Protection Agency’s regulatory power over air pollution. Scalia was not on the Court when that case was decided, and the case was not a big deal when it was announced. But when Scalia joined the Court in 1986, he became its main intellectual champion and began to develop and deploy the Chevron doctrine aggressively. 

    The Chevron doctrine requires courts to accept reasonable agency interpretations of statutes that they are charged with administering. It makes it harder to second-guess agency rules — progressive or conservative — except in cases where they defy clear statutory directives. Scalia argued that this deference comported with Congress’ wishes, acknowledged agency expertise, constrained judges, and promoted accountable decision-making, since agencies were part of an executive branch headed by an elected official, the president, while courts were unelected. The doctrine also dovetailed with conservatives’ infatuation with executive power in the 1980s. (Before then conservatives for six decades had been skeptics of broad executive power, but that is another story.) During Scalia’s time on the Court, the Chevron doctrine became “a central pillar of the modern administrative state,” as Michael McConnell of Stanford Law School has observed.

    But then something unexpected happened. About a decade ago, the conservative legal movement started to flip on Chevron and related doctrines of administrative deference. Several factors led to the flip. The conservative view of Chevron had, remarkably, discounted a statutory requirement that courts reviewing agency decisions “decide all relevant questions of law [and] interpret constitutional and statutory provisions,” which some argued — the point remains contested — rules out deference to agencies on many legal questions. Administrative agencies began to use the cover of Chevron deference to make administrative rules that to conservatives seemed to depart more and more from their authorizing statutes.

    It was no accident that the conservative turn picked up steam during the Obama administration, which promulgated legally super-aggressive regulations such as net neutrality, the Clean Power Plan (an ambitious environmental initiative), university sexual assault rules, and the implementation rules for Obamacare and Dodd-Frank. For conservatives encountering such rules that seemed to rest on doubtful congressional premises, agency deference seemed lawless. And so they reversed course. Scalia appeared to be backing away from the doctrine at the end of his life. And most younger conservative jurists — including Gorsuch, Kavanaugh, and many conservative legal scholars — are deeply skeptical of Chevron. Court watchers predict that the Supreme Court will overturn or weaken Chevron in the next few years.

    For many religious conservatives, the conservative legal movement’s extraordinary accomplishments are belied by the movement’s failure to reverse Roe, to prevent the rise of constitutional and statutory gay and transgender rights, and to give sure protection to religious freedom in the face of these judicially developed rights. This was the thrust of Senator Hawley’s complaint after Bostock. Social conservatives, he argued, had for decades gone along with the Republican Party’s neo-liberal agenda on trade and taxes in exchange for the promise of “pro-Constitution, religious liberty judges” — a shorthand for judges who will vote the right way on religious social issues. 

    And yet since the Reagan administration, religious conservatives have watched as Republican appointees refused to embrace the social conservative agenda. Two Reagan appointees, Sandra Day O’Connor and Anthony Kennedy, and one George H.W. Bush appointee, David Souter, refused to overturn Roe when that issue was teed up in 1992, on the grounds of “institutional integrity” and respect for precedent. Kennedy — Reagan’s appointee after the Bork nomination failed — was also the architect of the Court’s gay rights jurisprudence, which culminated in his opinion in 2015, joined by the Court’s four liberals, to recognize a constitutional right to gay marriage. More recent conservative appointees seemed to continue this trend. Gorsuch wrote Bostock and Roberts joined it. A few weeks after Bostock, Roberts shocked conservatives when he joined the Court’s four liberal justices to invalidate a Louisiana abortion restriction. Roberts also voted with the liberal wing in the summer of 2020 to deny churches exemptions from state restrictions on worship during the pandemic. 

    Religious conservatives are embittered that, despite the other successes in the conservative legal movement, and despite Republicans appointing over 79% of the Justices since Warren retired, they cannot find five Justices to embrace their agenda. Hawley attributed the failure to originalism and textualism, which, he claimed, produce results that are “the opposite of what we thought we were fighting for.” (In 2014, one of the founders of the Federalist Society, Steven Calabresi, argued that the original meaning of the Fourteenth Amendment guarantees a right to same-sex marriage.) Others, such as Adrian Vermeule of Harvard Law School, argue that ostensibly conservative Justices are “educated urban professionals” whose commitments to liberalism dominate their conservative sentiments. Another argument is that the elite press, controlled by progressives, draws conservative Justices leftward through manipulated news coverage. Ed Whelan, a former Scalia law clerk and the president of the Ethics and Public Policy Center, speculates that the judicial candidates who have the best chances of being nominated and confirmed — ones good at “charming senators, trotting out a list of liberal friends and admirers, and neutralizing a leftist media” — are the ones least likely to overrule Roe.

    Ruth Marcus’ book on the Brett Kavanaugh confirmation hearings, Supreme Ambition, contains a different explanation that has infuriated religious conservatives, and that was at the base of Hawley’s critique of the bad bargain they made with the Republican Party. At the first White House meeting on who should replace Scalia after he died — a deliberation that ended in the selection of Gorsuch, who wrote Bostock — White House Chief of Staff Reince Priebus noted that major Republican donors cared little about abortion and same-sex marriage but a lot about chopping down the regulatory state. White House Counsel McGahn, in Marcus’ paraphrase, added that conservatives’ “emphasis on social conservatism and its associated hot-button issues ended with Scalia,” and that now judge-selection is “all about regulatory relief.” McGahn stated that on that criterion, Scalia himself “wouldn’t make the cut.”

    Episodes such as these — which confirm religious conservatives’ suspicions about the priorities of the Republican elite — have led to a growing split within the conservative legal movement. One intellectual leader on the social conservative side is Vermeule, who argues that “originalism has now outlived its utility, and has become an obstacle to the development of a robust, substantively conservative approach to constitutional law and interpretation.” He believes that reversing the progressive moral agenda in the Court cannot be achieved by faux-value-neutral methodologies, but rather requires an overtly “moral reading” of the Constitution and laws to advance a conservative social vision that he calls “Common Good constitutionalism.” Vermeule also points out that originalism is, ironically, untrue to the Founding since it ignores the classical legal tradition (including natural law) that the Founders embraced in creating the Constitution and understanding its terms. Many of my most conservative students and advisees, at law schools around the country, are increasingly disillusioned with originalism and are energized by Vermeule’s critique of it, and his approach to constitutional interpretation. And yet originalism remains dominant.

    This brings us, finally, to the confirmation of Amy Coney Barrett to replace Ruth Bader Ginsburg. Senate Republicans pushed Barrett through on a short fuse in an election year just four years after they delayed Barack Obama’s election-year selection of Merrick Garland to replace Scalia, and then confirmed Gorsuch after Trump won. These hardball tactics to gain control of the Court enraged Democrats, but they were perfectly legitimate from a constitutional perspective and not terribly surprising. Since the stakes have grown so large, the judicial confirmation process has suffered a three-decade downward spiral of diminishing restraint by both sides: Democrats’ unprecedented attacks on Bork, which killed his nomination, followed by their unprecedented filibuster of many of George W. Bush’s appellate court nominees and their elimination of the filibuster for Barack Obama’s appellate court nominees — actions that Republicans reciprocated by eliminating the filibuster for Supreme Court nominees beginning with Gorsuch, before their maneuvers to put Kavanaugh and Barrett on the Court. Norms have been rendered ineffective in this context because the exercise of hard constitutional power promises huge short-term victories.

    It is unclear how Barrett will impact the Court. She is a brilliant jurist who clerked for Justice Scalia and she acknowledges that Scalia’s “judicial philosophy is mine, too.” Social conservatives are hopeful that regardless of judicial philosophy, Barrett is one of their own and will vote their way. They have had this hope before, of course, and have been disappointed. But Barrett’s elevation gives the conservative legal movement a 6-3 majority on the Court for the first time, which means that in every case it can absorb a defection and still win.

    This historic conservative dominance on the Supreme Court has led many progressives to propose dramatic reforms to regain control of the judiciary, including stripping the Court of jurisdiction over cases that might lead to conservative rulings, or “packing” the Court with Justices to give liberals a majority. Conservative charges that these lawful tactics would violate norms ring hollow in light of the tit-for-tat pattern of events related to judicial politics since the 1980s. But for the foreseeable future, conservatives need not worry. Joe Biden has held his cards closely on the judicial makeover project. And the project is dead on arrival in the Senate in light of the Republicans’ strong performance in the recent Senatorial elections, and of the opposition of Senator Joe Manchin of West Virginia, a moderate Democrat, to court-packing and to the elimination of the Senate’s 60-vote threshold to break a filibuster. For at least two years, and almost certainly longer, Democrats lack the votes to diminish conservative judicial power through structural reform.

    Still, it would be premature for social conservatives to celebrate revolutionary judicial victories on the issues that they care about most. The recognition of gay and transgender rights is practically complete and—unlike abortion rights—is not really legally contested. The most that social conservatives can hope for is that the Court will recognize religious accommodations to the enforcement of these rights. Affirmative action may be on the chopping block, but the practice is deeply entrenched socially, and colleges and other recipients of public funding have developed imaginative ways to use facially neutral identity proxies to achieve preferred outcomes. And Roe will be much harder to kill than many conservatives believe. Roberts has noticeably shied away from overruling the nearly five-decade-old decision. And whatever her first-order views on abortion rights may be, Barrett has staked out what the Princeton political scientist Keith Whittington calls a “moderate” position on overruling decisions and “has urged giving precedents more weight than some originalists would prefer.” The likely course on Roe is a narrowing of the abortion right but not an elimination of it.

    Whatever happens, the Court is destined to become a more politicized and controversial institution. When all is said and done, the Court has only itself to blame. Beginning in the 1960s it reached far beyond its proper jurisdiction to grab enormous control over public policy away from democratic institutions, which sparked a conservative counterrevolution in the 1980s that has now won power and on many issues is doing the same thing in the other direction. It is a sign of advanced constitutional decay that so many important decisions in our democracy are made by five or six unelected Justices, and that confirmation battles have become the most consequential political episodes in the nation after presidential elections.

    The Trouble with China

    In the summer of 2020, otherwise a time of maximum disunity in the United States amid intersectional uprisings, rioting, and widespread institutional deliquescence, a rock-like national consensus emerged from the political waves: Americans from Nancy Pelosi and Joe Biden to Donald Trump, who vehemently disagreed on everything else, fully agreed that it was urgent to confront the People’s Republic of China, technologically as well as politically, within the United States, in Europe, and strategically across Asia and beyond. Within that consensus there were only stylistic differences, from Pelosi’s quiet assertion of the incompatibility of the regime with human rights anywhere on the planet to Trump’s truculent trade demands.

    The break with the past is very sharp: from Nixon in 1972 to quite late in Obama’s presidency, the United States did much that accelerated China’s rise to wealth and power from the miserable poverty I saw everywhere in that country in 1976, while doing very little to oppose China, aside from resisting its claim to rule Taiwan. By August 2020, by contrast, the Administration and the Congress were competing in finding new ways of limiting Chinese power, from human rights legislation specifically related to Tibet, Xinjiang, and Hong Kong, to the compulsory sale of a China-based social media platform excessively popular with the young and exceptionally intrusive in its tracking.

    With the Chinese navy engaged in threatening exercises off the coast of Taiwan even as the United States reiterated its promise of defending the island, all is set for rancor to explode in an armed clash. It is therefore urgent to try to understand what has happened, and why. 

    But to proceed one must first toss out any American-centered explanation of what has happened to US-China relations — of which there are many, from America’s hegemonic jealousy complete with ancient parallels (alas, no Thucydides Trust protects the brand), to American-white racial jealousy at the rise of the Han, to an American switch to geo-economics (the logic of conflict in the grammar of commerce) in response to the loss of plain economic primacy. The usual suspects blame the arms merchants and Pentagonal lobbies. The problem with all these accounts is that they monocausally attribute the new cold war, if that is what it is, to us. The reason why all American-centered explanations must be wrong as a matter of elementary logic is that relations between China and every other country remotely in its league (except Russia) have undergone exactly the same inversion, from amity to weary suspicion to increasingly vigorous defensive reactions, and it mostly happened on the same timetable or near enough. What we should be studying is not American behavior, but Chinese behavior.

    I will give four cases.

    I

    I was once engulfed in a Chinese wedding party in a vast Melbourne hotel whose inebriated celebrants noisily spilled out onto the main casino floor, handing out little boxes of assorted delicacies such as chicken feet to all and sundry, along with cute little bottles filled with wolf-head kaoliang far more alcoholic than vodka. Suburban housewives turned from their slot machines to grimace comically at the chicken feet and laughingly try the kaoliang, and everyone I saw at the hotel was just as indulgent with the invasion of tipsy Chinese that blocked the waiters and interrupted conversations at every table. My local friend, unbidden, explained the bonhomie: “They flew 5000 miles to hold their party in Australia and the least we can do is to be nice about it.”

    In those days there was a lot of good feeling between Australia and China, as Australian exports to China rose every year to reach 30% of Australia’s total — and that 30% was also some 90% of the growth in total Australian exports. Not only were wedding parties warmly welcomed but also Chinese purchases of Australian firms, some in high-tech, as well as of Australian housing and land mostly unwanted by other foreign investors. Chinese tourists were also uniquely valuable, and not only because they accounted for much of the total growth in tourism but also because most other tourists were headed for the Great Barrier Reef, which was already under excessive pressure from tourist vessels, while the Chinese came for harmless sightseeing, lucrative shopping, and gambling. Crucially, the Australian welcome extended to the Hanban, China’s premier soft propaganda agency, which operates the “Confucius Institutes” of Chinese language and culture. It opened branches in most Australian universities, operated by Chinese personnel who not only provided Chinese language teaching but also helped university administrators to handle the ever-increasing inflow of students from China. In the meantime Australian travel to China was facilitated by the opening of consular offices in six Australian cities (the United States has four), which also provided services for the increasing number of Chinese immigrants — numerous enough to sustain their own newspaper.

    In Australia’s ascending curve of advancement as well as prosperity, China and the Chinese played an ever-increasing role, and there was substance as well as symbolism in the elevation of Kevin Rudd to the premiership in December 2007, the first and still only head of a Western government who spoke Mandarin easily and well. It was in Mandarin that he used to explain to Chinese interviewers that the “slight” political differences between the two countries — he quantified the differences at 15% — should not impede ever-broadening cooperation.

    Everything is different now. If Chinese gamblers still travel they will still be welcomed, and China can still import all the Australian raw materials it wants, but government scrutiny of Chinese investments is now much stricter than in the United States. All the Confucius Institute programs in New South Wales, Australia’s most populous state, have been closed outright, while others persist under very tight scrutiny, in the wake of ample evidence that they were operated as a coordinated propaganda arm of Chinese diplomacy, and that their staff compelled Chinese students to function as their operatives, individually to spy on fellow students and together to harass any speaker critical of China or supportive of Taiwan or Tibet. As for the leaders of the Chinese immigrant community, they too are under close observation after some were exposed as Beijing’s agents in lobbying Australian politicians.

    Most dramatic is the changed strategic attitude to China, from Kevin Rudd’s confidence in its leaders’ willingness “to make a strong contribution to strengthening the regional security environment and the global rules-based order” to the rising sense of insecurity that in 2011 persuaded the Australian government to invite American combat troops to train in Australia, not once but on a prolonged basis with increasingly permanent base facilities. Off-duty Marines are now to be seen every day in tropical Darwin, that most movie-typical and of course wildly untypical of Australian cities, with leather-hatted Crocodile Dundee look-alikes in their high-riding jeeps, and seriously dangerous crocodiles lurking just outside town, which do worry the Marines practicing amphibious landings among them.

    That is the most visible evidence that Australia now views China as an outright threat, but the least visible is very likely the most important: Australia’s “strategic dialogue” with Vietnam, China’s favorite bullying target (Vietnamese fishing boats are regularly sunk by Chinese coast guard vessels) but also the most resilient of all its neighbors. Yes, it is true: the Vietnamese are highly confident that they can again defeat the Chinese, just as the French and the Americans were defeated. The most recent of the Chinese invasions — they started some two thousand years ago — occurred in 1979, a full-scale affair intended to force the Vietnamese to withdraw from Cambodia before they could defeat China’s ally, the mass-murdering Khmer Rouge. The Chinese failed: they withdrew after taking heavy casualties and the Vietnamese finished off the Khmer Rouge. 

    So what is the Australia-Vietnam dialogue about? Historical reminiscences of Australia’s part in the Vietnam war, with 61,000 troops and advisors serving over ten years? Hardly. The clue is that in this case “strategic” is a euphemism for intelligence. Australia is one of the “Five Eyes” countries, along with Canada, the United Kingdom, New Zealand, and the United States, which share deciphered signals and electronic intelligence — the only part of Winston Churchill’s dream of a permanent, all-encompassing Anglo-Saxon alliance that was fully realized. Vietnam’s long and porous border with China’s Yunnan province and its daily observation of the Chinese navy (including its Hainan submarine base) mean that Vietnam needs no help with ground intelligence or marine surveillance, but insofar as its Five Eyes partners allow it, Australia can supply electronic intelligence: crucial information about Chinese radars, communications, and whatever signals can be deciphered.

    That is how things now are between the Australians and the Chinese: instead of the optimism exemplified by Kevin Rudd, there are weary preparations to protect Australia by strengthening China’s Asian antagonists. In other words, without either Obama or Trump, and regardless of the posture of the United States toward China, Australian relations with China devolved on the same descending path, from hopeful and even enthusiastic amity to intense suspicion and active security measures.

    II

    In Japan, the reversal was far more abrupt. 

    On March 20, 2009, Japan’s Minister of Defense Hamada Yasukazu was in Beijing at the invitation of his Chinese counterpart Liang Guanglie. Footage shows the relaxed body language of both — of course everything had been agreed beforehand, and there was so much of it, all good, starting with a return visit to Tokyo by Liang within the year. The information-sharing arrangements they set up were remarkably extensive and much beyond anything routine: the two sides agreed to coordinate anti-piracy operations off the Somali coast, while an overall “maritime contact mechanism” was to function between the defense ministries of China and Japan. There were to be mutual naval visits in the respective ports with shore activities to broaden their effect on the public: warships would advance China-Japan amity instead of being symbols of hostility. The uniformed military were to broaden and deepen the dialogue of the two ministers with an annual defense exchange plan that might progress to regular inter-service staff officer dialogues involving all services and the Joint Staff of Japan. In addition, the military area commands of the Chinese People’s Liberation Army (PLA) and armies of the Japanese Ground Self-Defense Force were to maintain a dialogue, to add another dimension. And to top it all off, China’s National Defense University and its Academy of Military Science would conduct joint programs with Japan’s National Institute for Defense Studies, while PLA University of Science and Technology and Dalian Naval Academy would cooperate with Japan’s National Defense Academy.

    I was startled to read all this in the next day’s Japan Times in the metro. What had happened to the US-Japan alliance? Why were the Japanese suddenly going to share all that military information with the Chinese? I was in Japan for other reasons, but a friend arranged a meeting with the leading opposition politician Ichiro Ozawa of the Democratic Party, who seemed very pleased at the government’s convergence with China. “I do not like Americans,” he said. “They are too simple — everything is black and white.” The time had come for Japan to disengage from the military alliance, to become more neutral.

    At the American embassy the foreign-service officer I talked with said that Japan’s Liberal Democratic Party, long the country’s “ruling party” and until recently the firm upholder of its alliance with the United States, was declining. This political change had thrown everything into turmoil, he explained, and if Ozawa’s Democratic Party came to power anything was possible, including the expulsion of the United States Marines from their favorite base in Okinawa, the closure of the small mainland facilities, and even perhaps the departure of the US Navy from Yokosuka, the home port of the Seventh Fleet. Obama, in their view, was turning to the Chinese to help out in the American financial crisis, and he did not seem much interested in military alliances anyway.

    And so it seemed that a neutral Japan might indeed emerge, especially if the American market for Japanese exports declined irremediably because of the financial crisis, forcing the country to increase its reliance on China. In September 2009, as the American diplomat had feared, Ozawa’s Democratic Party did come to power, and while Ozawa did not become Prime Minister he was the real power behind the government headed by Yukio Hatoyama, who would be followed by two more Democratic politicians who lasted about a year each, including the unfortunate Naoto Kan, who resigned on September 2, 2011 after a disastrous premiership that spanned the catastrophic March earthquake and tsunami. Ozawa’s serious interest in revising Japan’s national strategy was not shared by Hatoyama, but still it seemed that the United States was on its way to losing perhaps its most important ally in the world, with China gaining at least an economic partner that might become a strategic auxiliary over time.

    This prospect was welcomed by significant Japanese figures, including former Prime Minister Nakasone, once Reagan’s happy counterpart. He argued that Japan did well in the past by sending poor presents to Chinese emperors who chose to view them as humble tribute, and who would send much richer presents in return to their imaginary Japanese vassals. With this historical analogy Nakasone was implying that Japan could switch from its American horse to a Chinese horse to keep riding for free. For other Japanese, the shift was simply a matter of business: the United States was plunged in a deep financial crisis that was cutting American demand for Japanese cars and everything else, while Chinese demand for Japanese goods kept increasing. Even in the Gaimusho, Japan’s Foreign Ministry, the official keeper of the alliance, there was a growing number of converts to neutrality, with some even leaning towards China, so much so that those who resisted the drift became known as “the Okazaki boys” (though some were women), a reference to the enlightened semi-hard-liner Hisahiko Okazaki, a charismatic rarity among Japanese diplomats who believed in a more vigorous foreign policy.

    That was what the future looked like, but what ensued was the very opposite of the China-Japan convergence that would end the American alliance. And the reversal happened entirely because of a unilateral Chinese decision to revive long dormant maritime claims all around China, which in many cases extended very far from China proper. There can only be theories about why these provocations started, because the Communist Party decides everything in the strictest secrecy. (This opacity has become even thicker now that Xi Jinping decides everything and Politburo meetings are reduced to a formality). But on the question of how it happened, there is no uncertainty: very quickly and very sharply. 

    Coincidentally, I was in Beijing in September 2009 as the guest of a strategic forum run by China’s most elevated military officers at the so-called Academy of Military Science, which is nothing like West Point and more like a military-corporate headquarters (with hotel-sized guest quarters of extraordinary luxury). My hosts were friends of long standing in some cases, including the charming and notoriously hawkish retired Admiral Luo Yuan, but I was also looking forward to the participation of the elegant Vice Foreign Minister Fu Ying. No sooner did she arrive than I noticed that she had changed: her tone had become peremptory and her gestures angular, in the brusque manner of a drill sergeant. America down! China up! It was only later that an explanation occurred to me: since the Obama Administration had urgently asked for China’s help earlier in the year — it sought, and obtained, massive public expenditures to help relaunch global demand in the usual Keynesian manner (I saw the road-building myself in Yunnan) — the significance of the financial crisis was grossly over-estimated in Beijing. Some must have viewed it as the start of the long-forecast “general crisis of capitalism” that Soviet leaders had awaited with growing impatience since 1945. 

    When I was startled by Fu Ying’s abrupt change (she is still around, in an even higher if largely honorific position), I did not know that some months earlier, on May 7, 2009, China had sent a map to the United Nations that marked its enormous maritime claims — most of the South China Sea’s three and a half million square kilometers — with nine dashes in lieu of an actual perimeter, thereby allowing room to extend the claim. But this completely outlandish map did not make the news, and neither did the joint protest by the Philippines, Indonesia, and Malaysia (Vietnam did not bother) against claims to waters within sight of their coasts and very far away from China. That is how the leaders of the Communist Party of China abandoned Deng Xiaoping’s famous injunction to “keep a low profile and bide your time” and suddenly challenged the entire world order, because they mistook the tumble in American finances for the downfall of the United States. As of this writing, the crisis thus opened continues, only it is worse.

    But back in 2009 nobody paid much attention when China advanced a maritime claim against Japan as well, in part because everybody ritualistically expected more amity in Beijing, and in part because China had revived an especially feeble claim to an exceptionally trivial territory: the uninhabited Senkaku islets and rocks, whose combined dry surfaces amount to some four square miles. Of course even the smallest islands can bring vast exclusive economic zones with them, but in this case any fishing value was irrelevant because Chinese fishermen were already allowed free access, while vague rumors of oil and gas under the seabed were ridiculed by the industry. The Japanese government did not respond to the sudden turn in Chinese policy. With the new Democratic Party in power for the first time, there was no eagerness to take on a new problem, which moreover seemed at most a minor irritant. As compared to Beijing’s claims to the three and a half million square kilometers of the South China Sea, outlined in hand-drawn dashes on a map from 1947 with about as much legal value as a child’s drawing, the Senkakus drawn to scale would amount to a dot.

    Then, on September 7, 2010, the Chinese trawler Minjinyu 5179, one of many fishing vessels in the Senkakus’ waters, collided with not one but two Japanese Coast Guard patrol vessels. The Japanese boarded the trawler and immediately discovered the cause of the incident: the skipper was drunk. He was detained along with his boat. That same day Japan’s ambassador in Beijing was summoned to the Foreign Ministry to be told that Japan should stop operating in the Senkaku archipelago. Two days later, the foreign minister announced that China was asserting its jurisdiction over the area, and Japan’s ambassador was again summoned to the Foreign Ministry to be told that Japan had to release the trawler and crew immediately. In Japan a routine investigation was underway, for which the skipper of the Chinese fishing vessel was remanded on September 10. Two days later, Japan’s ambassador in Beijing was again summoned, but this time by Dai Bingguo of the Council of State, a figure far above the foreign minister, to urge the Japanese government to be wise, and release the fishermen and the trawler immediately. Concurrently, a slew of China-Japan meetings designed to advance the new era of cooperation were delayed, even though Japan announced that the trawler and its crew members were about to be released, with only the skipper detained for the judicial process already underway. The local court, following normal procedure, extended his detention till September 29.

    That is when the fishing incident stopped being an incident about fishing. A nation-wide campaign of incitement led by the Chinese Foreign Ministry itself via its daily press briefings started within hours of the incident, so that already on the next day there was a major protest outside the Japanese embassy with flag-waving and loudspeaker demands for Japan’s withdrawal from the islands. More protests followed on a much larger scale, with many demonstrators wearing brand-new “Oppose Japan” shirts, and this time the mob became so menacing that police arrived in great force to block the entire area. Attacks on individual Japanese were reported in several cities, and after an attack on a Japanese school in Tianjin all Japanese schools in China were closed, as mobs formed in front of any Japanese office — while Foreign Ministry spokesmen continued to add to the incitement as incidents spread through China’s cities. For good measure, four Japanese corporate employees were arrested and accused of filming military targets, while a rumor was circulated that China would strangle Japan’s high-tech manufacturing by stopping the supply of rare earths — cerium, dysprosium, erbium, europium, gadolinium, holmium, lanthanum, lutetium, neodymium, praseodymium, promethium, samarium, terbium, thulium, ytterbium, and yttrium — both obscure and indispensable for high-tech, and at the time produced only in China.

    In Japan, there was no clear response by its prime minister, its foreign minister, or indeed anyone in government, except for the duty official at the very small branch office of the Okinawa prosecutor on the remote island of Ishigaki. Finally, on September 22, Chinese Prime Minister Wen Jiabao issued a formal threat: “If Japan persists willfully and arbitrarily, China will take further actions…with dire consequences.”

    The threat worked: instead of being tried for the damage he inflicted, the captain of the Chinese fishing vessel was released two days later. But the Foreign Ministry spokesman in Beijing who had led the incitement campaign against Japan (even some Toyota drivers were harassed) did not relent, and gleefully demanded in triumphalist tones an apology from Japan and compensation for holding the skipper, ruling out any compensation for the damage that he had inflicted on the Japanese patrol vessels.

    In Japan, most people concluded that the country had been humiliated by a weak government, but the resulting crisis of confidence was still unresolved on March 11, 2011, when a powerful earthquake and a uniquely severe tsunami inflicted enormous damage in the northeast of the country, released radiation from a damaged reactor at Fukushima nearer to Tokyo, and traumatized much of the population. Prime Minister Naoto Kan and his government were conspicuously outmatched by events. A little over a month later, the Prime Minister of Australia, Julia Gillard, came to Japan on a long-scheduled visit that she refused to cancel, at a time when nobody else would travel to Japan because of radiation dangers from the Fukushima reactor, wildly exaggerated as usual. In a joint press conference with Japan’s prime minister, she replied to a question about China: “We do have a longstanding defense cooperation between our two countries and a trilateral defense cooperation between our two countries and the United States. We’ve taken the opportunity of our discussions today to talk about furthering the bilateral defense cooperation between us and of course we will be working on a trilateral basis as the United States works its way through its Global Force Posture Review.” That technicality was a nice touch, for it underlined what China meant for Julia Gillard in the wake of the Senkaku boat incident, when the exuberant amity of the defense minister’s encounter was suddenly replaced by outright hostility. China had made itself into a defense-planning problem, a potential threat to be confronted, and not just an economic partner, no matter how much coal, gas, iron ore, and beef it was importing from Australia. In retrospect, Gillard’s visit to Tokyo in the aftermath of Fukushima can be recognized as the foundational act of the “maritime alliance” that now connects and cross-connects Australia, Japan, Vietnam, and India with the United States, an alliance in which Singapore is a tacit member, and of which Taiwan is at least a beneficiary.

    All was set for change in Japan, but it was not until the following year that the electoral calendar allowed the Japanese public to express its views of the Democratic Party, and of China in effect. In December 2012, Shinzo Abe’s Liberal Democratic Party won 294 seats out of 480 in the Lower House of parliament, with additional seats controlled by allied parties, much more than enough to become the “ruling party” once again, with the slogan “take back Japan” and clear promises of a strong line on the territorial dispute with China, as well as of monetary easing and even higher public spending. The grandson of one of Japan’s most important prime ministers and the son of a top political leader who died before he could follow in his father’s footsteps, Abe did not double defense spending or issue strident declarations, but he nevertheless changed Japan’s entire stance on the world scene. He enacted a sharp attitudinal change in the government, and also a reorganization, both very small in scale and very important. Within the Foreign Ministry, the “Okazaki boys” became decisively stronger without any need to purge the shrinking number of “panda-huggers,” and within the armed forces there was an upsurge of morale as the protection of Japan’s national territory would no longer be impeded by political timidity.

    But it was a seemingly minor reorganization completed within a year that really set the new course: some diplomats, some military officers, and some intelligence officials were re-assigned to a new National Security Secretariat. That obscure bit of bureaucratic engineering was actually a large break from the past. Until then, Japanese diplomats had coordinated important matters with their American counterparts, and the Japanese military of the different services had coordinated with their uniformed American counterparts, but there was no need for inter-Japanese coordination because it was the Americans who decided strategic matters and managed any crisis. Yet as soon as the new National Security Secretariat started operating, it became clear that Japan had become its own crisis manager and could forge its own foreign policy as an active ally of the United States instead of just a passive follower. 

    The Chinese had issued a demand for negotiations over the Senkakus but did not even obtain a denial — it was simply ignored, because the islands were Japanese and there was nothing to discuss. And there matters stand to this day, as Japan also completed its trajectory from optimistic amity towards China to weary suspicion, accompanied by both self-strengthening and a reaching out for allies. That meant the United States, of course, and Australia, whose government had taken the initiative, but it also meant India, with which Shinzo Abe had an unexpectedly strong connection. It had started as an inheritance from his grandfather Kishi’s visit to India in 1957, which achieved almost nothing but was nonetheless important because no Japanese political leader had shown his face outside Japan since 1945, and none would have been welcomed in a long list of countries, especially those that had suffered from Japan’s wartime conduct. India, too, was invaded by Japan, but it was then British India, and Prime Minister Jawaharlal Nehru told Kishi that Japan had been the great inspiration for himself and his colleagues in the struggle for independence, because its self-made modernization and its defeat of Russia in 1905 gave them confidence that they, too, could be independent of imperialist tutelage. So in one country at least Japan was held in high esteem; and from that bit of family history Abe developed a serious interest in India, which grew into a personal connection with his natural political counterpart, Narendra Modi, the leader of the center-right Bharatiya Janata Party and Chief Minister of Gujarat. When Modi became Prime Minister of India in 2014, all was set for a rapid intensification of India-Japan cooperation, in everything from intelligence sharing to road-building. Multiple encounters between Abe and Modi, marked by genuine cordiality, were followed by action on the ground, none of it explicitly aimed at China but all of it driven by China’s new posture, and all of it bound to increase India’s ability to withstand China’s power and to cooperate strategically with Japan — for example, in support of Vietnam. The maritime alliance launched by Australia and taken up with enthusiasm by Japan now embraced India as well. With Abe now gone owing to poor health, his hand-picked successor Yoshihide Suga is set to continue the same policies, including prioritizing India in Japanese economic aid.

    III

    It was not supposed to be that way. 

    For India’s post-independence elite, exemplified by the elegant Jawaharlal Nehru, China was a fellow Asian country that had also emerged from foreign domination; there was little contact with it, but even less conflict. Then very soon, in the winter of 1950, China became the major Asian protagonist in the Korean War, earning much respect as it stood up to the Americans, undeterred by their immense power.

    India-China relations were codified through the negotiation of an unusually elaborate friendship treaty in 1954, in which much was made of “The Five Principles of Peaceful Coexistence,” which included “respect for each other’s territorial integrity and sovereignty,” and an agreement on trade and pilgrimages between the “Tibet Region of China” and India, which was signed on April 28, 1954. With that settled, Prime Minister Nehru visited Beijing in October 1954, with Mao at his most friendly: “The United States does not recognize our two countries as great powers,” he said. “Let us propose that they hand over their big-power status to us, all right?” In the Nehru entourage there was an overflow of enthusiasm, now remembered by the slogan Hindi Chini bhai bhai, “Indian, Chinese, brother, brother” — the motto of a new era of solidarity between the two most populous nations on earth. The potential for collaboration seemed immense, especially because India’s economy was supposed to be centrally planned like China’s, so joint projects would not have to depend on the whims of short-sighted businessmen out for quick profits.

    The era of good feelings lasted almost five years, even though the two governments did not actually collaborate on anything much, while some of India’s many communist parties were either “pro-Chinese” or Maoist, and also dabbled in terrorism. But it was over Tibet that India said bye-bye to bhai bhai. On March 30, 1959, with Tibet having exploded in a widespread uprising after Chinese troops entered the country in great numbers to impose Beijing’s rule in earnest, the Dalai Lama and his retinue fled through precarious high mountain tracks across India’s border into remote Tawang, the seat of a vast Lamaist monastery and the terminus of an interminable track down to the lowlands with their passable roads.

    Nehru and his government were not sentimental about the Dalai Lama, though they could not drive him back across the border, not with waves of refugees testifying to the brutal repression underway, and so they were forced to host him, provoking Chinese protests. Yet it was something much more fundamental that changed Nehru’s and India’s attitude to China: so long as Tibet retained its autonomy, it remained a buffer that kept Chinese troops a long way from India, so securely that there was no particular reason to patrol the very long and scarcely accessible Tibetan border segments from the Kashmir cease-fire lines to Nepal, in Sikkim, and then east of Bhutan to the Burmese border.

    When oblivion gave way to scouting, patrols, and ground surveys, the Indians made a startling discovery that started a border quarrel which has been dormant for long periods but is still unresolved, and periodically explodes in acts of violence: the Chinese had seized the northern edge of Kashmir’s Indian side, the vast 15,000-square-mile Aksai Chin plateau, and built a Xinjiang-Tibet highway across it. They had also intruded in the high-altitude vastness of Ladakh nearby, and also at the opposite end of the Tibet-India frontier in what was then the North East Frontier Agency and is now the state of Arunachal — except that these were not intrusions for the Chinese, who claimed not merely those extremities as their own but almost all of Arunachal till the edges of the Brahmaputra river valley. 

    Having belatedly decided to monitor the northernmost segments of their British inheritance, the Indians started running into Chinese patrols that disputed their maps, with numerous incidents and some shooting, until — in October 1962 — the Chinese launched major offensives with tens of thousands of soldiers both in the west against Ladakh and in the east against Arunachal, advancing fast and deep after surprising and overrunning the thinly spread defenders, with thousands of Indian troops killed and captured. Having demonstrated their vast military superiority, the Chinese withdrew from their new conquests within the month, but not from their prior gains in the Aksai Chin. Nehru was utterly humiliated, and his closest associate, the “progressive” firebrand V. K. Krishna Menon, who had served as a peculiarly anti-military defense minister since 1957 (he favored officers who had eschewed combat in World War Two), was driven from office amid public opprobrium that lasted until his death. Mao Zedong would later tell people that his aim in attacking India was not to start a wider war but to end the border incidents and to persuade the Indians to negotiate seriously over what China needed, which oddly enough is still the Chinese position today, because in Beijing they truly, honestly, cannot understand why “poor and dirty” India wishes to defy China.

    Notwithstanding all this, India’s ruling Congress Party retained an anti-Western core and wanted good relations with China no matter what, all the more so when China’s great economic advance seemed to offer opportunities for India as well. With the humiliation of 1962 receding ever more into the past, and with some Congress leaders and officials openly indifferent about exact borders in the trackless and useless high Himalayas, relations gradually improved to the point that by 2011 the new slogan “Chindustan” emerged, amid mutual congratulations, to replace the old bhai bhai. There were again heady promises of vast cooperative projects, and also hard-headed calculations that good relations with India would persuade the Chinese to stop giving all their support, and weapons, to Pakistan. Very serious people in India were respectfully heard as they argued that India had to wean itself from American temptations and finally throw its lot in with China, first of all by inviting its vast “Belt and Road” infrastructure projects instead of spurning them.

    But that is not how things turned out. Just like Japan before it, India was forced into an anti-Chinese stance by China’s irrepressible territorial expansionism, which Xi Jinping only intensified when he came to power in November 2012, posturing in uniform whenever possible. Instead of the renewed hopes of amity of 2011, by the time the BJP won the elections and Modi became Prime Minister in 2014 India was already committed to a pan-democratic alignment with the United States, Japan, and their more operational allies, including Israel in the West and Vietnam in the East (with both of which Indian military relations were intense and important). And this is how matters stand in the wake of the fighting last year in the Galwan Valley and on the edge of Aksai Chin, as India completed its trajectory from hopeful friendship towards China to weary suspicion, ready for outright hostility. 

    IV

    In spite of the clear meaning of all that has happened elsewhere, even now many in Washington provincially take it for granted that American relations with China have been shaped, and will be shaped, by Americans, by the plans of the Biden presidency, by the doings and undoings of Donald Trump, by the Obama Administration before him (whose NSC advisor Susan Rice once declared that she could “shape” China), and indeed by their predecessors going all the way back to Nixon.

    It is now a commonplace to assert that American policy-makers have been culpably naïve in believing that China’s increasing wealth, and the rising standard of living of the vast majority of the population, would necessarily compel the regime to liberalize, eventually producing irresistible demands for political self-expression and leading to decentralization and even democracy. At some level the complaint is valid, but it overlooks the vast reality that the regime did liberalize, and to an extent unimaginable when I first visited in 1976, when Mao was still alive, and then again in the aftermath of the violent repression in Tiananmen Square in 1989, until the wholly unexpected reversals that followed the long-expected rise of Xi Jinping in November 2012.

    In 1976, Chinese men and women in the countryside could freely wear whatever rags they possessed, but in the cities almost everyone I saw was in blue Mao suits, men and women alike, with no trace of make-up allowed. Ordinary people scrambled along looking neither right nor left, as if on their way to urgent missions. (The most urgent of all was to secure some cabbages to dry on balconies for the winter, without which teeth would fall out on a diet of rice, millet, and sorghum.) Day after day I would wander around Beijing without seeing one person smiling, not even parents with children, and this was true of the other cities I visited, including almost tropical Chengdu and breezy Shanghai, not to speak of Lhasa, where the Tibetans freezing in cotton uniforms walked silently among the Chinese troops who were watching over them. I had been warned that I would have trouble breathing in Lhasa because of its altitude, but the real problem was in Beijing: in Maoist economics, “the night soil” of human excrement was a precious resource collected in buckets all over Beijing and then hand-carted across it on the way to the surrounding farmed fields, so the entire city smelled like a toilet. 

    By traveling in China I was already exercising a freedom unimaginable for the officials I was meeting: no travel whatever was allowed for them or indeed anyone else, unless under orders, and that was a real hardship for educated couples because the Communist Party preferred to assign husbands and wives to different cities. It also meant that there was no such thing as tourism, no recreational trips to the Great Wall not far from Beijing or the Ming tombs even closer. When I visited those places, they were deserted. As for entertainment, there were occasional Beijing Opera performances and there were color television broadcasts, but in the former the traditional hyper-colorful historical melodramas had been replaced by the grim class-struggle operas imposed by Jiang Qing, Mao’s ex-actress wife and one of the Gang of Four then ruling China for the moribund Great Leader. As for television, it offered political tirades against Party enemies (mostly the “capitalist roader” Deng Xiaoping, then under house arrest), more Jiang Qing operas, and brief looks at world news unlike any other version thereof, including Soviet television, which seemed almost normal by comparison. In fact there was no television for most Chinese: color sets were very few and even black and white sets were unimaginable luxuries for single families — they were communal if they were present at all. 

    In 1989, by contrast, on the eve of the Tiananmen Square demonstrations and massacre, China was a festival of freedom as compared to 1976. In a mere thirteen years, along with the opening of all parts of China to travel and of large parts of the economy to foreign business (even the American defense industry was engaged in joint ventures), there had been liberalization across the board. People could dress as they liked, couples went around holding hands, and there was the beginning of foreign travel, with business types already on jets to London, New York, and Tokyo and Cantonese more modestly riding trains to Hong Kong. Academics invited by foreign universities could and did visit them, and then return and talk of their experiences of academic and political freedom, in some cases lecturing to that effect in universities across China. The opening to China by Nixon and Kissinger in 1972 had been a strictly geopolitical move, and was much needed at a time when the Soviet Union was rising and the United States was sinking into the Vietnam war, but if any of the protagonists had entertained fantastic hopes that liberalization would also follow, they would have been vindicated by 1989.

    What happened next, however, was that Beijing’s students started demonstrating for democracy and freedom — even as many Sinologists in Western universities were insisting that there was no “Chinese” concept of freedom. To leave no doubts, the students famously built the ten-meter-tall white statue of the Goddess of Democracy and Freedom, whose features were unambiguously European, and placed it in Tiananmen to face the Mao portrait on the outer wall of the Forbidden City. It was destroyed on June 4, 1989 by the Army troops sent by Deng Xiaoping and his Politburo subordinates to drive off the students, with some thousands shot dead, so as to regain control of Tiananmen Square, of Beijing itself, and of China at large, because by then there were student uprisings in many places around the country. In the grim aftermath of the regime’s nation-wide intimidation, some things ended that were never resumed, notably American and European arms sales to China; but after the most prominent student leaders were caught or escaped to the West, most aspects of liberalization resumed, notably foreign travel for the ever-increasing number that could afford it — the substantive achievement of a freedom that had been absolutely denied to the citizens of the USSR.

    While there was widespread repression after the Tiananmen uprising, there were also limits to the crackdown that were previously unknown. Deng Xiaoping, who had himself been kept under house arrest, and whose son was crippled for life after he was thrown from a third-story window onto the street, was determined to have supreme power in the Party and to keep the Party under control, but he did not murder his rivals as Mao had done, and there were limits even to the punishment of dissidents. Ding Zilin, for example, the Beijing philosophy professor who started the “Mothers of Tiananmen” group together with her professor husband after their son was killed by troops on the square, eventually lost her job (as did her husband), was held under house arrest at times and imprisoned briefly after holding one more protest, and was forced to take a holiday away from Beijing during the 2008 Olympics, but she was not stripped of her pension or thrown out of her university-allocated apartment, and it was not until the ascendancy of Xi Jinping that she was prevented from speaking with foreign journalists. 

    More broadly, after the regression in 1989, progress in all directions resumed with increasing energy, starting with the economy of course but also culturally, with a rising interest in the Western classics, as attested by the appearance of successive translations of the Iliad and the Odyssey, both literal and poetical, by commercial publishers out for a profit, who evidently found enough buyers for such books. Hebrew was the other classical language of increasing interest, and this spilled over into Yiddish literature (I saw multiple translations of Sholem Aleichem’s tales in the Wangfujing bookstore!) to the extent that there are courses in Yiddish in more than one Chinese university, alongside many more in Hebrew for evangelicals and techies alike.

    By 2004 the liberalization of Chinese life had spilled over into foreign policy, with the enunciation by Zheng Bijian, the regime’s policy guru then and later, of the “Peaceful Rise” policy, according to which China would become ever more prosperous and more powerful but would nevertheless respect international norms, starting with international law itself. China would not threaten any foreign country. Taiwan, always a special case, was to “rejoin” the mainland peacefully, and on its own timetable. In retrospect, many view Zheng Bijian’s proclamation as deceptive, but I do not: having known Zheng for years, I am quite certain that he was sincere, and all the more so because when I confronted him in 2012 about the multiple intrusions and territorial claims that had disillusioned countries from Japan to Sweden, his answer to my complaint was “Shīkòng de mǎ,” or “runaway horses.” He was referring to the dangerous arrogance I had seen in Vice Foreign Minister Fu Ying’s body language in the wake of the financial crisis in 2009, misinterpreted as the final crisis of Western capitalism and a license to expand China’s power in all directions. Zheng Bijian’s remark was significant, because some backtracking soon became evident, with a lowering of the temperature on the Senkakus and other territorial claims and a general pulling back that was articulated in a 7,000-character (that is, very long) article by Dai Bingguo, the State Councilor responsible for foreign affairs, which squarely reaffirmed the “Peaceful Rise” doctrine.

    In that period there was also a genuine political liberalization, exemplified by the rule of Hu Jintao, who held all three top jobs as Chairman of the Central Military Commission, General Secretary of the Party, and President of China, but who nonetheless interpreted his role as that of a primus inter pares, not another Mao or even Deng Xiaoping, conferring important roles on his colleagues. Hu also allowed provincial bosses wide leeway to develop their provinces — which is how Bo Xilai rose to be the all-powerful boss of Chongqing and its thirty million people, beautifying it very successfully, and frog-marching its municipal employees into obligatory party-hymn singing, while his wife Gu Kailai accumulated wealth until her clumsy murder of a British helpmate. Other provincial bosses were less colorful, but they too went in different directions, and this decentralization, too, was a form of liberalization.

    Thus the belief that China would liberalize with increasing prosperity, and thereby become an increasingly acceptable participant in international society, was not entirely a foolish delusion. Even in the administration of justice there was considerable liberalization, as I discovered in backhanded fashion when driving on a long journey with a Chinese friend of a friend. He was a policeman, and soon started complaining about the judges in his city: “my men chase a thief, they finally catch him, and then a few months later he is walking around again, released for good behavior!” Western-style law degrees and lawyers having become important with the opening of China to foreign investors, the Party started appointing lawyers as judges, and they could not help but introduce legal principles, such as requiring better evidence than the confessions of badly bruised defendants. Fortunately, as my companion said, we still execute drug smugglers, and as for foreign spies (he looked directly at me) the Guoanbu still takes them away, that being the Ministry of State Security, the Chinese KGB.

    And then, in November 2012, Xi Jinping became the party boss in Beijing, a year later rising to become President of the Republic and Chairman of the Central Military Commission. It was as if a locomotive operator had simply put the train in reverse, in a process of de-liberalization that has yet to end. The outrages are everywhere: the loss of Hong Kong’s erstwhile freedoms, the mass imprisonment of Xinjiang’s Uyghurs and Kazakhs in dozens of massive re-education centers, and in foreign affairs a coarse and undiplomatic arrogance (“wolf warrior” diplomacy) and a revival of China’s territorial quarrels with Japan, Vietnam, and India. As for the United States, Xi Jinping’s visit to the United States and to Obama’s White House in September 2015 was supposed to resolve the most acute problems — the Chinese theft of American technology and the seizure and militarization of the Paracel islets and the Spratly coral reefs; but Obama discovered that his NSC advisor Susan Rice had been over-optimistic to the point of delusion, and reacted by ordering the first of a series of “freedom of navigation” patrols by American warships that continues still.

    Xi’s regression had started within the Party itself. The local potentates who had risen under Hu Jintao’s deliberately relaxed rule — he had sought locally appropriate policies — were all placed under investigation, with most found guilty of improper enrichment and stripped of everything they had, their families thrown out of Party housing and they themselves detained for long periods if not actually sent to prison or even executed. (The especially egregious Bo Xilai and his wife were given life terms.)

    Nor did Xi tolerate the influence of the previous generation of party bosses, as Hu Jintao had done. While Hu himself was not arrested, he was silenced, very visibly so when Xi brings him along for decorative purposes, which happens less and less. As for the non-Han nationalities of China, their very limited rights of self-expression were altogether withdrawn, because Xi is not content with the Party formula that was originally Stalin’s: “nationalist in form, communist in content.” He wants Tibetans, Uyghurs, Kazakhs, and all the others — most recently including Inner Mongolian Mongols — to stop using their native languages except in domestic settings, and therefore he sees no need for state education in those languages. They are to become Han-Chinese in form as well as Communists in content. Xi Jinping’s highest priority is to reaffirm the primacy of the Communist Party’s ideology, starting with Marx and Engels (in countless re-enactments and anime showing them in mid-Victorian suits with anatomically correct beards and mustaches) and ending with Mao, whose works are now obligatory reading once again. Mao’s very long speech “On Protracted War” has been revived as a manual for defeating the United States.

    Xi Jinping’s very particular devotion to Mao and his all-out attempt to revive Mao’s Party is the very last thing I would have expected from him. Since he was parked at the Central Party School before his elevation, Xi received a number of foreign visitors, including a former finance minister of Italy, Giulio Tremonti, once my co-author on a book, and he did converse at length with a faculty member whom I also know well. I heard much about Xi Jinping that was unfiltered. Nobody came right out and said it, but knowing the basic facts of Xi’s life it seemed self-evident that he would strive to add legal protections to the Party rules, and would also press for more humane policies, at least within the Party. Those facts are as follows: Xi Jinping’s father, Xi Zhongxun, was a very senior party official, who was abruptly purged by Mao and sent to work in a remote factory in 1963, when Xi was ten. Three years later the Red Guards unleashed by Mao’s Cultural Revolution closed his middle school, looted the family home, beat up his mother, and treated his half-sister Xi Heping so violently that she was driven to suicide. His father was recalled from exile to be paraded, shoved, and beaten as an enemy of the revolution, while Xi’s mother Qi Xin walked alongside him, herself beaten whenever her denunciations of Xi’s father were not loud enough. His seventeen-year-old sister Qiaoqiao was sent off to Inner Mongolia to work on a desolate farm where there was no food, and was saved from starvation by the daughter of her father’s friend, the Inner Mongolia party boss. 

    In 1968 Xi’s father was sent to prison, and the next year, when Xi was sixteen, he himself was sent to work and “learn from the peasants” in the remarkably primitive village of Liangjiahe, Wen’anyi district, Yanchuan county, in Shaanxi province, whose capital Xi’an, of terracotta soldiers fame, is now highly developed but still contains much abject poverty — though not the raw hunger and freezing cold that Xi encountered when he arrived to live in a cave house. Xi was utterly miserable. He ran away, he was caught, he was sent back. But he had one object with him that was his only consolation: the Chinese translation of Goethe’s Faust, the story of the savant who made a bargain with the devil. Xi read it again and again, learning it by heart. This edition of Faust was the only one available in Chinese, translated by Guo Moruo, an eminent writer and high party official, who was himself persecuted by the Red Guards in 1966, escaping worse harm by abject self-criticism, the repudiation of all his previous books, and the denunciation of all his former friends and colleagues as counterrevolutionaries. With that he survived, but both of his sons were tortured until they committed suicide. Aided by a fawning celebration of the genius of Mao’s talentless wife Jiang Qing and by his utter loyalty to Mao, Guo Moruo was eventually readmitted to high party status, complete with his own luxurious manor house, a staff of servants, a state limousine, and a large collection of antique furniture — a Faustian bargain indeed.

    There is no evidence that Guo Moruo himself influenced Xi Jinping, though Guo found his salvation in total devotion to Mao in spite of everything done to him, just as Xi Jinping would do. But Faust certainly left a very deep impression. When Xi said as much upon meeting Chancellor Angela Merkel, she probably dismissed it as a bit of cultural flattery, but Xi’s devotion was given physical form in 2015, when his own power was fully consolidated: the Shanghai International Studies University was granted funds to translate the immensity of the complete works of Johann Wolfgang von Goethe: novels, poems, plays, memoirs, letters, diaries, treatises, and shorter writings of literary and aesthetic criticism — an ocean of words. The entirety of Goethe is set to appear in Chinese in some fifty volumes containing some thirty million Chinese characters; eighty scholars from six universities and two Chinese state academies are hard at work. And this same Goethean has established concentration camps — one is reminded that Goethe’s oak tree stood on the grounds of Buchenwald.

    Dealing with Xi’s China as a geopolitical threat is proving to be less difficult than anticipated, because its aggressive stance has evoked vigorous responses from allies declared and undeclared, from Australia to India. Dealing with China as a technological competitor is also emerging as a less fearful prospect, because the reverse side of the immense amount of technology theft by China in the last two decades is the weakness of basic science in China, which leaves it bereft when the flow of technology is abruptly cut off by American security measures. But dealing with Xi and his Faustian reversion to a Maoist interpretation of Party rule will be far more difficult, especially if the destruction of Hong Kong’s liberties presages a heightened threat to Taiwan, the one place in the world that the United States must defend by statute. Only one thing can avert greater dangers: China’s reversion to the Peaceful Rise policy of 2004, which was very successful while it lasted. We may yet see this, but only after Xi Jinping’s fall, which we cannot ourselves bring about, because the fate of China lies in Chinese hands. 

    The Review Years

    “You ask me how Commerce began… One day, all of a sudden, Valéry said: ‘Why couldn’t we continue our meetings by publishing our discussions in a review? As a title I suggest Commerce, the commerce of ideas.’ That idea delighted all of us there. The editors (Larbaud, Valéry, and Fargue) were appointed immediately. Adrienne Monnier and I took responsibility for putting everything in motion and we started straight away.” These are the words of Marguerite Caetani, describing events in 1924. She was born Marguerite Gilbert Chapin, an American who had arrived in Europe in 1902 and married Roffredo Caetani, Prince of Bassiano. In Paris they called her “the Princess,” though she signed herself Marguerite Caetani.

    Of the three editors, Paul Valéry was the authority, Léon-Paul Fargue a writer admired above all by other writers, and Valery Larbaud a great literary go-between, a mercurial ferryman whenever one spoke in a certain way about literature (as Italo Svevo and James Joyce could testify). Neither Marguerite Caetani, who financed Commerce, nor the three editors had anything to proclaim. There was never any question of drawing up a program for the review, nor was it ever raised in conversation with friends, however distant or occasional.

    Before the first issue had appeared, Valéry wrote to Larbaud: 

    I’m in receipt in Rome of your esteemed letter of the 12th, which takes me back somewhat to the atmosphere of our lunches, infrequent though friendly. The fruit of this union is Commerce … The tedious thing is writing. … I would have been very pleased if we had founded a review where there was no need to write. You realize what an advantage! Reader, author, everyone happy.

    Without pressing so far into the perfection of the genre, it would have been possible to fulfill what I had thought up when I was 23 and had a phobia about the penholder.

    I wanted to do a review of 2 to 4 pages. 

    Title: The Essential. 

    And nothing more than ideas, in 2 or 3 lines. None other than the lean… 

    It could be signed with initials, for economy

    Marguerite Caetani’s name never appeared in the twenty-eight issues of Commerce. The review’s symbol was a pair of old Roman scales, an image of which appeared opposite the frontispiece of the first issue, beneath the indication of the print run (1,600 copies). Recognizing the proper weight: this was the essential premise of the review. Everything not in this balanced spirit was rejected.

    It has to be remembered that reviews, properly speaking, were those that had spines (not the same, therefore, as general periodicals such as The New York Review of Books, The New Yorker, or the Times Literary Supplement). They are now largely a matter of the past, since literary reviews are among the considerable number of cultural forms that have gradually disappeared over the last fifty years. Their golden age, it is now clear, was between the two world wars, with notable early examples in the years straddling 1900 (La Revue Blanche, The Yellow Book, Die Insel).

    Marguerite Caetani was too elegant not to shun like the plague any semblance of a literary hostess. She was a Guermantes, not a Verdurin. This is also why she has generally escaped the attention of many rough and rapacious academics who continue to fill their mouths with “modernism” and “avant-garde.” Marguerite Caetani has not been detected by their narrow radar. Perhaps this is why little of importance has been written about her. All the more conspicuous, therefore, is the magnificent portrait of Marguerite (or Margherita, as Croce calls her) that Elena Croce left us in her memoir Due città, or Two Cities.

    It is a portrait of Marguerite Caetani’s years in Italy, when she ran the journal Botteghe Oscure between 1948 and 1960 in Rome, a journal much vaunted by Anglo-American expatriates of the time, an excellent review, but which gives the sense of a disaster that has already struck — and can only be read as a colonial version of Commerce. To see this, it is enough to put a copy of Commerce next to one of Botteghe Oscure. The comparison is entirely unfavorable to the latter: poor paper quality, less appealing format and page layout, too many contributors (this was the review’s main defect, which risked straying into the realms of well-meaning amateurishness). And yet, as the Italian critic Pietro Citati observed in an interview, “Botteghe Oscure was the finest Italian literary review of that time, infinitely finer than Politecnico, Paragone, etc. etc., which are much better known.”

    The writer and poet Georges Limbour wrote an ode to the index of Commerce, which began with “Artaud” and ended with “Zen.” This was the peculiar wonder of Commerce: almost all the names there resonate, they still have something to say. Or at least they stir curiosity. The same cannot be said about Botteghe Oscure, where one runs through the names in certain parts of the index as in a telephone directory (there were over seven hundred published writers in five languages). Midway through Botteghe Oscure’s lifespan the golden age of reviews came to an end, and no one realized that it had happened. The very idea of the literary review — of the little magazine, as Americans used to call it — had come apart. And Botteghe Oscure already seemed more like a weekly almanac than a review.

    “Regal” was the word used by Elena Croce — who was generally austere in her use of adjectives — to describe Marguerite Caetani, explaining that “Margherita had alone occupied her almost sovereign role,” along with Bernard Berenson, the only possible king in the intellectual geography of an Italy that was now remote and almost undecipherable in the years immediately after World War II. She was bound to him by a friendship that was “almost the emblem of discordant harmony.” They were well practiced at exchanging friendly barbs. Berenson said of her: “She is always looking for a new art more ugly than the one before,” touching a rather sensitive point about his friend, who lived always “in wait for a completely new ‘new.’”

    Berenson, a Lithuanian Jew who had emigrated to America and masterfully integrated into the most Waspish part of Boston society, said of himself: “I have spent too much time and money making myself a gentleman” — and he had no intention whatsoever of giving up what he had achieved. Marguerite Caetani, by contrast, had grown up in Boston and her social rise had required no effort. During the years of Botteghe Oscure, when a friend observed that the title of the review might be easily misinterpreted, since botteghe oscure, which means “dark shops” and refers to the ancient Roman market that once occupied the street on which the review was located, brought to mind, for Italians, the headquarters of the Communist Party far more than the address of Palazzo Caetani, her reply was: “But we have lived here for a thousand years.”

    Although he rigorously avoided all contact with the world each morning to write his Cahiers between five (or four) and seven, when faint noises began to be heard around the house, Paul Valéry was nevertheless a consummate literary strategist and knew perfectly well that associating his name with a review was a delicate operation and bore heavy consequences. This is clearly set out in the letter that Valéry wrote to Marguerite Caetani in April 1924, two months before the first issue of Commerce went into print:

    If I had been able to take part in the sittings of the Secret Committee, I would have asked that our program should be stated and all provisions taken to absolutely distinguish this publication from all possible reviews. For there are now so many reviews that there is certainly no need to add another.

    It would be essential to acquire an authority, occupying in the World of Letters, or in the confines of this horrible world, a singular strategic position — that of people of absolutely free spirit, who no longer have a need to make themselves known and fire revolver shots at streetlamps, and who moreover are not connected to some kind of system… I think we will have time to talk about it again on my return, in a few weeks. I will do my best to give you a Letter, on Letters, since it is your wish, even if I don’t know where to find the time to write, considering the engagements (that I don’t keep), the inconveniences, etc. 

     I don’t think it necessary to announce the review in the press with a great fuss and to describe it in advance. I take the view that it is pointless mentioning the name of the “editors” on the front cover… It would be my view that we shouldn’t have the air of addressing the public, and as if standing on a theater stage, but that we should appear as if among ourselves, with the public authorized to watch from the window… But all of this would need to be discussed orally and in person — I kiss your hand, dear Princess, charging you with all my Roman sentiments for the Prince — and to remember me to Fargue, Larbaud, Léger — if you happen to see them these days.

    We open the first issue of Commerce and read the table of contents: Valéry, Lettre; Fargue, Épaisseurs (Densities); Larbaud, Ce vice impuni, la lecture (Reading, This Unpunished Vice); Saint-John Perse, L’amitié du Prince (Friendship of the Prince); Joyce, Ulysses (Fragments). The first three pieces are by the editors, and the fourth is by the review’s poet in residence (and constant advisor). The fifth is the only opening by the “Secret Committee” to the outside world — but it is Joyce’s Ulysses, and might be enough in itself.

    Let us now look at what comes first in this issue, the place usually reserved for programs, manifestos, declarations of intent — the position for all that might be most public and declared far and wide. Here, however, we find the most intimate, private, and secret of forms: a letter. It corresponds to that Letter, on Letters previously announced by Valéry to Caetani, but it appears without the clarification “on Letters.” Why? And to whom is the letter addressed? To Caetani herself, one might think, seeing that it was she who had asked for the letter. But we find this text reappearing three years later, now with the title “Letter to a Friend,” in an enlarged edition of Valéry’s Monsieur Teste. So it was addressed to Monsieur Teste, the totemic forebear, emblem, and cipher for Valéry himself. And Monsieur Teste was the model — the only one, by definition — for an extreme solipsism. Writing a letter to him meant creating a dialogue inside Valéry’s own head. It was a task for his double. From all of this it is already apparent that the opening Lettre in Commerce was an example of mental dramaturgy, a literary genre invented and practiced by only one author: Valéry himself, on the basis of Mallarmé.

    At the same time, Valéry’s Lettre, via a contorted and specious route, is also — when we read it in the pages of the review — the equivalent of a programmatic declaration, directed at the “World of Letters,” at that “horrible world” within whose confines Commerce ought to occupy a “singular strategic position.” But how is this to be construed? The Lettre is presented as though written on a train, on a long night journey toward Paris, “this paradise of the word.” The noise of the rails, rods, and pistons mingles with an incessant mental activity. It is the “metal that forges the journey in the darkness” — and it follows that “the brain, overexcited, oppressed by cruel treatment, necessarily generates, of itself and without knowing it, a modern literature…” This serves to keep at a distance all the avant-gardisms that fire revolver shots at streetlamps.

    But there is a more important target: as the train gradually approaches Paris, the city where “verbal life is more powerful, more different, more active and capricious than everywhere else,” the “harsh murmur of the train” seems to turn into “the buzzing of a beehive.” It is not just the World of Letters that comes into view, but the whole “western bazaar for the trading of phantoms.” And at last Valéry’s real target appears: “the activity that is called intellectual.” At this point he embarks on a game somewhere between persiflage and sarcasm. Valéry claims, in all seriousness, not to know the meaning of the word intellectual as an adjective. And he explains to his interlocutor: “You know that I am a mind of the darkest kind.”

    Yet a sudden clarity emerges when he refers to the noun intellectuals. They are the followers of opinion: “Motionless people who cause great movement in the world. Or very lively people, who with the quick actions of their hands and their mouths demonstrate imperceptible powers and objects that were by their nature invisible […] This system of strange acts, of productions and of wonders, had the all-powerful and empty reality of a game of cards.” A diabolical hallucination was gradually taking form, wherein the author of the letter recognized a feeling of being captured as in a web. But at the same time he was implying that one could never be sufficiently far away and separate from it. It was on this intent, suitably disguised, that Commerce was founded.

    Valéry’s inaugural Lettre in Commerce could be regarded as an apologue, a kind of fable to signify that certain pages, having appeared in a review on a certain day and in a certain company, always have a different significance from that which they assume inside a book. Anyone today, on reading the Lettre, which then became Lettre d’un ami in the final version of Monsieur Teste, would find it difficult to recognize the highly strategic intention that this letter had toward the surrounding world when it appeared one day in the summer of 1924 at the start of the inaugural issue of Commerce. Reviews serve this purpose, too: to multiply and complicate meanings.

    The moment, for a review, is an essential factor. While Paul Valéry was making his night journey toward Paris, André Breton was writing the Manifeste du surréalisme. Commerce made its debut in August 1924, whereas the Manifeste would appear in October of that year — and the first issue of La Révolution Surréaliste in December. The covers of the two reviews seem to belong to incompatible worlds: Commerce in pale beige, its lapidary title with no explanations, accompanied only by the date and place of printing; La Révolution Surréaliste in bright orange, with three group photos, the members of the “surrealist headquarters” photographed by Man Ray, as in a school photo, and then the names of a throng of contributors in the summary, and in the middle a striking sentence: “A new declaration of human rights has to be reached,” about which there was no corresponding piece in the first issue. Benjamin Péret, the Surrealist poet who was one of the two editors, had wanted the graphics to resemble a popular science review called La Nature. The printer of La Révolution Surréaliste generally produced Catholic publications.

    Two faraway worlds, one might say, with little in common. And yet, starting from Cahier II, or the second issue, of Commerce, important texts by the surrealists are included among not many other pieces: Louis Aragon, Une vague de rêves (A Wave of Dreams), in Cahier II, which is also an account of the birth of surrealism, and André Breton, Introduction au discours sur le peu de réalité (Introduction to the Discourse on the Inadequacy of Reality) and Nadja, in Cahiers III and XIII — and also writing by such reprobates as Antonin Artaud, Fragments d’un journal d’enfer (Fragments of a Journal of Hell), in Cahier VII, and divergent figures such as the surrealist and ‘pataphysical writer René Daumal, Poèmes, in Cahier XXIV. Viewed retrospectively, one might say that these texts are filtered through a close mesh, as well as being among the few still fresh works in the plethora of largely vacuous writings of the group. Surrealism was a spice added to the Commerce marketplace, purified of dross and of any fanciful ambition to shoot at the streetlamps.

    What happened in 1924? According to Aragon, who was its visionary and astute chronicler, that year was swept away by “a wave of dreams” (the title of his long essay in Commerce): “Under this issue [1924] that holds a net and drags a haul of sunfish behind it, under this issue adorned with disasters, strange stars in its hair, the contagion of the dream is spread across the districts and the countryside.” This explains the fact that La Révolution Surréaliste, making its debut at the same time that Aragon’s piece was published in Commerce, would aim everything, even in the most childish manner, at this word: rêve, rêve, rêve — dream, dream, dream — as though it were heightened through repetition. 

    But Aragon himself was a shrewd cultural politician and he immediately compiled a list of “Presidents of the Dream Republic” where — alongside the writer Raymond Roussel, the anarchist assassin Germaine Berton, Picasso, De Chirico, and Freud — were the names of Léon-Paul Fargue and Saint-John Perse, founding members of the “Secret Committee” of Commerce. Even though they were surrealists, these literati did not forget the old ways. There was a subcutaneous circulation from the start between Commerce and La Révolution Surréaliste at the very moment of their launch. The proof? The phrase on “human rights,” which dominated the cover of the surrealist review, was taken from Aragon’s Vague de rêves, which appeared in the autumn of the same year in Commerce, and only there is a hint of an explanation to be found. “All the hope that is still left in this desperate universe will direct its last delirious looks toward our pathetic stall: ‘It’s a matter of reaching a new declaration of human rights.’” The road toward that “new declaration” must have been long indeed, since nothing more was heard about it.

    Fifteen surrealists met on two evenings, in January 1928, to conduct an “Inquiry on Sexuality,” the results of which would appear two months later, with the same title and in the form of a multiple conversation, in issue number 11 of the group’s review La Révolution Surréaliste. The conversation was begun by Breton with a question: “A man and a woman make love. To what extent does the man take into account the pleasure of the woman? Tanguy?” An old question. Puzzled responses. Yves Tanguy: “To very little extent.” Others intervene. Breton steers and comments: “Naville considers therefore that materially the pleasure of the woman and that of the man, in the event of these happening simultaneously, might be translated into the emission of confused and indiscernible seminal fluids?” Pierre Naville confirms. Breton replies: “It’s impossible to verify it, unless one entertains highly questionable verbal relations with a woman.”

    Nothing more is explained: we will never know what these “highly questionable verbal relations” are. They then move on to homosexuality (here called pederasty), about which Raymond Queneau ventures to say that he has “no moral objection whatsoever.” Protests. Pierre Unik declares: “From the physical point of view, pederasty disgusts me in the same way as excrement and, from the moral point of view, I condemn it.” Queneau comments that he has observed “among surrealists a particular prejudice against pederasty.” At this point Breton has to intervene, to put things in place: “I accuse pederasts of proposing a mental and moral deficit to human tolerance that tends to form systematically and paralyze all the enterprises that I respect. I make a few exceptions, a separate category in favor of Sade and one, more surprising so far as I’m concerned, in favor of Jean Lorrain.” (The latter was an openly gay writer and dandy of an earlier generation.) Doubts over these exceptions: “Then why not priests?” Breton explains: “Priests are the people most opposed to this moral freedom.”

    And they move on, from point to point. Jacques Prévert says he would not be interested in making love in a church “due to the bells.” Péret, always extreme, says: “This is my only thought and I have a great urge to do it.” Breton agrees and explains: “I would like it to include all possible refinements.” Péret then reveals how he would intend to behave: “On that occasion I would like to desecrate the hosts and, if possible, defecate into the chalice.” But on this Breton makes no comment. They move on. It is established that “no one is interested in bestiality.” Breton takes over, asking: “Would it be pleasant or unpleasant for you to make love with a woman who doesn’t speak French?” Péret and Prévert have no objection at all. But Breton exclaims: “Unbearable. I have a horror of foreign languages.”

    Almost a century later, one cannot avoid noting the unfortunate lyrical affectation of all the surrealists in what they were writing then, as if a dull screen were preventing them from recognizing the childishness of their excessive imagery, as well as their wild aspirations — a kindergarten bordering on a charnel house, from which they had only just emerged, while another was being prepared.

    Still, all that — and more — on the first evening. We could easily continue with the second, which followed four days later. But the point would be the same: certain things are discovered only if a journal is established.

    T. S. Eliot, who was Marguerite Caetani’s cousin, launched The Criterion in 1922 in a situation that was the opposite of what was happening in Paris. For him, in London, it was not a question of too many literary reviews but of too few, especially those with a cosmopolitan character.

    The first person he turned to — not surprisingly — was Valery Larbaud: “I am initiating a new quarterly review, and am writing in the hope of enlisting your support. It will be small and modest in form but I think that what contents it has will be the best in London […] There is, in fact, as you very well know, no literary periodical here of cosmopolitan tendencies and international standards.” The first piece Eliot requested was Larbaud’s lecture on Joyce. The next day Eliot wrote to Hermann Hesse asking him for “one or two parts of Blick ins Chaos.” And he added: “You don’t know me: I am a contributor to the ‘Times Literary Supplement’ ” as well as “English correspondent […] for the ‘Nouvelle Revue Française’; lastly, the author of several volumes of verse and a volume of essays.”

    The Criterion also had a patroness, Lady Rothermere, of whom Ezra Pound disapproved (as moreover he scorned everything else about England): “Do remember that I know nothing whatever about Lady Rothermere, save that she, by her name, appears to have married into a family which is not interested in good literature. I am interested in civilization, but I can’t see that England has anything to do with any future civilization.” What a shame that in the same letter Pound indicated that the “real voice of England” was the Morning Post, a newspaper that attributed all kinds of evils to Jewish conspiracies.

    When, in January 1926, The Criterion became The New Criterion, moving from the administration of Lady Rothermere to that of the publishing house Faber & Gwyer, Eliot felt he had to show his cards and wrote an essay that began with these words: “The existence of a literary review requires more than a word of justification.” It is rarely wise to ignore Disraeli’s maxim “never explain” — nor was it in this case. Like a diligent schoolboy, Eliot set out along the path of good sense. The number of contributors should never be too many or too few. Another error to avoid would be “including too much material and representing too many interests, which are not strictly literary, or on the other hand by sticking too closely to a narrow conception of literature.” There should not be a “program” but rather a “tendency.” Authors should follow that tendency, but they should not be in too much agreement either.

    So far, it is hard to disagree. But cracks suddenly start to appear in the sensible and fair-minded approach. One notices an oblique swipe at Commerce, without naming it, which Eliot would condemn as a “miscellaneous review,” whereas the review that Eliot has in mind “should be an organ of documentation. That is to say, the bound volumes of a decade should represent the development of the keenest sensibility and the clearest thought of ten years.” At this point it becomes ever clearer that Eliot is no longer playing the role of an unbiased director but, on the contrary, is anxious to show exactly where he stands — above all that there are those whom he wishes to exclude from his review: “I believe that the modern tendency is toward something which, for want of a better name, we may call classicism.” A tendency that, under that awkward and inappropriate name, was certainly not about modernity but was about Eliot at that moment in his life.

    And he did not stop there. He had to declare who was to be followed. With a sudden candor, Eliot draws up two lists, of good and bad. The bad ones are humanitarian liberals: H. G. Wells, George Bernard Shaw, Bertrand Russell. Fairly predictable. But who are the good ones? We discover that the first two approved books are Réflexions sur la violence by Georges Sorel and L’avenir de l’intelligence (The Future of Intelligence) by Charles Maurras, and they are followed approvingly by Julien Benda, T. E. Hulme, and Irving Babbitt. The striking name is Maurras, since he was synonymous with Action Française, the political movement of the extreme right in France. It was a very peculiar version of “classicism,” then, that Eliot was advocating.

    In the opening lines of Barbarie et poésie, published a few months earlier, Maurras had written: “We have had to add to literary criticism action in the public square. Who is to blame? It was no one’s fault that the barbarian realm was founded outside the Spirit, in the very structure of the City. The Barbarian down below, the Barbarian of the East, our Demos flanked by its two friends, the German and the Jew, made an ignoble yoke weigh heavily on the intelligence of the nation.” As for the Jew, “the right word seems to have been spoken at a famous meeting between Catulle Mendès and Jean Moréas: — To take Heine for a Frenchman! said the Jew, scandalized. — There’s nothing French about him, replied the Greek, delighted. — But, Mendès observed, neither is he German! — The truth…, Moréas began, hesitating somewhat. — The fact is that he’s Jewish, Mendès exclaimed. — I didn’t dare say it to you, replied Moréas.”

    Eliot certainly was not proposing, like Maurras, to “add to literary criticism action in the public square.” But about the Jews he agreed with Maurras. For his part, Valéry, whom Eliot regarded as a “profoundly destructive, even nihilistic mind” (though this did not prevent him from thinking that Valéry was also “the symbol of the poet in the first part of the twentieth century — not Yeats, not Rilke, nor anyone else”), would continue to steer the miscellaneous Commerce without falling into the trap of taking a position. Even “classicism” was not an appropriate formula for him. But until its end in 1939, when taking a position became obligatory, The New Criterion continued to “illustrate, in its limits, the period and its tendencies.”

    One might wonder when and how the numinous and ominous figure of the female surrealist first appeared. A starting point is found on page 17 of the first issue of La Révolution Surréaliste: a series of small square photos of twenty-eight young men, without their names and in alphabetical order. At the center, larger and once again in square format, there is a photo of a woman with no name. Below, in italics, are the words: “Woman is the being who casts the largest shadow and the brightest light in our dreams. Ch. B.,” indicating Charles Baudelaire, first among the prophets.

    Who are the twenty-eight men? The surrealists of the moment, together with their chief patrons: Freud, De Chirico, Picasso. In second place, in sequence, Antonin Artaud “handsome as a wave, likeable as a catastrophe,” according to Simone Kahn, Breton’s wife. And then René Crevel, “the most handsome of the surrealists”; Jean Carrive, the youngest surrealist (sixteen); toward the bottom Man Ray and Alberto Savinio. But who is the woman in the middle, in a police identity photograph, with the sad piercing gaze?

    She is Germaine Berton, described in modern encyclopedias as a “worker, trade unionist, anarchist.” On January 22, 1923 she killed Marius Plateau with a revolver at the headquarters of Action Française, where he was a secretary. He was killed by mistake. The assassin had someone more important in mind, Charles Maurras or Léon Daudet — both political leaders who started out as influential men of letters. During Berton’s trial for murder, Aragon wrote in defense of the accused that it was legitimate “to resort to terroristic methods, in particular to assassination, in order to safeguard, at the risk of losing everything, that which appears — wrongly or rightly — more precious than all else in the world.” Germaine Berton was acquitted, and in 1924 she gave a series of tumultuous anarchist lectures that led once again to her arrest. Not much is known about her life after that, until her suicide in 1942.

    So the female surrealist’s star rose with an aura of blood and death. But there was an alternative image. In the first issue of La Révolution Surréaliste there was also a magnificent photo on page 4, taken by Man Ray, of the naked headless torso of Lee Miller shaded with zebra stripes. The female surrealist would comprise the alarming gaze of Germaine Berton and the recognizable torso of Lee Miller.

    Breton’s Manifeste du surréalisme came off the press on October 15, 1924, and three days later a jointly written pamphlet titled Un cadavre appeared with a text by Breton. What had happened in the meantime? The funeral of Anatole France. Janet Flanner, the most effective and chic of Paris news reporters with her pieces for The New Yorker, noted: “I recall that at Anatole France’s funeral procession, the first of these official honorary ceremonies that I had ever seen, the cortege was followed through the streets by a group of disrespectful Surrealists, who despised his popularity and his literary style, and who shouted insults to his memory (‘Un cadavre littéraire!’) in unison every step of the way. This was possibly the first of their sadistic street manifestations and was considered a scandal, since Paris has so long been noted as a great appreciator of its intellectual figures.”

    Breton contributed to the surrealist pamphlet a short piece of which he must have been proud, since it reappeared a decade later in his book Point du jour (Break of Day), where he wrote that the year 1924 could be considered fortunate because it had seen the deaths of Pierre Loti, Maurice Barrès, and Anatole France: “the idiot, the traitor, and the policeman.” And he did not stop there: “In France a little human servility has gone. Let there be celebration on the day when cunning, traditionalism, patriotism, opportunism, skepticism, realism and heartlessness are buried! Let us remember that the lowest comedians of our time have found a stooge in Anatole France and we won’t forgive him for having adorned his smiling inertia with the colors of the Revolution. To close up his corpse, why not empty — if anyone wishes — one of those shacks on the quais with those old books ‘he loved so much’ and throw the whole lot in the Seine? Once dead, this man ought not to produce any more dust.” Such a miserable point had perhaps never before been reached in the highly complex history of the avant-garde.

    In contrast with the commotion during the funeral of Anatole France, five years later, as the decade and a whole way of life was drawing to a close, there was silence at the funeral of Hugo von Hofmannsthal, perhaps the only writer who could be rightly called European, among the many who claimed to be. Rudolf Kayser gave an account of it in Bifur, which sought to vie with Commerce: “We attended the funeral of Hugo von Hofmannsthal. In a small baroque village church, we were there, black and silent before that casket, around which the incense, the music, the Catholicism reigned funereal and heavy. Then we went out into a hot summer day. The dead poet and friend guided us, a small procession of people dressed in black. But along the edges the people were lined up, there were thousands of men, women, children who flowed with us into the cemetery. They knew nothing about him, nothing other than his destiny, nothing other than his name. At the graveside, beside the priests, there were some film cameramen. This was our goodbye.”

    What was there before the word “revolution” inevitably thrust itself into the title of the surrealist journal, which in 1930 became Le Surréalisme au service de la Révolution, and finally insisted on being served? There was Littérature, a monthly review, first issue in March 1919, with unremarkable graphics, title underlined, poetry in italics, prose in roman. Looking back, Breton claimed that the title ought to be construed “antiphrastically, in a spirit of mockery.” After the shock of Dada, recently arrived from Zurich, nothing could be treated with proper respect, especially literature.

    But this was not the case. Indeed, here everything has the air of transition, of a judicious blend of established powers and emerging powers, of notables and new recruits. It is enough to glance through the names on the contents page of the first issue: Gide, Valéry, Fargue, Salmon, Jacob, Reverdy, Cendrars, Paulhan, Aragon, Breton. They are all there, those who would continue on for another twenty years, old men and subversives, neoclassicists and presurrealists. And there is, in the table of contents, a deft game of precedence. Heading them all were Gide and Valéry, who were now established names. Then the others in random order, to Breton, who was already hoping to call the shots. It is bewildering to read the issue from start to finish, skipping nothing. At the start Gide produces fragments of his new Fruits of the Earth, with an epigraph in bold, imperious, that will be cherished by lovers of bonheur, an ideal motto for future Gentils Organisateurs of the Club Méditerranée: “Que l’homme est né pour le bonheur, / Certes toute la nature l’enseigne” (All of nature teaches that man was born for happiness). Then comes Valéry’s poem Cantique des colonnes, “Song of the Columns,” which now sounds fairly vacuous.

    But try looking through the rest of this issue of Littérature and an embarrassing feeling is gradually confirmed: it is as though everything has been sketched out by the same hand — and a hand with no conspicuous talent. Even writers such as Fargue or Cendrars, who could hardly have been confused with others, are flattened, toned down, as though they were wearing a regular uniform. All of them make improvident use of jumbled images and have a shared incapacity to explain what they are talking about. Exactly a century later, little remains of what they wrote in that Littérature. Yet we are still struck by the diplomatic aspect: the group photograph, a fleeting convergence of certain names that would soon vanish from the scene, with a canny game of swapping, adding in and taking out.

    The rule of the good neighbor does not apply just to books in libraries, but also to pieces in literary reviews. Indeed, it could be a criterion for testing their nature or quality. Every issue of a review can be seen as a whole, where different voices intersect and are superimposed within a pre-established landscape, with its hedgerows, avenues, fountains, and wild areas. And over time the physiognomy of the place can radically change, as if in a grotesque game. Littérature, which some of its authors regarded as a rash and ruinous enterprise, proved in the end to be a collection of bland lyrical texts, where the factor of novelty ended up almost stagnant and certainly tedious.

    It was the age of plaquettes, those slim volumes, no more than a hundred pages, sometimes less than fifty, often graphically elegant, printed in few copies, generally numbered, by publishers who did only that (Au Sans Pareil, K, GLM, L’Âge d’Or, among others), a dust wafting around normal books, which were very often published by Gallimard, or by the N.R.F. as it was still called. Writers could be authors of various plaquettes and of no book. There were already collectors of plaquettes and of autographs. Max Jacob was caught diligently copying out several examples of his poetry which were then to be offered as original versions for various expectant collectors. And they were searching above all for grands papiers, rare copies on special paper. That was the final period of a parallel and morganatic publishing trade, on which various antiquarians-of-the-new lived for a long while, in whose shops, in paper wrappings, there was much to explore. Those plaquettes then re-emerged, as if embalmed, in the windows of the great Parisian bookshop and gallery La Hune, well displayed, when month after month, year after year, someone was rediscovered, Artaud or Crevel or Desnos or Vaché or Cravan. It was a long paper-trail that remained in evidence until the end of the 1970s.

    Commerce came to an end in 1932. But its model, especially typographical, continued to spread throughout the 1930s. The format generally square, the title evocative and alone on the frontispiece, the absence of any preface, the names of the editors opposite the frontispiece, the predominance of new texts blended in each issue with something from the past, as well as something exotic or “oriental”: those are the characteristics of Commerce, which reappear in Bifur and Mesures. Both Bifur, which was founded in 1929, and Mesures, which was founded in 1935, would rely, as Commerce did, on foreign writers previously unknown in France, who became a sort of emblem for the review: Gottfried Benn for Bifur, in the first issue, and Constantine Cavafy for Mesures, introduced by Marguerite Yourcenar as “one of the most celebrated poets of modern Greece, and also one of the greatest, as well as the most subtle, and perhaps the most singularly new, and at the same time charged with the richness of the past,” followed immediately, by fortunate coincidence, by a part of Mount Analogue, a novel by René Daumal. And the cosmopolitan nature of the enterprise is declared in the list of “foreign advisors” for Bifur: Bruno Barilli, Gottfried Benn, Ramón Gómez de la Serna, James Joyce, Boris Pilnyak, and William Carlos Williams (only the last of these made a recognizable contribution). The list is varied and outstanding, but it isn’t easy to be cosmopolitan.

    The film critic Nino Frank, the real inventor of the review along with the artist and poet Georges Ribemont-Dessaignes, was thinking about spending some time in Berlin, for sentimental reasons, when news broke of the burning of the Reichstag. It was a good excuse to get the “Paris Journal” to commission a series of articles. But — Frank explained — by the morning of his departure “I had already forgotten the official reason for my journey.” He found that he was the only passenger in the plane. At Tempelhof he was immediately stopped and questioned, abruptly and politely. Berlin seemed to him like a city of people who “passed without looking, except for certain women, still forlorn and nervous,” while he noticed a sound in the background: metal boxes rattled by the SA, who expected donations.

    Before leaving, Frank had one more visit to make, also connected to Bifur. He recalled: “A few years earlier, a respectable and portly, bald-headed man, his eyes shielded by gold-rimmed glasses, rang my doorbell. I am always ill-disposed to such intrusions. We couldn’t understand each other since he spoke only his language, and I spoke everything apart from German. It was Gottfried Benn, with whom I had exchanged letters and with whom, in the absence of anything better, I exchanged firm handshakes.” Frank knew only that Benn was “the only poet in his country who had, toward the start of the 1930s, some density, who published little, and things of a fairly glacial incandescence. Untranslatable, they told me, and the same was said, more or less in the same years, about Boris Pasternak.”

    Invited to his house, Frank found himself “in a poor street where on the door I read that I was about to ring at the office of Doctor Gottfried Benn, a specialist in venereal diseases. A nurse took me to his room, where I met him again, dressed in a long white coat, the man with the gold-rimmed glasses: his manner was friendly and vaguely formal and, between one patient and the next, we had an unusual conversation.” Frank wanted to know something about the state of things in Germany, but Benn spoke about his “poetical itinerary.” He stopped from time to time, “casting a rather cloudy gaze, then began talking again about Dehmel and Hofmannsthal: 

    ‘Pessimisme héroïque’ (heroic pessimism), he said in French, with a heavy accent. Since I took the opportunity immediately to mention once again Hitler and the Reichstag, he shrugged these off with a gesture of slight irritation. Then, seeing my surprise, he hinted that it was a question of leaving things alone, of accepting without censure, of seeing whether these might manage to do better than the others.” But something about the conversation didn’t add up. “With a pirouette unexpected in a person so solemn,” Benn suddenly started talking about gonococcus and treponema. Syphilis, he said, was no great problem, whereas gonorrhea was. Meanwhile “the windows vibrated heavily at the hum of the engines at the nearby airport.”

    Politics was pressing. Hitler had already appeared in Bifur in December 1930, in the form of a misspelling. “On the one hand, the multiplication of bourgeois parties and their collapse, on the other the expansion of the Hittler movement, were the characteristics of the Reichstag elections”: this was reported, at the beginning of the issue, in Franz Carl Weiskopf’s article on the recent German elections, the review’s first intrusion into current affairs. Weiskopf was a member of the Association of Proletarian Writers, and his article was most probably imposed by Pierre G. Lévy, the review’s financial backer, a prosperous bourgeois and follower of modernity who “was leaning ever more toward militant Marxism” (in the words of Ribemont-Dessaignes) — but at the same time, in his indefatigable snobbery, had launched the journal to emulate in some way the Princess of Bassiano who ran Commerce. Others were also chomping at the bit, thirsting for political trouble. A few lines after Weiskopf’s article there appeared twenty-five-year-old Paul Nizan, who described himself as a “philosopher, traveler and communist,” and wrote: “But why should I hide my game? I say simply that there’s a philosophy of the oppressors and a philosophy of the oppressed.” The 1930s were now in the air. There was a shrillness about everything. A great contest was under way to find the worst oppressor, always in furtherance of some oppression suffered. The next issue of Bifur began with the religious scholar and thinker Henry Corbin’s translation of Heidegger’s “What is Metaphysics?” which Alexandre Koyré introduced with these words: “In the philosophical firmament of Germany, the star of M. Heidegger shines with supreme radiance. According to some it is not a star but a new sun that rises and with its light eclipses all its contemporaries.” A whole variety of games was being played all at once.

    There was also the announcement of a Great Game that is still open. “Le Grand Jeu is irremediable; it is played only once. We want to play it at every moment of our life. And moreover ‘he who loses wins.’ Because it’s a question of losing. We want to win. Now, Le Grand Jeu is a game of chance, namely of dexterity, or rather, of ‘grace’: the grace of God and the grace of gestures.” These are the words that Roger Gilbert-Lecomte used to launch the first issue of Le Grand Jeu in the winter of 1928 — words that escaped the Surrealist web. The words “grace of God” were unthinkable elsewhere, and also “grace of gestures.” This was the point that most irritated Breton and Aragon, who in their response to the new journal were temporary reincarnations of Monsieur Homais. In their view, those words transformed the youngsters of Le Grand Jeu (Daumal was twenty, Gilbert-Lecomte twenty-one) from potential allies into certain reprobates. The most serious charge against the new journal was “the constant use of the word ‘God’ further aggravated by the fact that one of the articles states that it is referring to a single God in three persons.” To this charge was added “a blunt remark that declared the preference given to Landru [a French serial killer of women] over Sacco and Vanzetti.”

    Here was the sound of something radically divergent, ready to strike out in a different direction. This was no longer a literary dispute or a clash between avant-garde sects. The new review was interested in identifying a “fundamental experience,” as Daumal would call it, from which everything else had to follow, including Scripture. And of course the review itself. Only three issues of Le Grand Jeu appeared; the journal closed in the autumn of 1930. But from its very first lines one sensed an “air of other planets.” It was a review that was taking its leave from the world of reviews. And in particular, before even being expelled from it, it was distancing itself from the Surrealist atmosphere that now pervaded everything (an everything that broadly coincided with the Sixth Arrondissement). The definitive sign of that separation can be found, perhaps, in two pages by Daumal that appeared in the second issue of the journal, called “Once More on the Books of René Guénon.” One reads there that Guénon, “if he speaks of the Veda, thinks the Veda, he is the Veda.” More than describing Guénon, a French mystic and esotericist who wrote about Hinduism and converted to Islam, those words foretold what Daumal himself would become, as a writer and an interpreter and translator of Sanskrit texts, right to the end.

    Why did the season of the reviews come to an end? Mainly because the irresistible attraction of the new fell into decline, steadily fading and vanishing. “Au fond de l’Inconnu pour trouver du nouveau,” to the depths of the Unknown to find the new: it is always a line or a phrase of Baudelaire that signals the essential features of modernity. The new that Marguerite Caetani was seeking, and that she found at the start of Commerce, was not the same new that she sought and failed to find twenty-five years later when Botteghe Oscure began. Everyone continued to pose as new, but this was now only a sign of social recognition. And even when the new was really new, it was not always what it claimed to be. Around a century later, it is striking how all the avant-gardes were weighed down by what was already old. Everything was sustained by an amalgam of art and snobbery. But the formula gradually fell apart. Everything proceeded “carrying its own corpse on its back,” in the words of Roger Gilbert-Lecomte, the poet who was one of the founders of Le Grand Jeu, and the most lucid of the mutants. They had to “change level,” said Daumal, the first who managed to do so, and devoted himself to steering toward Mount Analogue, the inaccessible summit of his spiritualist novel of that name, whose subtitle is A Novel of non-Euclidean and Authentically Symbolic Adventures in Mountaineering. Having reached that empyrean, there was no more talk of literary reviews — and no need for them.

    Obviously during those years, between 1920 and 1940, some notable reviews flourished in other countries, too — in Germany, England, Italy, the United States. But in Paris there was a concentration of journals that found no equivalent elsewhere. It all happened inside the boundary of the Sixth, with occasional forays into the Seventh and the Fifth. It was said that the editors of Bifur had only to spend each day at Café de Flore and Les Deux Magots to fill the table of contents of their journal. The Paris reviews had spawned other significant offspring during the 1930s. Each was a variant of the classic format of the Parisian review: anthropological (in the style of Marcel Mauss) with Documents, edited by Georges Bataille; militant-delirious with Acéphale; smugly modernist with Minotaure (based in Geneva with Albert Skira, though still Parisian); the progeny of Commerce and Mesures. But the concept and the intent of the review remained unquestionable: it was created by a few and for a few, yet its ambition was absolute and limitless. This aspiration was gradually lost, until it more or less disappeared after 1945. A common thread was no longer there. Literature was preparing to become what it would be in the new millennium: a matter for individuals, stubbornly separate and solitary.

    In the first issue of Art and Literature, which described itself as an “international review,” and indeed was, there appeared, in March 1964, a piece by Cyril Connolly called “Fifty Years of Little Magazines,” which reads like an epicedium, a dirge, for literary reviews:

    Little magazines are the pollinators of works of art: literary movements and eventually literature itself could not exist without them. Most of the poetry of Yeats, Eliot, Pound and Auden appeared in magazines, as did The Portrait of the Artist and Ulysses, Finnegans Wake, and nearly all of Hemingway’s short stories. A good magazine brings writers together, even the most isolated, and sets them influencing their time and, when that time is past, devotes a special number to them as a funeral.

    Little magazines are of two kinds, dynamic and eclectic. Some flourish on what they put in, others by whom they keep out. Dynamic magazines have a shorter life and it is around them that glamour and nostalgia crystallize. If they go on too long they will become eclectic, although the reverse process is very unusual. Eclectic magazines are also of their time, but they cannot ignore the past nor resist good writing from opposing camps. The dynamic editor runs his magazine like a commando course where picked men are trained to assault the enemy position; the eclectic editor is like a hotel proprietor whose rooms fill up every month with a different clique.

     To give some examples: The Yellow Book was eclectic, The Savoy dynamic, The Little Review dynamic, The Dial eclectic, Transition dynamic, Life and Letters eclectic (also The Criterion and The London Mercury), Les Soirées de Paris dynamic, La Nouvelle Revue Française eclectic, […] Verve eclectic, Minotaure dynamic, etc. 

     An eclectic editor feels he has a duty to preserve certain values, to reassess famous writers, disinter others. A truly dynamic editor will completely ignore the past: his magazine will be short lived, his authors violent and obscure. The eclectic will be in constant danger of becoming complacent and middlebrow: he lasts longer and pays better. Most quarterlies are eclectic: they have so many pages and are less agitated by the time-clock.

    There is very little to add, almost sixty years later, except that it has now become unlikely that there could exist a magazine that would even publish a eulogy such as this, which was after all solidly based on facts, since Connolly himself had edited Horizon between 1939 and 1949, which is to say, during the closing years of this short history that has the advantage of a clearly defined beginning and an end, like certain short stories by Nathaniel Hawthorne.

    Chorus of the Years

    Why won’t you let me be glory, standing
    there in the mountainous half-bright shadow,
    fallen step-by-step down the staircase where
    a bad smell, urine and something else,
    unarguably an ultimate flaw,
    good to ignore years before but now not,
    not with you there above me, looking down,
    hardly clear, hard silent, except for cricks
    on the landing I strain to count, losing
    each, for what, for glory and motion,
    which can’t be claimed, neither can—I know it,
    this frowsiness we ascend by, descend
    fast to register but not recognise,
    being that it is a malevolent malice
    I left behind in brackish St Thomas,
    that ash earth place, where dark glares in vials
    and parchment with names are put in shoes to
    turn minds spider (it happened to mom),
    where speartips of cane-flags pointed at hearts
    set whole hills in tears, and rage, Jesus, rage
    evening after evening drags—let me stop
    now, halfway up the stairs; the arc’s broken
    like an hiatus at fresh water gapes
    back at the chorus of early days
    when, of course, you were the only singer,
    lifted up, granted, like fire in coal
    broken through, at last, one black ice nugget.

    Crowns

    i.m. Donald Rodney (18 May 1961 – 4 March 1998) 

    Emblems of countless martyrs
    devoured by the Atlantic,
    who remembers that slavery was monarchical,

    that historical atrocity
    came directly from the divine head,
    that gravity cannot be numbered.

    Do not seek to be venerated
    or to win the appraisal of civic awe,
    like for instance Basquiat’s crowns.

    Estimates have been made – “about 15 million” –
    but you didn’t allow your body
    to conform to that illness;

    you transformed that. But into what?
    Emblem of power and of savage mockery?
    Vehemence yet no vengeance? And yet,

    ever at the fulcrum
    like that ocean, bleak plain ink which echoes
    the aftermath of your rage

    which achieved the most difficult
    grace in the election of urgency (which is grace).
    Turbulent saccharum officinarum: you transformed all that.

    Obsessive sketching of that.
    The immense, miserable aftermath of that.
    Concentric abyss that your crowns

    turning wheels within time
    inexhaustible after a splash opens
    the death clinic chasm, turning

    the aftermath of surgery and slavery,
    growing irrepressibly without end, blood’s
    real provenance of what survives. And what survives

    is diaspora. (Braced, sotto voce, in this
    parenthesis, is your perpetual rage.)
    All else constitutes a lived fable.

    Zungguzungguguzungguzeng

    All me sparks fly all night;
    all my mouth axle bright, wheel
    the true guillotine serpents’ fleck
    amber sweat off my waistline,

    sibilant as touch-me-nots’
    shuttered leaves rattling Death
    in the Arena. Honey Blight
    and Armageddon. I am Thorn

    Tongue, bare sprite-child nerved
    against neon slush and ants trap,
    I squeal, bitten, “Mother O mother…
    come!” No one but echo and ice.

    Day fevers dusk a Midas
    wisp. Torched corona. I am adagio,
    brisk, cool and deadly onstage;
    my visible black flares yellow,

    speckled lava. All me manna
    chrome, stigmata’s tingling
    rush turns the purging
    cassia spokes ripe, ripe music.

    After Covid

    Paleontologists disagree about whether dinosaurs were thriving or had already entered a long decline when an extinction event finished them off sixty-six million years ago. Depending on who is right, the asteroid that struck Earth either radically changed the direction of evolution or merely accelerated an established trend. Disasters that target the currently dominant species invite similarly divergent interpretations. Their capacity to jolt us out of our complacency is not in doubt. But in so doing, do they truly redirect the course of human history, or do they merely act as catalysts of ongoing change? Covid19 is just the latest in a long series of crises that have raised this perennial question.

    And how it has been raised! Since the pandemic began, journalists, pundits, scholars, and pundit-scholars have spoken as if the pandemic will itself periodize history into the “Before Time” and the new world that we have entered. They have fallen over each other predicting all manner of dramatic change. But what kind of change, exactly? A big divide separates the realists and the continuationists from the aspirationists and the disruptionists. The former prefer to view the coronavirus crisis as an amplifier of present shifts and an enhancer of familiar structures. The latter consider it a transformative force, a crisis that is an opportunity, a source of novel remedies for assorted societal ills thought to be in urgent need of correction.

    The continuationist position has much to commend it. After all, a great many of the crises highlighted by the pandemic were already underway. Nationalism and anti-globalist sentiment were on the rise. International indices of freedom were declining. Digital tracking and surveillance had become ever more invasive. Economic inequality was already unprecedented. Corporate debt had already reached record highs, and central banks had already begun to drive more of the economy. Tensions between the United States and China were already mounting. Iran had long been in trouble. China’s strategic ambitions were already obvious, and its growth had already begun to slow, as had India’s. Oil prices were already too low to sustain the bloated budgets of the petrostates. The European establishment was already eager to go green even before the EU’s Covid19-recovery package unleashed a torrent of funding for salient projects. Online shopping had long been eating into retail’s market share, and remote instruction and telecommuting had been expanding for many years. Millennials had already been dealt lousy cards by the Great Recession and austerity. African-Americans led shorter and unhealthier lives even before they succumbed in disproportionate numbers to the coronavirus. And even the “novel coronavirus” is not as novel as it has been made to sound: multiple outbreaks of SARS and MERS have been recorded since 2002.

    In all these and many other respects, the crisis has served as an accelerator and an amplifier. Sometimes the push was felt to be sudden and hard: the head-over-heels transition to remote work and teaching is a prime example. But even that apparent rupture was firmly rooted in technological shifts that had long prepared the ground. This kind of historical acceleration has a long pedigree. The World Wars and the Great Depression spawned unprecedented mass mobilization (for war and revolution) and economic shocks. Taxes soared, the right to vote spread, colonial empires trembled, and welfare states bloomed. Capitalism was temporarily tamed, suspended, and sometimes even abolished. Yet none of this came out of nowhere: well before 1914, there had already been pensions, progressive taxation, labor unions, public schools, suffragists and suffragettes, and independence movements. What these crises did was give an enormous boost to initiatives that were already in progress.

    The dramatic empowerment of the masses was rooted in the modernizing institutional and economic transformations of the previous century or two. Even purposely radical communist regimes built on nineteenth-century ideas and embraced generic schemes such as industrialization. Genuine detours from the modernizing script — such as the Khmer Rouge’s murderous evacuations of urban residents to the countryside — remained exceedingly rare and unsuccessful outliers.

    Nor were historical pandemics genuine game-changers. It is hard to imagine disasters more disruptive than the Black Death of the late Middle Ages and the pandemics of smallpox, measles, and influenza that ravaged the Americas after European colonizers introduced these pathogens after 1492. Yet even the medieval plague frequently intensified earlier trends, from urbanization and the erosion of serfdom to challenges to Catholic unity and papal supremacy. In the New World, the decimation of the indigenous population greatly assisted the Spanish conquests, but even that process was ultimately a mere acceleration, however monstrous in scale and style. The ultimate outcome had hardly been in doubt: witness the wide and rapidly expanding disparities between the fiscal-military states of Europe and the largely Copper Age societies of the Americas, the fissions that had already opened within the most powerful American empires of the day, and the conquistadors’ zeal to out-colonize their fellow European competitors.

    Nothing quite as dramatic has happened since, even as epidemics remained common. When bubonic plague intensified one more time in seventeenth-century Europe, the most dynamic economies — most notably Britain and the Netherlands — weathered it quite well. In the nineteenth century, massive outbreaks of cholera and yellow fever famously raised support for ambitious public health measures but cannot rightly be viewed as their root cause. After all, the counterfactual of ever richer and more knowledgeable societies persistently failing to invest in sanitation is hardly a plausible one. However much repeated health scares shaped the pace and scale of intervention, it was economic growth and science that made it possible in the first place.

    A century ago, the Spanish Flu was as global in its reach as Covid19 is today, but it turned out to be much more lethal. It targeted not only the elderly but also infants and, most crucially, people in their twenties and thirties — workers in the prime of life who often had just started a family and who left behind spouses and small children. Vast numbers of people died: perhaps forty million, or 1 in 50 people on earth, equivalent to more than 150 million today. Yet in the end, little tangible change resulted from this catastrophe. Improved coordination of international health monitoring was well in line with the overall consolidations of the League of Nations period and fairly unremarkable.

    This is not to say that crises never reroute trajectories of development. But such outcomes are made all the more noteworthy by their extraordinary rarity. A few years ago I was able to identify a clear example: the attenuation of income and wealth inequality through major disasters. I found a pattern that has held true across recorded history: massively violent ruptures were the only events that have ever greatly shrunk the gap between rich and poor. Those events, I found, came in four flavors: the collapse of states, catastrophic pandemics, mass mobilization warfare, and transformative revolution. The latter two were especially characteristic of the twentieth century.

    State collapse was the most ancient leveling force, dating back to Old Kingdom Egypt. It was also the most dependable. Early states were designed as powerful engines of inequality: whenever they unraveled, they took elites and their accumulated wealth and power down with them. While most people ended up worse off, the rich had the most to lose. Pandemics, by contrast, equalized by different means, administering a harsh Malthusian solution to demographic pressure. When they carried off a sufficiently large share of the population, labor became scarce and wages rose, while demand for land fell, reducing its value. The masses who sold their labor found themselves less poor while the rich who controlled capital lost some of their income and wealth.

    There occurred, in these cases, a kind of egalitarian intermission in history. Such shifts are faintly discernible during the first pandemic of bubonic plague at the end of Roman antiquity, are amply documented in Western Europe in the wake of the Black Death, and have also been observed in seventeenth-century Mexico, where real wages increased once indigenous population numbers had dropped to record lows. But these violent levelings — these adjustments by catastrophe — never lasted. As states were rebuilt, greedy elites returned. As plagues faded, population recovered, wages fell, and fortunes grew. Even so, the egalitarian intermissions could go on for generations, providing rare relief from plutocratic dominance. At the very least, they proved to the world that life did not always have to be the way it usually was.

    The unique ruptures of modernity drove home that message with even greater force. In the World Wars — especially the second one — returns on capital plummeted, and governments launched aggressive interventions in the private sector and raised taxes on large incomes and estates sky-high. Conscription and the war effort boosted the bargaining power of workers, and unions thrived. After the war, social solidarity and the newly grown fiscal and organizational capabilities of government underwrote welfare states. During the 1950s and 1960s, and occasionally even the 1970s, economies grew, middle classes expanded, and inequality was kept at bay.

    Those societies were the lucky ones. Others experienced violent upheaval that led to far more dramatic change, as in Russia after World War I and in China after World War II. Communism actively pursued economic equality by the bloodiest of means. But that grand and grotesque experiment merely created new social hierarchies. The compression of wealth and income distributions persisted only as long as violent regimes survived or remained committed to that goal. The moment restraints were relaxed, material inequality soared to previously unknown heights, from Russia to China and beyond. In the West, where equalization had been less radical, its reversal was also more muted, but it has proven equally persistent. Since the 1980s, large-scale economic policies and processes such as globalization, deregulation, financialization, and automation have rewarded some more than others, to put it mildly. In the United States, this process has gone further than among its Western peers, creating economic disparities not seen since the 1920s. Seemingly impervious to political preference, this process has continued under Democrats and Republicans alike.

    So will the pandemic prompt a change of course? Recent history gives us little reason to think so. Although the Great Recession of 2008 battered the One Percenters, they soon recovered, while many others continued to struggle. This time does not look notably different. Inequality has gone up, in the United States and elsewhere. Job losses have disproportionately hit the young, the poor, the less skilled, and traditionally disadvantaged groups. And economic inequalities have been replicated in other domains, from worse health outcomes for the least protected to inferior learning opportunities for poorer students.

    Meanwhile, at least so far, the super-rich have recovered with astonishing speed. Bloomberg’s index of American fortunes among the world’s top 500 reveals the most V-shaped of recoveries: in 2020, a steep plunge between mid-February and the third week of March followed by an almost complete turnaround by early June. Jeff Bezos, the leader of the pack, is richer than ever before. To the relief of non-billionaires, the S&P 500 has closely tracked this plutocratic V. Yet while that has been good news for portfolios of all sizes, key indicators of economic health and general welfare, such as GDP or employment, are lagging far behind.

    Taken together, these developments fit the continuationist template: existing inequalities have been brutally exposed, or have grown, or have made themselves more painfully felt, or all of the above. But there has been no change of direction. What does this mean for the future of inequality — or climate change, or “late capitalism,” or the “neoliberal world order?” What, in other words, are the odds of seizing progressive change from the jaws of … more of the same?

    A genuine re-direction, a bold new course for society, may be accomplished by peaceful or violent means. Aspirational disruptionists hope for the former. Their message is simple: this is the time. At long last, the coronavirus crisis will make it impossible for us to avert our gaze from society’s ailments. It will shake us out of our customary stupor and jolt us into action, ready to combat inequality and systemic racism while shoring up health care and worker protection and infrastructure and the environment. This perspective, popular in large parts of punditdom, involves a bold leap from trigger (virus) to outcome (transformative change). The proximate mechanisms that are supposed to generate such sweeping change tend to be rather less well defined. During the Democratic primary, Bernie Sanders’ vague allusions to a “movement” that would somehow ensure implementation of far-reaching programs were emblematic of the magical thinking employed to bridge that chasm. The path to a grandly upgraded social contract or a Green New Deal seems similarly obscure. Yet the more ambitious and game-changing the goals, the more clearly formulated the way to reach them needs to be. At least for now, the needed clarity is absent.

    The current buzz of progressive energy in politics may prove deceptive. The election of Joe Biden was a relief of historic proportions, but with mainstream politics stuck in crisis-management mode, drastic re-directional change would seem less plausible than ever. The results of our recent election, which highlighted persistent polarization and all but guaranteed a prolonged stalemate, confirm this impression. (Old news alert: Democrats and their allies enjoyed a 26-seat margin in the Senate when the New Deal got underway, and a staggering 63-seat margin when FDR’s Supreme Court packing plan failed. Obama dropped the “public option” from the Affordable Care Act when his side in the Senate was 18 seats ahead. Enough said.) American institutions have focused on keeping everything afloat, as have the leaders of European countries and others elsewhere. If such efforts bear fruit, the prospect of radical transformation will once again recede.

    That would not come as a surprise. Historically, transformative change has been born of extraordinary violence. Yet Covid19, for all its terrors and mortality rates, is not particularly violent at all. Worldwide, the 1.25 million or so lives lost as of this writing equal about a week of normal mortality (which averages 165,000 per day) — or rather a few days more, allowing for undocumented deaths; and they were for the most part gleaned from among those already well advanced on their journey to the sweet hereafter. This is a far cry from the fall of ancient Rome, which set back civilization by centuries, or the Black Death, which took one in three Europeans, or the world wars and communist takeovers, which ruined entire countries and killed many tens of millions of all ages.
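
    A back-of-the-envelope check of that comparison, using only the figures cited in the paragraph above:

\[
\frac{1{,}250{,}000 \ \text{deaths}}{165{,}000 \ \text{deaths per day}} \approx 7.6 \ \text{days}
\]

    That is, roughly a week of normal global mortality, and “a few days more” once undocumented deaths are allowed for.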

    Looked at from that angle, the notion that we might achieve re-directional change without comparably massive dislocations seems more wishful thinking than realistic strategy. At the very least, it is squarely at odds with what history teaches us. There is no historical precedent and no obvious contemporary mechanism for making that happen. While we cannot rule out anything — what if the 2020s are different after all? — this should certainly give us pause.

    But what of the alternative? What is the potential for ruptures dramatic or violent enough to upend the established order and open up space for transformative change — for upheaval so severe that it cannot fail to force us off the beaten path?

    My answer will be sobering for those who crave radical change. Conservative forces are more powerful than they have ever been. The four violent leveling mechanisms that operated in the past are now kept at bay by four robust stabilizers of the established order. Mass affluence is the most basic one. It is hard to find societies with a per capita GDP of more than $5,000 or $6,000 (measured in 2011 standardized dollars to ensure comparability) that have experienced societal breakdown or revolution. By some measures not even Yugoslavia in the 1990s cleared that fairly modest threshold of prosperity.

    While this lack of precedent does not rule out truly violent dislocations in Western countries or their peers, it strongly suggests that the likelihood is low. Moreover, economic achievement tends to lower fertility and age populations: the resultant paucity of desperate young men — the most plausible agents of revolutionary struggle — imposes a pacifying constraint. Those concerned about secession, armed conflict, and collapse will have to cite the likes of Syria, Yemen, and the two Sudans rather than the United States.

    Such outcomes are qualitatively different from the anti-government protests and riots that have become more common in the United States and some European countries over the last decade. Rooted in the Great Recession and its social consequences, these protests have mobilized growing numbers against austerity, globalization, and more recently climate change. The wave of activism and unrest that started in May 2020, while triggered by particularly jarring instances of racial injustice, would likewise seem hard to disentangle from the Covid19 lockdowns and the sudden economic downturn that disproportionately hurt the young. Once again, the pandemic amplified existing discontent.

    It is true that this need not be the whole story. In pioneering work that seeks to identify regularized patterns across history, Peter Turchin has argued that these events are not merely responses to acute crises, but are meaningfully correlated with gradual long-term shifts in destabilizing variables, from political polarization to immigration and inequality. He envisions a cycle moving from relative stability after American independence to a peak of destabilizing factors from the 1860s to the 1910s, and then on to another minimum in the 1950s followed by a renewed and ongoing rise. At least so far, however, the overall intensity of unrest has been much lower than it was in the past, when society was poorer and less well buffered against privation. The United States seems a long way from the riots of the late 1960s, let alone the bloody labor conflicts of the 1910s and 1920s — not to mention the Civil War. This will come as bad news to radicals.

    This is not an accident. Other stabilizers have been contributing to this striking attenuation. The social safety net has helped tame the fallout from crises. Europeans woke up to the virtues of welfare when the mobilizations of the Great War and the Bolshevik revolution shook the foundations of the old order. Although America briefly lagged behind, the Great Depression quickly forced it to follow suit. Support schemes have been expanded ever since, from Johnson’s Great Society to the second Bush’s prescription drug benefit and Obama’s Affordable Care Act. Threadbare though this system may seem to admirers of the most generous European welfare states, it largely manages to stave off mass immiseration and serious social unrest, especially as ad hoc patches — such as the $600 weekly supplement to unemployment benefits this spring — can be applied as needed.

    The third and fourth great stabilizers — the other impediments to cataclysm — are more recent in origin. Quantitative easing — whereby central banks expand the money supply by buying government securities — has come to play the role of a miracle drug that promises to shore up businesses and markets without the need for austerity or punitive taxation. Thus far, this torrent of keystroke money has been good news for investors and bad news for progressives. The aforementioned V-shaped recovery enjoyed by the former would not have been possible without this intervention. And the final stabilizing force is science, and its particular potency in the face of a pandemic. During the Spanish Flu, there was no flu vaccine and DNA was unknown. A century later, the SARS-CoV-2 genome was sequenced mere weeks after the first report in Wuhan, and its mutations are now carefully tracked around the globe. Within months, more than a thousand drug and vaccine trials were underway, fast-tracking had cut development time to a fraction of the usual slog, and pharmaceutical production capacity was ramped up on spec.

    Of course we still do not know how all this will work out, especially as the efficacy — and the popular acceptance — of new vaccines and treatments remain uncertain. Yet a measure of cautious optimism seems warranted. The sooner science delivers the goods, the better are our prospects of a return to some version of normal, if indeed that is, or should be, our goal.

    Yet modernizing development is not a one-way street toward greater resilience. At the same time as it buttresses the established order with growth, welfare, finance, and science, it also undermines the status quo, rendering it more socially and economically fragile as a direct result of progress. Governance is the main exception. In rich countries, the state and its agents are firmly entrenched. If push came to shove, we would soon realize just how far their instruments of surveillance and repression surpass anything available in the past. The only restraint resides in the political will to employ these means.

    Welfare matters even more. States that capture and redistribute between a third and half of GDP cannot be dislodged, at least not without bringing down everything else. There is no plausible alternative. Bloated in their bureaucratic complexity and persistent in their insinuation into every conceivable aspect of our lives, they are hard to capture, harder to restructure, and impossible to overcome. 

    Fragility lurks elsewhere now, above all in the economic domain. Advanced economies have become vulnerable in new ways. Three principal reasons stand out, all of them direct consequences of development and progress. First, there is globalization in the broadest — and most de-politicized — sense of the term: the interconnectivities and interdependencies that govern production and exchange. This is, empirically, how economies now work. A chain with many links has many vulnerabilities. Yet despite initial worries about vulnerable supply chains, this intricate web seems to have passed the latest test.

    The second is the growing importance of the service sector, which expands at the expense of farming and manufacturing as societies grow richer. In normal times, retail, hospitality, and entertainment account for a tenth of America’s GDP. The greater the role played by these services, the more lockdowns and social distancing drag down the overall economy. When the Spanish Flu struck, there was far less to shut down than there is now, from airlines to resorts.

    But this time we also caught a lucky break. Fully a third of official economic output is generated by finance, insurance, the real estate business, and all manner of financial and business services. Well suited to remote work, these crucial white-collar sectors were spared major devastation. Had Covid19 appeared twenty years ago, they would have been much harder hit, with dire consequences for economic life more generally. (All we got was the pathetic Y2K scare.) This in turn underscores the stabilizing potential of science and technology well beyond virology. Information technology has truly been a savior.

    Overall, these vulnerabilities have been rather well contained by some of the same technological and economic innovation that has brought them into being. This leaves a third and altogether different source of fragility: our valuation of life and our attitude to risk. Given short shrift in current discourse, it deserves far more attention. All other things being equal, a society more inured to morbidity and death would be considerably more resilient in the face of a pandemic, and so would be its economy.

    To be sure, humans have always feared disease and the end of life. Yet no matter how fundamental and invariable such attitudes might seem to be, they are sensitive to overall development. For most of history, life was short. Two centuries ago in the West, and a century ago or even less elsewhere, average life expectancy at birth was a third of what we now take for granted. Perhaps one in three babies did not survive their first few years. The ranks of adults were whittled down in a steady drain of attrition. And even as some lasted to a ripe old age, they were vanishingly few in number. No one thought that odd or even remarkable. Much of the underlying suffering was frightfully mundane, driven by childhood diarrhea and dysentery and typhoid and tetanus. Epidemics merely added further uncertainty to the mix.

    When great plagues struck ancient civilizations, there was nothing to be done. The basics of infection long remained a mystery. In Europe, the Black Death of the late Middle Ages inspired early experiments with quarantines. An improvement over helpless laissez-faire, those early lockdowns nevertheless failed to solve the problem: waves of plague pounded Europe for more than three centuries. In the 1720s, when Marseille was sealed off from the outside world for two years to prevent a plague outbreak from spreading inland, half of its residents perished. And the disease slipped through anyway: not even the seventeen miles of actual stone walls that had hastily been thrown up cross-country managed to stop it. When yellow fever swept Philadelphia in 1793, the federal government shut down and almost one in ten residents lost their lives. Residents blamed a variety of causes from rotting coffee and lightning rods to that old mainstay, divine punishment.

    But then the world finally changed. In the centuries that followed, successive breakthroughs in epidemiology gradually rendered human life more predictable and less vulnerable. Modernity’s crackdown on smallpox and typhus, on cholera and typhoid, on tuberculosis and yellow fever, on polio and measles set us free from much misery and early death. This was without precedent. Science did not restore a better world we had lost. It cleaned up and increasingly secured a world that had always been a dirty, dire, deadly mess.

    It has been all too easy to get used to the blessings of that epochal clean-up, and to take them for granted. Now is the first time they are slipping from our grasp, and the first time we count on science to retrieve them. Gone are the low expectations of even a hundred years ago. When the Spanish Flu appeared back in 1918, global life expectancy at birth was only half of what it is today, and the Grim Reaper was still a constant companion in ways we now find hard to fathom. Vaccines for tuberculosis, typhus, tetanus, measles, and polio had yet to be developed. Viruses raged largely unchecked. In that world, a new strain of influenza was simply one more refugee from Pandora’s well-stocked box. And when that pandemic departed as abruptly as it had arrived, scientists could not take credit.

    2020 was very different. Cradled by the comforts of peace, penicillin, Prozac, and prosperity, we have grown far less tolerant of hazard than our toughened ancestors. Economies wither under anxious distancing even as case fatality rates fall far short of those wrought by historical pandemics and greatly favor those with most of their lives still ahead of them. This makes our pandemic above all an economic crisis, with all the social, psychological, and political repercussions that entails. Economic activity hinges very much on perception — not just on bare needs (long met by modernity) but on the confidence that has us demand and consume all those superfluities that prop up employment and GDP. But now, unlike in the past, that confidence has been shaken.

    Every year about 2.8 million Americans die from all causes. As of this writing, Covid19 has raised this tally by a little over 8 percent, or a bit more if all likely deaths are included. And these bare numbers inflate the pandemic’s overall impact. In the United States, the mean age of death of or with Covid19 has been around 75 years, an age at which remaining life expectancy averages 12 years (or rather less for those with pre-existing conditions, which are overrepresented among those who do die). This is very much worth putting in perspective. Recall the morbid excitement of the media when, last spring, Covid19 fatalities passed the toll of the Vietnam War of a little over 58,000 — a false equivalence if there ever was one. Average age at death among those soldiers was 23 years, at which point the average man could expect to live another 48. A total of 2.8 million years of life were lost. The official Covid19 toll did not reach that mark until around Election Day. And even that calculation ignores the fact that these young soldiers were cut down with their whole lives ahead of them and before most of them had a chance to start a family. If we found a way to factor that in as well, the weight of loss would appear even more staggering.
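
    The life-years arithmetic can be set out explicitly (a rough sketch built only from the figures cited above, ignoring distributional detail):

\[
58{,}000 \ \text{deaths} \times 48 \ \text{years} \approx 2.8 \ \text{million life-years (Vietnam)}
\]
\[
2.8 \times 10^{6} \ \text{life-years} \div 12 \ \text{years per death} \approx 233{,}000 \ \text{deaths (Covid19)}
\]

    The second figure is roughly where the official American death count stood around Election Day 2020, which is the sense in which the two losses became equivalent only then.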

    And yet our response is different, our fear more palpable. How would we feel today if 2.7 million young Americans were drafted to go fight overseas, as they were then? (Or rather 4.5 million, adjusting for today’s larger population.) Would that even be possible? Yet just fifty years ago they mostly went, even if some claimed bone spurs or went to Canada. Over 26 million Americans served in the two World Wars and in Korea, without major resistance. In the Civil War, 1 in 50 Americans was killed. The shift away from treating lives as expendable and fellow citizens as cannon fodder is a fairly recent one.

    Good riddance, we might say: few if any of us will pine for that bygone age. But our growing commitment to safety, and our ability to honor it, have come at a price. Some economists have cooked up an unappetizing concept known as the Statistical Value of Life. Working from dubious premises, they inform us that an American life is currently worth close to ten million dollars. Government agencies eagerly seize on this number to impose costly safety regulations on private industry but conveniently ignore it when compensating the families of service personnel killed in action, who are usually fobbed off with a million or two in lifetime support. Callous as it might seem, that latter approach at least has the virtue of being more realistic than the high-end price tag that signally fails to align with other metrics: equivalent to twice average lifetime per capita GDP in the United States, it values all Americans, who account for a paltry 4 percent of humanity, at more than 3 quadrillion dollars, or close to ten times total global wealth.
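
    The closing comparison is simple multiplication (a rough sketch; the population figure of about 330 million and the global wealth estimate of a few hundred trillion dollars are outside assumptions, not the essay’s own):

\[
3.3 \times 10^{8} \ \text{Americans} \times \$1.0 \times 10^{7} \ \text{per life} \approx \$3.3 \times 10^{15}
\]

    That is more than three quadrillion dollars, which, set against total global wealth on the order of 350 to 400 trillion dollars, works out to “close to ten times.”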

    Such absurdities are but a pale reflection of a broader revaluation of values. We no longer fight fear, we promote it. We seem transfixed by our fragility. Summary school closings, which wreak havoc on countless working parents, are hard to reconcile with the minuscule health risks faced by the very young. Let us grant that they are designed to protect those parents and not just unionized teachers. But plenty of parents have also been wary of letting their children return to college, despite the fact that only 0.1 percent of Covid19 deaths have occurred between the ages of 18 and 22. One cannot help wondering how the families of the 26 million Americans who served in the two World Wars and in Korea would have felt about that.

    During the Spanish Flu in the fall of 1918, male college students trained for the war in France. And even as this pandemic took more than half a million American lives, its economic consequences were ultimately minor. Shutdowns were haphazardly imposed and retail was not greatly affected. For the most part, people just plowed on — partly because working from home was not an option and welfare was almost non-existent, and partly, perhaps, because it did not actually occur to them not to. The notion that the preservation of life justifies almost any cost — widely if unevenly embraced by a citizenry ready to pull back regardless of government fiat — had not attained its present dominance.

    This collective anxiety, and the open-ended distancing and recession it spawns, places an especially heavy burden on the fourth great stabilizer. Well beyond warding off mass mortality and morbidity, science must now restore the sort of confidence that governments cannot decree. Our trust in the blessings of modern medicine has eclipsed traditional religious beliefs as well as faith in government — at least in much of the population. Last year, a Pew Research Center survey found that more than twice as many Americans had confidence in scientists as in elected politicians. And why wouldn’t they? Unlike the uneasy compromise of life with rampant Covid19, the renewed freedom of life after — or rather alongside managed — Covid19 is entirely the gift of the high priests of science. And the more we value life, health, and safety, the more we rely on that gift to get everything back on track.

    This is all the more true as reactions to containment measures vary, driven by factors as diverse as vulnerability, political preference, and class. The pandemic has even provoked a new know-nothingism, an unembarrassed hostility to science, that reached all the way into the White House. This variation in responses to the crisis fosters acute tensions — between young and old, between red and blue, between rich and poor. The longer the viral threat lingers, the more corrosive these tensions are bound to become. Government must help us stay afloat while the pandemic rages. It also has a crucial role to play in subsidizing and distributing medical remedies. At some point it might even have to mandate their use. In the end, however, only science can deliver us.

    The coronavirus shock has been both amplified and checked by modernity: by global connectivity and fragilities on the one hand, and by financial relief, Zoom, and medicine on the other. It bears within it both boost and restraint. This is something it has in common with a greater crisis to come. Neither the Paris Accord nor Greta Thunberg will save us from climate change. Geneticists, physicists, and engineers are our only credible line of defense.

    Much has already been accomplished, from drought-resistant genetically modified crops to the ever more effective harnessing and storing of renewable energy. Nuclear power, an even more powerful redeemer, has been at our disposal for almost eighty years, even as misguided politicians around the world are doing their craven best to snuff it out. But more will be needed to sustain mass affluence in the First World and to spread it elsewhere. If nuclear fusion remains a pipe dream, geoengineering may well have to come to the rescue. Yet whatever the precise configuration of techniques that will keep us on track, all of them will share the same source: science.

    We can argue to our heart’s content whether the twenty-first century will be America’s or China’s. But that debate is moot. We already know that this will be the century of science. For the first time in history, it will not be enough for scientists to make the world smarter and richer. Now they will be called upon to make sure that it does not slide back into the dark days when germs held humanity hostage, and to enable us to square the circle by reconciling environmental protection with ongoing growth.

    Modernity has long struggled to contain the forces it unleashed. Smog had to be tamed. Nuclear war — the ultimate genie-out-of-the-bottle hazard of modernity — had to be averted. HIV, Ebola, and the SARS outbreak of 2003 had to be contained. The current challenge has not introduced a new dynamic: it is simply the latest in a long line of challenges brought forth by progress. As a challenge, the coronavirus crisis ought to be manageable enough to be overcome even as it accelerates ongoing shifts and trends, not all of them equally worrisome. Yet this need not be true of the next “novel” virus, be it natural or man-made — let alone of the much more complex and daunting process of environmental degradation.

    It may seem strange, even a little morbid, to marshal an ongoing crisis to ponder future ones. But that is exactly what we must do, once we appreciate that Covid19 is unlikely to serve as that rarest of agents, a genuine historical disrupter. Yet the post-Covid19 continuities, for better and for worse, should not make us complacent about the volatility of late modernity. The most positive spin we might put on our epidemiological calamity is that it has provided us with an invaluable trial run for the greater crises that await us. Just as coronavirus outbreaks in the early 2000s and the financial crisis of 2008 taught us lessons that have come in handy (or would have come in handy had they been heeded), the more we are willing to learn from the present pandemic the better equipped we will be to deal with worse travails down the road.

    In the end, this might well turn out to be the most important legacy of Covid19: an enlargement of our imagination of disaster, a sober preparedness for the perils that surely await us. Do not expect the virus to remake our world. It will not force us to solve our most pressing problems; only we can force ourselves to solve them. There was a touch of wisdom in Michel Houellebecq’s sardonic prediction that the post-Covid19 world “will be the same, just a bit worse.”

    Strangering

    According to Wallace Stevens, “Every poem is a poem within a poem: the poem of the idea within the poem of the words.” We often put “the idea” in a brief phrase: “the evils of war.” We rarely talk about the poetry of the idea. By itself, the theme, the idea, is always banal: it has to be recreated by the poet’s imagination into something animated. (And since the poet at hand is male, I’ll call him “he” in what follows.) And then, how is he to arouse a glow of personal vividness within the language, and create “the poem of the words”? 

    Suppose the poet wants the “idea” of his poem to be “the disparity of cultures.” What might “the poem of the idea” be? This particular poet’s imaginative move is to locate his two cultures in cosmic space, on two different planets, one of which is Planet Earth. And how will the words be made into “the poem of the words”? An answer occurs to him. What if a visitor from outer space had studied English, but could not escape mistakes in using it for the first time? At this initial stage, there remains a great deal to be done, since both the poem of the idea and the poem of the words are still sketchy and unfulfilled. But at least the poet now has the two poetries to work with. And the poet, Robert Hayden, an Afro-American (his preferred term), is convinced that a poem written by a minority poet has to be as strong in the poetry of its words as in the poetry of its idea. In “Counterpoise,” a group-manifesto that Hayden published in 1948, he declared emphatically, “As writers who belong to a so-called minority we are violently opposed to having our work viewed, as the custom is, entirely in the light of sociology and politics.”

    Let us suppose that it is 1978, and in a new book of poems a reader comes upon an odd entry, bizarrely bracketed fore and aft to show that the title is an editorial addition: “[American Journal].” Who, the reader asks, kept the journal; for whom was it intended; who attached the subsequent title implying, by its non-authorial initial capitals, an editor familiar with written-English usage? The answer is suspended. As the poem opens, the reader sees a series of totally unpunctuated sentiments flowing down the page in hesitant and unequal paragraph-stanzas halted intermittently by pauses. The journal-speaker is fluent, but not error-free, in English. The reader is in fact encountering the internal stream of consciousness of an extraterrestrial, dispatched by his rulers (“The Counselors”) to spy on, and report on, a group of brash new planetary invaders calling themselves “americans.”

     We see that the spaceman has learned only oral English, and knows none of the conventions of written English such as punctuation, apostrophes, and upper-case letters; but there must exist in his own native language some sort of honorific distinction reserved for the rulers, “The Counselors” (the honorific is translated in his journal by its sole and singular use of capital letters). In the poem’s prologue, the extraterrestrial muses on his new situation:

    here among them     the americans     this baffling
    multi people         extremes and variegations     their
    noise        restlessness       their almost frightening
    energy                  how best describe these aliens in my
    reports to     The Counselors

    disguise myself in order to study them unobserved
    adapting their varied pigmentations          white black
    red brown yellow       the imprecise and strangering
    distinctions by which they live     by which they
    justify their cruelties to one  another

    charming savages     enlightened primitives     brash
    new comers lately sprung up in our galaxy     how
    describe them       do they indeed know what or who
    they are         do not seem to             yet no other beings
    in the universe make more extravagant claims
    for their importance and identity

    The spy, disguised and passing as a fellow-citizen, studies the unfamiliar new tribe, noting its heterogeneity, its “strangering” distinctions, and its repellent moral justifications. Little by little the inner voice of the spy reveals his burden: he must compose a report for The Counselors, and he feels inadequate to the task. Although the planet of the “aliens” belongs to the same galaxy as his own, he knows no group in the entire universe who regard themselves so insolently, so proudly, as these “savages” do. So far, the extraterrestrial voice has offered relatively little information about its own powers and intentions; only later — during a visit to a rough tavern — does it reveal that it has masked itself (at least in the tavern) as male. I am calling the voice “he,” but it has the power to exist in different genders and can adopt local skin-pigmentation at will. 

    Since the planetary visitor is addressing himself, we can only guess, from his own categories and judgments, what sort of person is generating these words. We learn that he is frightened by lawless energy, by noise, by unpredictable restlessness, by multiple skin-colors: in disguise he has “adapted” — but he means “adopted” — various pigmentations depending on his social context. “Adapted” is one of his linguistic falterings, like “strangering” (in lieu of “strange”). He has strong moral views, and is revolted by the cruelties he sees among these “savages” (however charming); he has equally strong intellectual views, judging the newcomers as “primitives” (however sophisticated their technology). To him, the “americans” are aliens incapable of introspection or self-analysis yet ever-boastful in their claims to importance and to a unique identity.

    Hayden’s 141-line “[American Journal]” has attracted a good deal of contemporary attention, but its imaginative swirls of inconsistent “american” ideologies and behaviors have provoked more critical observation than its equally imaginative flights of language. I want to reflect here on Hayden’s imaginative interest in creating a spaceman’s mind and forms of expression. “The Counselors” on the spaceman’s planet apparently maintain a training laboratory for spies, providing language-tapes of any culture they intend to investigate. Two sets of these tapes are labeled “English,” one transmitting British English and the other American English, and the spy has been afforded both sets for his diligent preparatory study. One of the most entertaining aspects of this nonetheless serious poem is its presentation of the verbal and interpretive blunders that any visitor to a foreign land is bound to commit when he finds himself embedded in a bewildering unknown culture. Hayden must have taken intense pleasure in thinking up, all through, the multi-faceted “poem of the words” used by the alien.

    “[American Journal]” presents itself as a quasi-symphonic poem, advancing with the fluidity of musical movements in the spaceman’s successive choices of aspect: scenes, emotions, distinctions of pacing, degree of self-distancing. After the opening prologue, the voice (perhaps to reassure The Counselors) begins to liken this brash new species to his own tribe; “like us,” he says, they have advanced technology and have traveled to the moon (grossly leaving their “rubbish” behind); and apparently they too worship “the Unknowable Essence” (but how do they define their Unknowable?). In lieu of shamans they have “technologists” (a native speaker might have said “scientists”). The observer tallies geographical and meteorological earth-features that he recognizes from his own home-planet, including the temporal feature of the sun by day, the moon by night:

    oceans deserts mountains grain fields canyons
    forests           variousness of landscapes weathers
    sun light moon light as at home

    Nostalgia for “home” has made him begin his observations with familiar perceptions, but he is as yet a novice in English pronunciation: he separates the word “sun” from “light” and “moon” from “light,” as though his overarching category “Light” has separate subordinate categories, that of the sun and that of the moon. With him, the light has not, as in native English voicing, been absorbed almost silently into the polysyllabic “moonlight” and “sunlight.” 

    The observer, we are pleased to see, has an aesthetic sense resembling our own, responding instantly to “red monoliths” like those of his remembered “home”:

                                     much here is
    beautiful     dream like vistas reminding me of
    home       item         have seen the rock place known
    as garden of the gods and sacred to the first
    indigenes     red monoliths of home

    For the first time, an actual American name has at this point made its way into the spaceman’s report: the so-called Garden of the Gods in Colorado Springs is a stark terrain of red monoliths held sacred by Native Americans. The name points us to an incident in the life of Robert Hayden. In 1975, five years before his death, the day after his reading at Colorado College, Hayden visited the Garden of the Gods at the invitation of a young MFA student whom he had met the night before. A few years afterwards, that student — Yusef Komunyakaa, later a distinguished poet himself — recorded their walk:

    Hayden had to be assisted closely along the rocky paths up the beautiful hills.  He seemed nearly blind. . . .  Soon we were in the heart of the Garden of the Gods, beside a formation called Balanced Rock—a smaller stone supporting a larger one, massively depicting a visual mathematics too subtle for words.  Hayden stopped, looked around, and said, “I love this country.”

    In “[American Journal]” Hayden bestows his own warm response to the grandeur of the scene on his extraterrestrial, fusing himself and the surreal cosmic visitor. 

    A reader aware that Hayden is African-American may suspect that he is satirizing, in the response of the technically sophisticated alien contemplating the americans, the discourse of a “civilized” white gazing, with simultaneous denigration and envy, at a “primitive” Black culture. But by now enough ink has been spent on the poem to discourage any idea that its “message” is without subtlety; a number of identity-determinants — national, linguistic, gendered — populate the poem. Although the spy celebrates the landscape so like his own, he is not free to mention in his report the sensuous appeal of the americans themselves. After his search for the right adjective to describe them—”i am attracted to / the vigorous americans   disturbing, sensuous”—he becomes ashamed, adding “never to be admitted,” meaning, surely, not even to himself.

    The next movement of the poem is a scherzo, in which the alien-in-disguise has a conversation in a tavern with an american. When he asks what is meant by “the American dream” the “earth-man” answers in ignorant colloquial language (with its crude “irregardless,” its unthinking alliance of “sure” and “i guess”). The alien, never having read written English, is mistaken in substituting two words for the proper English single word, as in “night mare” and “every body,” and he is baffled by the redundant insertion of the all-purpose American linguistic filler, “okay.” The “earth-man” says, of the American Dream:

                 sure
    we still believe in it i guess. . . irregardless of the some
    times night mare facts we always try to double
    talk our way around                 and its okay the dreams
    okay and means whats good could be a damn sight
    better            means every body in the good old u s a
    should have the chance to get ahead or at least
    should have three squares a day          as for myself
    i do okay      not crying hunger with a loaf of
    bread tucked under my arm you understand

    The alien’s dutiful previous listening to the tapes of spoken English does not equip him to understand the torrent of incorrectness, slang (“double talk,” “three squares”), and abbreviations (“u s a”) uttered by the “earth-man.” In response to this barrage of American dialect, he puts forth his courteous British reply (deriving from his alternate set of language-tapes, the British one): “i / fear one does not clearly follow.” His tavern-mate becomes suspicious:

    notice you got a funny accent pal        like where
    you from he asked       far from here i mumbled
    he stared hard i left

    The tavern-dialogue teaches the alien that his linguistic mimicry is still imperfect:

    must be more careful    item     learn to use okay
    their pass word              okay

    After the comic interlude of the tavern scene, however, “[American Journal]” suddenly turns savage, as a street riot erupts, alive with new unintelligibility. The alien sees people he characterizes as “sentinels” — a literal translation from some word in his native tongue, since he hasn’t learned the correct English word for “police.” The “sentinels” are disturbingly re-characterized by the crowd — “pigs / i heard them called” — as the police retaliate “with flailing clubs”:

                  unbearable decibels      i fled lest
    vibrations of the brutal scene do further harm
    to my metabolism already over taxed

    A biological fact about the alien — that under the rule of The Counselors the capacity to tolerate violence has been genetically bred out of his metabolism — leads him to side with the police, as with the primary authoritarian decisions that have created and socialized him. His voice becomes that of a repressed creature unconscious of his own victimization, incapable until now of any mental act not channeling the opinions of The Counselors. Yet his equilibrium has been so shaken by the violence of the riot that the very word “serenity” shatters into linguistic fragments over a line-ending:

    The Counselors would never permit such barbarous
    confusion          they know what is best for our sereni
    ty          we are an ancient race and have outgrown
    illusions cherished here          item       their vaunted
    liberty

    His (temporary) identification with The Counselors allows the alien to parody the earth-men’s truculence:

        “no body pushes me around i have heard
    them say             land of the free they sing          what do
    they fear mistrust betray more than the freedom
    they boast of in their ignorant pride     have seen
    the squalid ghettoes in their violent cities

    (Nowhere does the alien sound more like a white supremacist than here: he has learned, and uses, the abusive word “ghetto.”) And he wonders, returning to the word “paradox” from an earlier summary:

    paradox on paradox       how have the americans
    managed to survive

    After the deafening street riot there arrives a louder scherzo than the earlier tavern-interlude: now it is the “patriotic” spectacle of the Fourth of July. As “earth-men / in antique uniforms play at the carnage whereby / the americans achieved identity,” the alien reveals that on his own planet they indeed do study American history in its origins:

                                                        we too recall
    that struggle as enterprise of suffering and
    faith uniquely theirs

    But what has happened in the vulgar modern era to the noble independence celebrated on the Fourth? With mockery the alien sees its debasement into a craven nationalism:

                                                         blonde miss teen age
    america waving from a red white and blue flower
    float as the goddess of liberty                            a divided
    people seeking reassurance from a past few under
    stand and many scorn

    “A past [that] few understand and many scorn”: in these high-minded words the alien exhibits his own superior wisdom as he judges American ignorance and political decline. And hearing contemporary skeptics dismiss the Fourth of July parade (“why should we sanction / old hypocrisies”), the alien returns to his “native” moralizing and irritated scorn. Yet his anxiety exhibits itself afresh as the revered word “Counselors” breaks into pieces at a line-end:

                                          The Counse
    lors would silence them

    a decadent people The Counselors believe         i
    do not find them decadent     a refutation not
    permitted me

    The Counselors, we begin to understand, do not countenance objections to their views. The alien’s irrepressible mixed feelings about the americans throw him into a violent mixed diction as he ends up siding with the Counselors’ stereotypes of raw crude “earthlings”:

                   but for all their knowledge
    power and inventiveness not yet more than raw
    crude neophytes like earthlings everywhere

    With the subsiding of his unresolved responses to the Fourth of July, the alien wonders how his report on america will strike The Counselors. Since he is, himself, delighted by the ingenuity of his multiple disguises, he reminds himself sotto voce to induce approval in The Counselors by describing his stratagems. But even while reassuring himself that The Counselors will admire his powers, he still worries about their eventual estimation of his work. Hoping to curry favor, he describes his spy-costumes in a cascade of nouns and idioms learned, we feel, on the street rather than from the bland tapes of his language-lab:

    though i have easily passed for an american   in
    bankers grey afro and dashiki long hair and jeans
    hard hat yarmulke mini skirt               describe in some
    detail for the amusement of The Counselors   and
    though my skill in mimicry is impeccable        as
    indeed The Counselors are aware        some thing
    eludes me      some constant amid the variables
    defies analysis and imitation               will i be judged
    incompetent

    In his next, most analytical moment, the extraterrestrial rises to the philosophical diction natural to his culture — a discourse technologically supreme, wholly rational, but emotionally repressed. The minor role of america in the cosmic scheme of things (“an iota in our galaxy”) is evident to him, but he is disturbed by its problematic existence as a conceptually insoluble entity, resistant — in its mobile lability of science and fantasy, logic and imagination — to the analytic reason that is the pride of his civilization. He sighs in frustration:

    america           as much a problem in metaphysics as
    it is a nation     earthly entity     an iota in our
    galaxy           an organism that changes even as i
    examine it     fact and fantasy never twice the
    same     so many variables

    As the spy ponders the unintelligibility of america, its antagonism to all he has valued, he realizes that he is in physical danger from its natives: already his presence has been rumored in the newspapers. While the papers laugh at those “believing” in the existence of “humanoids,” the “humanoids” in their spaceship laugh back at the scoffing newspapers. Quiet in his withdrawal from the company of his “crew,” the alien reflects on all he has seen and heard: the gaudy Fourth of July parade, blonde miss teen-age america, the suspicious “earth-man” in the tavern, the street-riot between citizens and “sentinels,” the awful decibels of both celebration and violence, the confluence in the streets of dashikis and yarmulkes. Lost in his memories, the alien, tensely frustrated, cannot define what the americans are: he knows only that the american personality confounds his own schooled, careful, sexless, logical self. He cannot, now, return unthinkingly to his own sterile planet, submit to The Counselors’ rules, and censor his speech. Once home, he will ponder the “variegations” of his past journey — his adroit disguises of body, skin-color, gender, and manner of speech — but for all his wide-ranging observation, he will remain forever unable to solve the “quiddity” — the “thisness” — of this paradoxical population, this exuberant and savage rebel-tribe.

    Hayden’s science-fiction is doubly dystopian. His spacemen are like Swift’s whinnying Houyhnhnms, inhuman, chilly, fastidious, rational; and their representative courier flinches at the americans’ untidiness, their boasting, their costumed mimicry of the carnage of 1776, their cruelty, their childish “floats,” their veneration of the “Goddess of Liberty” in the person of a teen-ager in a toga, their incoherent “metaphysics,” their elusive essence.

    Behind the agitated monologue of the visiting “humanoid” lies the implied story of his former life: he was born, he was schooled, he was reprimanded for any excess of act or emotion, he was indistinguishable from others of his tribe. Passionless, he needed no human relations (family, wife, children); he worshipped “technologists,” and excelled in scientific observation, memory, and analysis. Posted to another planet to spy on the brash new tribe of “earthlings,” he is disposed at first to dismiss their childish “civilization,” but eventually, as he moves among them, he discovers in their “variegated” pigments and “various” behaviors much that he has lacked in his artificially rational former life. And what will his future be? He will be sadder, and wiser, forever alienated from his compliant fellow-citizens, unable to convey to them the extravagance of emotion and action, free from punitive supervision, that the americans, for all their faults, possess.

    Hayden made room in his poem for his extraterrestrial’s implied past and presumably alienated future to sharpen the contrast of the two cultures, the governed rational and the unbridled free. Both are insufficient, both are incomplete. The rational and disciplined one sees the unbridled one as ungovernable; the unbridled one would see the alien’s authoritarian Counselors as intolerable. Neither culture is really admirable. The chief difference between them is that one is subjugated, the other free (in both virtue and vulgarity). The free culture has no stable government; its people are unruly, as likely to sponsor a riot as a parade. The governed culture has the dark stability of its euphemized “counseling” — coercive, repressive, severe, implacable.

    Hayden invented from scratch the unusual sensibility and the “faulty” English of the alien, his innocence as to punctuation and spelling, his nervousness intermittently betrayed by his words’ falling into pieces (not syllables), his complacent moral judgments, his intellectual scorn of the “earthlings” who have gotten to the moon but no further, his horror at the sheer noise of the american streets in parades and riots — all the while showing his opinions being put into question by that elusive “something” for which he has no words. It is, of course, freedom, both in creation and in destruction.

    We can, if we choose, read this conflict of cultures as embodying on the one side technologically schooled and hierarchically socialized America and on the other side that supercilious America’s view of African-American life. There is something to that reading, but not everything. Hayden repudiated the narration of victimhood as the chief resource of a minority writer, just as he repudiated despair at the racial division of his America. His “God-consciousness” (as he named it) led him to an unshakeable conviction of human brotherhood and enabled him ultimately to join his wife Erma’s church, the Baha’i, which exists without a hierarchical structure and affirms belief in the unity of all humankind.

    And yet Hayden had, by his own acknowledgment, periods of profound depression as well as periods of strenuous belief that relations between the races could not only improve but become harmonious. He incurred the wrath of the Black Power movement in the 1970s because of his conviction that the literature of organized protest movements tended toward propaganda, not art. Nor could he bring himself to refuse Emersonian symbolism in favor of literal statement.

    When an interviewer asked him why he wrote poetry, he said — disarmingly and wittily — because he liked it better than prose. He thought “confessional” poetry too naked to attain universality. He never stopped revising his poems in the direction of greater concision, greater symbolic power, and greater objectivity. Famous for his powerful sequences of African-American history — “Middle Passage,” “John Brown” — he is justly remembered in most anthologies for the inexpressibly moving “Those Winter Sundays,” an elegy for his laborer foster-father. “Sundays, too, my father got up early,” it begins, with all the emphasis on the accented “too” — “got up early” as a kindness to the sleeping family in the cold house, “making banked fires blaze.” “Nobody ever thanked him”: that is the line of the poem that nobody can ever forget.

    Once Hayden learned to read — by himself, at three — he read intensely and passionately in the major British and American poets. One can see him, over a lifetime, experimenting with nearly all poetic genres: nature lyrics, elegies, sequences, allegories, ballads. When he looked to African American predecessors, he saw some of them writing in dialect, others creating new folk ballads, still others choosing the high language of the canonical English lyric. He would learn from them, but equally from Whitman, Crane, and Auden (who taught Hayden at Michigan). Just as Elizabeth Bishop would not allow her poems to appear in single-gender anthologies because she took herself to be an American poet, not a “female poet,” so Hayden always believed himself to be an American poet among other American poets. For him, the democracy of literature could not countenance partisan hostilities, nor could the brotherhood of human beings conceive of exclusions within the company of artists.

    Born in Detroit in 1913 and named Asa Sheffey by his birth-parents, the poet was given away, but not abandoned, by his mother when she moved to find work. He was raised (but never adopted) by a neighborhood family, the Haydens, and subsequently went by the name Robert Hayden. He came to feel that his foster-family meant well by him; his father did not obstruct his intellectual desires, and saved to help him through college, but it was a teacher, a librarian, and a social worker (assigned to the Haydens when they were on welfare) who saw something unusual in him and encouraged him. In his prose, he was candid about his difficulties in school; with his thick glasses, his poor sight, and his love of poetry, he was called “nigger, Four-Eyes, sissy.” In view of the violent racial divisions of American life, which he experienced from childhood with unavoidable pain, he thought that an artist had to cultivate a strict objectivity in social observation. He supported himself all his life by teaching. For twenty years he remained at Fisk (teaching fifteen hours a week, a taxing load for a conscientious teacher), and thereafter he closed his career at the University of Michigan. In 1976, the Bicentennial Year, he was appointed Consultant in Poetry to the Library of Congress (a congratulatory post now renamed, more accurately, Poet Laureate). The final triumph of Hayden’s personal and impersonal objectivity was “[American Journal],” composed in 1976 as the Phi Beta Kappa poem for the University of Michigan and placed as the final work in his Collected Poems. You can hear Hayden read it in his quiet and musical voice on a tape he made for the Library of Congress in 1978, two years before he died, early, at 66, of cancer.

    Lolita Now

    After almost three-quarters of a century, how are we now to think about Lolita? It may well be the most commented-on novel written in English in the past hundred years, alongside Joyce’s Ulysses. In the case of Ulysses, the imperative for commentary is chiefly a consequence of the invitation to exegesis generated by that novel’s dense network of allusions and the multiple complexities of its structure. In fact, Alfred Appel, Jr., in the introduction to his splendid Annotated Lolita, has observed certain affinities between Lolita and Ulysses in the centrality of parody for both novels, in their resourceful deployment of popular culture, and, of course, in their shared elaborate mobilization of literary allusions. Nabokov, we should recall, was a great admirer of Ulysses, and Lolita has its own formal intricacies, which have been duly explicated by much apt criticism ever since its initial American publication in 1958.

    Yet the more obvious reason why Lolita has elicited so much commentary through the years is the moral questions raised by its subject. The crudest notes of the discussion were first struck by readers who imagined that the author must be a pervert and that the novel he wrote was altogether a sordid thing. In more sophisticated guise, some conservative critics, such as Norman Podhoretz, have contended that Lolita may corrupt morals and must be approached with caution by right-thinking people. Inevitably, the novel has also been excoriated by the feminist Left. In her diffuse but influential article “Men Explain Lolita to Me,” Rebecca Solnit seems to classify Lolita (her meaning is a bit opaque) as one of the books that “are instructions as to why women are dirt or hardly exist at all except as accessories.”

    Serious considerations of the novel have properly dismissed all such views, and, indeed, many of the earliest critics recognized it as a literary achievement of the first order of originality (but not Nabokov’s erstwhile friend Edmund Wilson, who thought it regrettable). Indeed, powerful and persuasive arguments have been made for the moral character of the book, and these need not be repeated here.  

    What may be at issue for readers of Lolita in the twenty-first century is how to regard the book in an age when our culture has become so conscious of the sexual exploitation of children and of women in general, young or otherwise. This is, of course, a social problem that is alarmingly widespread and deserving of urgent reform, but it must be said that the public exposure of certain especially egregious cases has led much of the public to hair-trigger responses to any activity that is even obliquely related to such appalling exploitation. It is a sign of our confused and simplified and sanctimonious times that Dan Franklin, the editor-in-chief of the esteemed London publishing house Jonathan Cape, has declared that he would not publish Lolita if it were submitted to him now. His judgment stems from an acute nervousness about how thirty-year-olds on his company’s acquisition team would respond if he proposed publication, as he himself has said.

    Is the new awareness of sexual harassment likely to make it altogether uncomfortable to read the first-person narrative of a middle-aged male who repeatedly, extravagantly, and at times brutally commits carnal acts with a pubescent girl who is quite helpless to free herself from him? Novelists, of course, have not infrequently chosen to write books about deviant, criminal, or murderous characters — Humbert Humbert is all three — but the sexual exploitation of a child surely touches a raw nerve, especially now. (One highly intelligent reader, recently reading Lolita for the first time, told me that he could see it was a brilliant novel but found it difficult to stick with it because of the subject.)

    I would like to suggest that the way Humbert’s story is constructed anticipates this sort of discomfort, in a sense even aligning itself with the discomfort. Devoted as he was to the supreme importance of art, Nabokov had been concerned since his Russian novels with the phenomenon of the perverted artist, the person who uses a distorted version of the aesthetic shaping of reality to inflict suffering on others. Humbert Humbert is only his most extreme representation of such distortion. The perversion of the artistic impulse is a vital subject for Nabokov precisely because art matters so much to him.

    The first thing that should be noted about the treatment of this subject in Lolita is that Humbert Humbert clearly regards himself as a monster, repeatedly emphasizing his own monstrosity. This goes along with the fact that he is insane, as he frankly admits, and that he has been several times institutionalized in asylums. Humbert’s assertions of his own moral repulsiveness abound in the novel. “I am,” he says of himself early in his story, as a boarder in the Haze home, “like one of those pale inflated spiders you see in old gardens. Sitting in the middle of a luminous web and giving jerks to this or that strand.” With Lolita tantalizingly sitting in his lap on the Haze davenport, he invokes a familiar fairy tale that here will have no happy ending as he wriggles in order “to improve the secret system of tactile correspondence between beast and beauty — between my gagged, bursting beast and the beauty of her dimpled body in its innocent cotton frock.” Humbert’s framing of this allusion altogether reduces the man to his imperious sexual member. And as we shall see from other citations, he has a clear awareness that his absconding with Lolita is bound to have dire consequences for both. 

    When he finally consummates his lust for Lolita, he declares that it was she who seduced him, not an altogether improbable claim given her sexual precociousness, but she on her part says, fearing that he has torn her internally — though it is unclear whether she might be merely joking — that she ought to report him to the police for rape. At least in a moral sense as well as in the statutory one, this could be quite right. The year-long frenzy of sexual gratification with a sometimes reluctantly submissive, sometimes resistant, twelve-year-old has its particularly sordid moments beyond its intrinsic sordidness, as when Humbert insists on sex while Lolita is running a high fever or repeatedly bribes her with magazines and treats to make herself available to his insatiable desire. Humbert’s admission of all this repeated abuse culminates near the end of the novel in his often cited recognition, as he watches school children at play, that he has deprived Lolita of her childhood. But a summarizing assessment of what he has perpetrated in the throes of his obsession occurs earlier, as he and Lolita head back east in his car:

    We had been everywhere. We had really seen nothing. And I catch myself thinking today that our long journey had only defiled with a sinuous trail of slime the lovely, truthful, dreamy, enormous country that by then, in retrospect, was no more than a collection of dog-eared maps, ruined tour books, old tires, and her sobs in the night—every night, every night—the moment I feigned sleep. 

    Here the defiling of America and the defiling of Lolita are virtually interchangeable. This self-revelatory moment, coming at the end of a chapter, is very telling in two ways — first the invocation of slime, cognate with the earlier image of the spider, to indicate the repulsiveness of this sexual odyssey, and then, at the end of the little catalogue of the detritus of the journey, interwoven with it and constituting Humbert’s first report of this wrenching fact, Lolita’s sobbing through it all, night after night.

    If Lolita were nothing but this, it would merely be a riveting and also unappetizing representation of a sexually obsessed madman. Yet what is enacted in the novel is more complicated and more interestingly ambiguous. In the afterword that Nabokov wrote to Lolita in 1956 to accompany the American publication of excerpts in The Anchor Review, he offers a curious origin for the idea of the novel. When he was laid up with illness in Paris in 1940, he came across a newspaper story about a caged ape in the Jardin des Plantes that had been given charcoal and paper and produced a sketch of the bars enclosing him. (One thinks of Rilke’s famous poem about the panther in the Jardin des Plantes: “It seemed to him there were a thousand bars / and behind those bars no world.”) The ape inspired Nabokov to write a Russian story with a plot roughly like that of Lolita, but, unhappy with the piece, he destroyed it.

    What does an ape in a cage drawing his prison have to do with Lolita? The obvious answer is that Humbert Humbert’s predicament is of a man hopelessly imprisoned by his obsession. The narrative he produces is the representation of his prison, which is not an enclosure of vertical bars but rather an alluring and also vulnerable girl whom he has desperately fixed as the object of his desire. This transformation of a cage into a sexual obsession has a double effect: Lolita as its object is repeatedly celebrated in radiant prose as a thing of beauty, and the reader is led to perceive Humbert not only with horror but also with a qualified kind of sympathy, as a man hideously trapped in his own impulses that inflict grave harm on someone he comes to love and that in the end destroy him. It is relevant in this connection that the Russian story Nabokov discarded ended with the suicide of its perverted protagonist. The central paradox of Lolita, and one of the effects that makes it a great novel and not just the story of a psychopath, is that one simultaneously recoils from its narrator and is drawn into both the anguish and the lyric exuberance of his point of view.

    Especially in regard to the second of these contradictory responses, the extraordinary style of the novel surely takes the book well beyond the fictional case-study of a madman. Nabokov himself characterized the book as his love affair with the English language, and there are few other novels since Joyce that deploy its resources with such pyrotechnic virtuosity. In the famous first paragraph, which is a spectacular prose poem, Humbert ends by saying, “You can always count on a murderer for a fancy prose style.” Humbert, with his inventor standing firmly behind him, is wonderfully having it both ways: the extravagance of the musical prose might push to the brink of excess, and Humbert is perfectly aware of this, yet the prose is glorious and is surely a part of the reader’s enjoyment of this troubling story. This is the narrative of a man repeatedly doing something morally ugly conveyed in language that is often quite beautiful. The contradiction between subject and style poses a certain moral dilemma for readers, who may well relish the novel and at the same time feel uneasy about the delight they take in it. Perhaps that double-take was part of Nabokov’s intention.

    For a characteristic instance of this tricky balancing act, let us return briefly to Humbert on the davenport in the Haze home with Lolita, who is evidently unaware of his sexual excitement, sitting in his lap. As he approaches climax, deliberately prolonging the pleasure, he says, in a phrase that shrewdly defines his relationship with the pre-pubescent girl, “Lolita had been safely solipsized.” He continues in his habitual extravagant style:

    The implied sun pulsated in the supplied poplars; we were fantastically and divinely alone; I watched her, rosy, gold-dusted, beyond the veil of my controlled delight, unaware of it, alien to it, and the sun was on her lips, and her lips were apparently still forming the words of the Carmen-barman ditty that no longer reached my consciousness. Everything was now ready. The nerves of pleasure had been laid bare…. I was above the tribulations of ridicule, beyond the possibilities of retribution. In my self-made seraglio, I was a radiant and robust Turk, postponing the moment of actually enjoying the youngest and frailest of his slaves.

    This entire scene is the most explicitly sexual moment in the novel — after this, Nabokov pointedly refrains from explicit representations of sex — but it is also something rather different. The murderer’s fancy prose is exquisitely orchestrated in a virtually musical sense, the passage beginning with a spectacular set of alliterations that also incorporates a rhyme: “The implied sun pulsated in the supplied poplars.” The sun is “implied” probably because Humbert, totally focused on Lolita and his pleasure, is not directly observing the sun and the “supplied” poplars on which it is shining, though he does notice the sunlight on her lips. Beyond that detail, Lolita’s presence, radiant for Humbert, is evoked only in the brief phrase “rosy, gold-dusted” because Humbert is completely concentrated on his own sexual excitement.

    Verbal pyrotechnics of the kind one sees here, which are abundantly deployed throughout the novel, are surely a source of delight for readers, perhaps even eliciting a certain sense of admiration for Humbert’s “sensibility” or his inventiveness, though the acts he performs trigger moral revulsion. The novel’s perverted protagonist is manifestly a man of high culture — and, at the same time, following the precedent established by Joyce, avidly attentive as well to popular culture — and so this passage, like so many others in the book, spins a web of allusions in its very representation of sexual arousal. The invocation of Carmen, one of several in the novel, probably refers to Mérimée’s novella rather than to the opera based on it, as Alfred Appel, Jr. plausibly suggests, thus conjuring up from fiction a young and sexually alluring woman, here appearing in a silly ditty. Humbert as a Turk in his seraglio, depicted in still another alliterative chain (“In my self-made seraglio, I was a radiant and robust Turk”), taps into an old cliché of Western culture in which the Orient is figured as a theater of exotic sexual license. Leopold Bloom plays more than once with this same Orientalist notion.

    Again, I think that the articulation of Humbert’s fantasy produces a double effect. A reader may enjoy the exuberance of his inventiveness, but surely what the fantasy reveals about his intentions is repugnant. What is especially telling is the phrase “enjoying the youngest and frailest of his slaves.” Presumably, this compliant or helpless victim of the Turk Humbert’s lasciviousness is almost or actually a child, and the fact that she is the “frailest” of the female slaves in the seraglio betrays his awareness of Lolita’s vulnerability, an aspect of her that may well pique his twisted desire. What I have characterized as the balancing act of Nabokov’s prose in this novel is abundantly evident here.  

    I would like to offer a final example of the odd allure created by Humbert’s writing, a passage in which its sheer literariness is especially prominent. It is Humbert’s first sighting of Lolita, peering at him over her dark glasses as she sunbathes on the patio. Her appearance will present to Humbert, or so he claims, the very image of Annabel Leigh, his first love met on the Riviera when both were still pre-teens, and then forever lost to him through an early death:

    It was the same child—the same frail, honey-hued shoulders, the same silky supple bare back, the same chestnut head of hair. A polka-dotted black kerchief tied around her chest hid from my aging ape eyes, but not from the gaze of young memory, the juvenile breasts I had fondled one immortal day. And, as if I were the fairy-tale nurse of some little princess (lost, kidnapped, discovered in gypsy rags through which her nakedness smiled at the king and his hounds), I recognized the tiny dark-brown mole on her side. With awe and delight (the king crying for joy, the trumpets blaring, the nurse drunk) I saw again her lovely indrawn abdomen where my southbound mouth had briefly paused; and those puerile hips on which I had kissed the crenulated imprint left by the hem of her shorts—that last mad immortal day behind the “Roches Roses.” The twenty-five years I had lived since then tapered to a palpitating point, and vanished.

    The idea of a formative experience in early life imprinting itself so indelibly on the psyche that the person becomes its lifelong captive is, as quite a few commentators have noted, Nabokov’s mockery of the Freudian notion of the causation of sexual pathology by childhood trauma, a notion he famously despised. Given that it plays an altogether determinative role in Humbert’s perversion, one must conclude that the “psychology” of the novel, based as it is on a parody of Freud, can scarcely be regarded as realistic. 

    It is, instead, a central instance in which playfulness is paramount in this representation of a sexual deviant, an unanticipated conjunction of manner and subject that may compel us to reconsider how to think about Humbert Humbert. In one respect, he is a powerful fictional representation of a disturbed person that one can readily relate to troubling manifestations of this kind of disturbance in the real world; in another respect, he is a kind of pawn in a wild literary game. One should note that the caged ape in the Jardin des Plantes breaks through the surface here in Humbert’s self-denigrating characterization of his own “aging ape eyes.” He proceeds to embark on the fantasy of the little princess kidnapped by gypsies — the introduction of gypsies ties in with the allusions to Carmen, who is a gypsy — comically casting himself, a male figure to uncomfortable excess, as the nurse of the vanished infant.

    The story of the kidnapped child rediscovered in adulthood through the recognition of a birthmark is very old, originating in the Greek romances of Late Antiquity and continuing to lead a literary life in the Early Modern period and beyond. Fielding, for example, employs it in Joseph Andrews, birthmark and all, with the kidnappers there identified as gypsies, a European fantasy about them in that era. Nabokov, then, is playing not only with Freud but also with the contrivance of an old tale told many times since the Greeks. What may properly be described as the highjinks of Humbert’s consciousness, however tormented he often may be, is on display as he quickly switches roles from nurse to king, clearly the child’s father, crying for joy over her discovery. The fact that the nurse is imagined to be drunk at this moment is a wildly extraneous and incongruous detail, Humbert indulging in a riot of the imagination as he recreates this old story for his self-explanatory purposes.

    In regard to his function as a narrator of the novel, it should be kept in mind that Humbert speaks in two distinct and intertwined modes: his language reflects an obsessive and, indeed, deranged mind, as in the excessive doubled insistence on “immortal” in this passage; and it also deploys the extravagant resources of Nabokov, shrewd and witty observer and master stylist. One might note here the lovely precision of the adjective in “the crenulated imprint” and the wit of “my southbound mouth” to refer to the ultimate sexual destiny toward which the mouth is traveling. The two twelve-year-olds on the Riviera, it seems, were going a step beyond ordinary pre-adolescent fooling around. The wonderful concluding sentence goes on to strike a distinctively Nabokovian note. It is strongly reminiscent of at least a couple of sentences in Speak, Memory, a book cast in its initial version not long before the composition of Lolita. The literary and, one could also say, stylistic recapture of the past was an urgent undertaking for Nabokov, splendidly achieved in Speak, Memory, and in Lolita the intellectual joke of a “Freudian” childhood experience becomes also, at least at this moment, an emotionally fraught and joyous realization of the past returned in all its luminous presence.

    This concert of surprising and vividly inventive effects in the passage, and elsewhere in the novel as well, leads me to propose an aspect of the readerly experience that one would certainly not expect in the narrative of a sexual abuser of young girls: for all the moral dubiety of the protagonist’s story, Lolita is a pleasure to read, and anyone who denies this is likely to be suffering from terminal moralism or bad taste. In this important regard, we should consider the essential role of parody in this novel, because parody is also not something generally associated with the fictional portrayal of psychopaths.

    Parody, of course, is pervasive in Nabokov’s novels. What its presence necessarily implies is that we must see the novel not as a direct representation of reality — a word, we should keep in mind, that for Nabokov must always be wrapped in scare quotes — but rather as a response to the world outside literature entirely mediated by literature, which is to say, both the novelist’s own literary contrivances and the variegated background of literary tradition on which he chooses to draw. As critics and scholars through the decades have abundantly shown, Nabokov constantly calls attention to the status of his fiction as literary artifice, executing what the Russian Formalists of the early twentieth century referred to as “laying bare the device.” Yet the double edge of this procedure as he practices it may be a little hard to get a handle on. Invitation to a Beheading and Bend Sinister are ostentatiously self-reflexive novels, but they are also serious engagements with the horrors of totalitarianism, whose potential for the wholesale extirpation of humanity was all too evident during the years when they were composed. Much the same is true of the totalitarian state fantasized by Kinbote in Pale Fire. Nabokov’s early novel The Defense abundantly calls attention to its own artifices, as we would expect, but it is also a wrenching representation of a genius trapped in the world of chess that is the vehicle of his genius. One could extend this Nabokovian catalogue of grave human predicaments, historical or personal, confronted through the medium of self-reflexive fiction.

    In Lolita, then, we get the probing portrait of a sexual deviant who kidnaps a girl-child and inflicts great harm on her, a portrait conveyed through a novel that reminds us of its status as an invented fiction and plays quite exuberantly with literary tradition. Parody, again, is ubiquitous. It begins on the first page of the novel with the quotation from Poe’s “Annabel Lee,” a poem that lends the name Annabel Leigh to Humbert’s first love. Is she, after all, a “real” character in a novel or a kind of personified citation, Humbert living out the role of the male speaker in Poe’s poem? Allusions to Poe’s poem are scattered through the novel. The “winged seraphs” of the poem flit across these pages. Here is one especially telling instance: “I should have known (by the signs made to me by something in Lolita — the real child or some haggard angel behind her back) that nothing but pain and horror would result from the expected rapture. Oh, winged gentlemen of the jury!”

    The parodies and satiric references in the novel include Mérimée, A. E. Housman, T. S. Eliot, Arthur Conan Doyle, Pierre de Ronsard, and many other writers. The elaborate development of Clare Quilty as Humbert’s doppelgänger harks back to Dostoevsky’s The Double, the work of a writer whom Nabokov despised, as well as to Poe’s story “William Wilson.” Parody is also deployed generically, as in the old romance story of the kidnapped child discovered through a birthmark, or in the desperate, farcical physical battle between Quilty and Humbert, about which Humbert himself observes, “elderly readers will surely recall at this point the obligatory scene in the Westerns of their childhood.” All these allusions and parodic elements have a paradoxical effect. Humbert is an appallingly twisted figure repeatedly operating in a literary landscape evoked through his own rich background in culture high and low. In the climactic scene with Quilty, we do not cease to see him as a violently jealous lover seething with rage against the man who has stolen his beloved girl from him, but the scene, with its plethora of parodic literary and cinematic references, is also hilarious: fun and horror are interfused and unmediated. The aesthetic does not usurp the ethical, but the ethical is made to co-exist with the aesthetic, and in this way the reader is made to read complexly, and is never let off the hook.

    Nabokov approaches two things with the utmost seriousness: the despicable act of sexually exploiting a child and the instrument of art through which the moral issue is represented. For all of the fun and games of his play with artifice, strong and moving emotions are expressed, as in the great moment near the end, when Humbert discovers the now pregnant Lolita with “her adult, rope-veined narrow hands and her goose-flesh white arms, and her shallow ears, and her unkempt armpits” (which earlier were called “pristine” when he watched her at tennis), and he can assert, “I loved her more than anything I had ever seen or imagined on earth, or hoped for anywhere else.” 

    The defining dimension of art in Lolita must be kept in mind. Parody and the overlapping practice of allusion are essential to the adventure of the novel at the same time that they point again and again to its status as a work of literature. Allusion itself is intrinsic to the dynamic of most literature: you would scarcely think of writing a story or a novel or a sonnet or an epic if you had no familiarity with other such works, and allusion, through which you insert your own writing in the body of its predecessors, remaking them and often challenging them as you invoke them, is a recurrent method for the creation of new literature. Parody may be thought of as a special category of allusion, usually in a comic and critical vein. These twin processes in Lolita constitute an implicit affirmation of the artfulness of the novel, of the pervasive operation in it of literary art. That art is of course manifested in the spectacular prose that Nabokov creates for his deranged narrator, at times deliberately over the top in keeping with his derangement but very often brilliantly original, witty, finely lyrical, and on occasion quite affecting. Here is a moment when Humbert introduces circus performance as a metaphor for art — the same trope will recur in Ada — that suggests how artistic skill can convey the plight of a pathetic and unseemly character, which is precisely what his author has done for him: “We all admire the spangled acrobat with classic grace meticulously walking his tight rope in the talcum light; but how much rarer art there is in the sagging rope expert wearing scarecrow clothes and impersonating a grotesque drunk!”

    Lolita is the most troubling and touching representation of a morally grotesque figure in the fiction of the last century. At the very end of his painful story, in his prison cell, his death imminent, Humbert affirms that he has used his fleeting time to make Lolita “live in the minds of later generations.” He then goes on to proclaim these grand concluding lines: “I am thinking of aurochs and angels, the secret of durable pigments, prophetic sonnets, the refuge of art. And this is the only immortality you and I may share, my Lolita.” The very last word of the novel, as Alfred Appel has observed, is the same as the first, affirming a kind of architectonic unity for the novel as a whole. The reference in “aurochs” to the cave paintings of early man and in “angels, the secret of durable pigments” to Renaissance art set this narrative in the grand tradition of art going all the way back to prehistory, much of it still enduring. There is a certain ambiguity as to who is speaking here at the end. Of course, it has to be Humbert, reflecting on what turns out to be in the end the truly beloved human subject of his story as he senses his own end approaching. Yet his voice merges with Nabokov’s in the proclamation of the perdurable power of art.

    Humbert Humbert is not Vladimir Nabokov: the point is worth emphasizing in our cultural circumstances. And the real identification of the novelist with his protagonist is not in regard to Humbert’s perversion, as some readers of the book have misguidedly imagined, but in the celebration of art as a fixative of beauty and feeling, anguish and love — as a fixative of humanity. It is this, finally, that lifts Lolita above the currents of shifting attitudes toward sexual exploitation or toward sex itself. The novel is obviously not a case study in perversion, as the highly parodic foreword by the fictional psychologist John Ray, Jr. would have it. It is also something more than a riveting fictional portrait of a repellently disturbed person. A murderer may have a fancy prose style, but in this instance the prose style turns out to be both arresting and evocative, at moments sublime, leading us to experience through the moral murk of the narrator a great love story that seeks to join the company of the cave paintings of Lascaux and the sublime angels of Giotto and Raphael, and nothing less.

    The Scandal of Thirteentherism

    Amendment XIII
    Section 1.
    Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.
    Section 2.
    Congress shall have power to enforce this article by appropriate legislation.

    In our age of roiling discontent, liberalism and its historical achievements are under assault from all sides. For the past four years, Donald Trump had little use for truth, science, progress, mutual respect among races and identities — all the liberal ideals embodied in the founding documents and embedded in the history of American politics. Despite overseeing the military that long ago defeated the Confederacy, Trump nonetheless made the Lost Cause his own, becoming the protector of Confederate monuments and place names, and this support has gained him the appreciation of white nationalists and other “good people” like the ones who marched in Charlottesville. Trump had little use for the colorblind state that liberals associate with the Party of Lincoln.

    Even with Trump out of the Oval Office, Trumpism continues to be the perfect ideological provocation for those on the other side now questioning America’s central political tradition. It sets the mood for their revisionism. At war with classical liberalism and “neo-liberalism” alike, the progressives are busy rewriting American history. They want a past that reflects their dim view of the American record and justifies certain policies to address racial grievances. American history, they now instruct, is dominated by topics that liberals allegedly marginalized, including settler colonialism, slavery, white supremacy, whiteness, and peoples of color. The editor of the eminent American Historical Review writes that he aims to “decolonize” American history. Ibram X. Kendi’s book Stamped from the Beginning described racism as present at our very origins. Reducing four hundred years of black history to victimhood, the New York Times’ 1619 Project echoed this sentiment. Racism explains slavery, which in turn explains the American Revolution and much else worth knowing about American history. Internal conflicts among whites — based on religion, ethnicity, or class — hardly explain anything, and there is certainly nothing exceptional about America.

    Rather than claiming their own version of the liberal tradition articulated in the Declaration of Independence, the Reconstruction Amendments, the promise of the New Deal, and the Civil Rights Acts of the 1960s, the progressives play up the failures and the betrayals of previous generations of liberals, even as they are suspicious or grudging about the Biden victory. Unwittingly taking their cue from the Nation of Islam, they view American liberalism itself as a species of white supremacy, national in scope and operation: white supremacy, on this account, is not an aberrant tradition rooted in the American South, as most twentieth-century liberals saw it. They feel little solidarity with American liberals, except those they have dubbed radical and incorporated into what they call the “black radical tradition,” especially Frederick Douglass, Ida B. Wells, and Martin Luther King, Jr. Like some of the activists in the street, they would topple Jefferson, Lincoln, and Grant along with the Confederate generals. They see liberalism, past and present, as a huge obstacle to the remaking of America into what amounts to a fully integrated society with a social welfare state for all.

    One of the pillars of American liberalism under assault is the Thirteenth Amendment. Many Americans now believe that slavery never ended — not despite but because of the amendment that fulfilled the promise of the Emancipation Proclamation. In the words of Bryan Stevenson, the head of the Equal Justice Initiative turned historian, slavery never ceased; it merely “evolved.” In his thinking and that of other Thirteenthers, it was the great amendment of 1865 that led to the re-enslavement of black people and mass incarceration. The key to understanding its “evolution” is the exception clause in the amendment, which ended slavery and involuntary servitude “except as a punishment for crime.” Under cover of those words, the Thirteenthers claim, ownership of slaves shifted from individuals to the state, even as the Thirteenth Amendment gave the American people, especially the newly freed people, the false impression that America had ended slavery once and for all. Some Thirteenthers do not simply believe that the amendment led to mass incarceration; they also hold that the loophole represented a diabolical scheme concocted by whites as a race. What all Thirteenthers share is the belief that the loophole created a seamless history of black slavery from the seventeenth century until today.

    When a person of Stevenson’s commitment and stature gives such a dim appraisal of the efficacy of an amendment signed by Abraham Lincoln, attention must be paid. In his crusade to link mass incarceration to the Thirteenth Amendment, he is not alone. A wide array of historians, cultural studies scholars, activists, and artists have endorsed this view in full or in part, including Henry Louis Gates, Jr., Kimberlé Crenshaw, Khalil Muhammad, Alex Lichtenstein, Kanye West, and Ava DuVernay. Whatever chance this interpretation had of burning out on its own disappeared when DuVernay’s documentary 13th took the nation by storm in 2016. It is now taking root in the nation’s schools: after watching DuVernay’s film, my students believed that the convict lease system (about which more below) re-enslaved most blacks. They were shocked to learn that the percentage was much less than one percent.

    An idea born in the 1960s has become a popular and pseudo-scholarly belief that many want to use as a basis for making public policy. Not many have gone as far as Kanye West, who — with all the erudition at his disposal — has called for the repeal of the Thirteenth Amendment. Most Thirteenthers aim for an amendment to close the loophole. Their objective is to put an end to mass incarceration, which is a fine objective. But the key to ending it, they suggest, lies in removing its supposed economic justification — black prison slavery.

    Thirteentherism is best viewed as another episode in a long tradition of using history as a weapon in a political struggle. At times, the distinction between historical truth and propaganda gets lost. Yet in keeping with our era, bad history and worse social science have replaced truth as the intellectual underpinning for a great deal of thinking about social change. Rather than making the incontrovertible case that mass incarceration is an inherent evil, Thirteenthers seek to hitch their cause to the moral opprobrium that already exists against chattel slavery. They have little use for differences and distinctions, and simply wish to call incarceration slavery. Never mind that Americans of African descent have always held historical truth as sacrosanct, believing that the dispelling of falsehoods is the proper foundation for black people’s progress. Thirteentherism breaks with that black historical tradition of truth telling, hoping to end convict slavery and in the process misrepresenting some of the most momentous changes in American history.

    The origins of Thirteentherism lie in the intellectual ferment of the 1960s. Prisoners commonly described themselves as slaves, whether on the prison plantations in the South or the workshops in other regions, since they all worked for little or nothing. It took an epiphany by Lee Wood, a prisoner in the California system, to link the Thirteenth Amendment to his condition. As part of a radical reading group, Wood read the amendment aloud to his comrades, and the loophole — “except as punishment for crime whereof the party shall have been duly convicted” — suddenly appeared to explain his plight. Few would do more to spread the idea. Once he had served his time, Wood dedicated himself to ending this “slavery,” founding the Coalition Against Prison Slavery (CAPS). He became well known for spreading literature on the role of the Thirteenth Amendment, and with funding from the Quakers he published, along with his wife, a short volume tracing the history of prisoners as slaves.

    Wood’s idea of removing the loophole from the Thirteenth Amendment gained some traction in prison activist circles by the mid-1970s. He was not so much interested in ending imprisonment as in ending the exploitation of prisoner labor. Not only did CAPS receive funding from the Quakers; Wood also got the American Friends Service Committee to endorse his idea of removing the exception clause from the Constitution. From CAPS, the idea spread. In 1977, the New Afrikan Prisoners Association in Illinois petitioned the United Nations: “We protest the 13th amendment which legalizes slavery…” In 1980, William Coppola, a prisoner in Texas, cited the amendment as proof that slavery was alive and well in America. According to increasing numbers of prisoners, the Thirteenth Amendment had done them dirt. Not only did it not end slavery, it created more of it.

    By the 1990s, the intellectual influence of prisoner advocacy spilled over into academic circles. Before joining the professoriate, Alex Lichtenstein worked on behalf of prisoners, then followed his interest into scholarship. He published a history of convict leasing, called Twice the Work of Free Labor. Notable mostly for its interpretation that the system contributed greatly to the industrialization of the South, the book promoted the Thirteenther view of the amendment that ended slavery: “Ironically, this [convict lease] system emerged immediately on the heels of the passage of the Thirteenth Amendment to the Constitution, which intended to abolish bondage but permitted involuntary servitude solely as a punishment for crime.” He named one chapter, “New South Slavery,” and another, “Except as a Punishment for Crime.”

    Thirteentherism gained most of its academic visibility and activist credibility through the writing of Angela Davis, who embodies the continuity between the prison activism of the 1960s and the modern prison abolition movement, which seeks to make prisons obsolete here and abroad. Her academic labors made cultural studies a central venue for the study of “the carceral state.” Reminiscent of W. E. B. Du Bois, who planted the seed for whiteness studies with a passing comment about how whites benefited from black oppression, Davis wrote an essay on Frederick Douglass’ failure to oppose convict leasing and other forms of labor oppression. Of the amendment’s clause, she wrote: “That exception would render penal servitude constitutional — from 1865 to the present day.” As if this would have been impossible without the clause, she went on to say: “That black human beings might continue to be enslaved under the auspices of southern systems of justice (and that this might set a precedent for imprisonment outside of the South) seems not to have occurred to Douglass and other abolitionist leaders.” After her essay, the Thirteenth Amendment’s loophole became intellectually important.

    Wood, Lichtenstein, and Davis see in the Thirteenth Amendment a constitutional power to establish convict or prisoner slavery, yet they know that the various British colonies and American states had exercised legal authority to create systems of convict slavery. They often carry on as if the amendment were meant for blacks only — the original post-Civil War black code, if you will. But before and after the founding of the United States, convicts had been forced to labor against their will without recompense. During the colonial era, more than 50,000 white convicts were given the most extreme Hobson’s choice: an indenture (contract) to slave for a term of years in British North America or to be put to death for their crime. They were often sold to work for masters at the auction blocks where Africans were sold, and both types of slaves, convict and chattel, were known to run away together.

    The American Revolution ended the importation of white convicts as slave labor, but the new sovereign states all put those deemed criminal, regardless of racial designation, to work without compensation in one form or another. In the new penitentiaries some worked directly under the supervision of the state, others worked at the prisons under the control of lessees, and others still off site. By the end of the Civil War, the power of colonies and then states to inflict involuntary servitude or slavery for a term on whites and others as convicts had existed for over two hundred and fifty years — a period longer than the age of chattel slavery. Those who see a white conspiracy to re-enslave blacks as convicts must answer an obvious question: why would Congress need to create a special constitutional amendment for blacks to make convict slaves of them? The states had done that very thing to whites for centuries. The exception clause merely recognized the existing police power of the states.

    The history of white convict slavery notwithstanding, Thirteenthers often treat the amendment as a federal black code that applied uniquely to the freed people and blacks in general. Among many others, Lichtenstein and Davis suggest as much when they imply that something special could have been done in the language of the amendment to prevent the criminalization and re-enslavement of the freed people. In their account, the language as it stands empowered the Southern planters, and they point to a Southerner or two who read the amendment as a veritable black code for the treatment of freed people. As an alternative that could have made things different, Thirteenthers point to Senator Charles Sumner’s attempt to offer a version of the amendment that outlawed slavery and made no mention of crime and punishment. Many believe that his amendment, without the exception clause, would have changed history.

    Yet Sumner simply wanted an amendment that clearly embodied the abolitionists’ belief that blacks and whites would be free and equal under the law. Removing the exception clause would have ended chattel slavery, but it would have left convict slavery — for blacks and whites alike — in place. Criminalization and imprisonment are perfectly compatible with Sumner’s desired wording of equality under the law. And racism would still have harmed blacks at the hands of the states, just as it had plagued antebellum free blacks, North and South. Indeed, something more than an amendment touting equality under the law was needed.

    The Republican-dominated Congress was interested in ending chattel slavery, nothing more, nothing less. They decided on the language that had been an effective chattel-slavery killer since 1787. The exception clause had become part of American federal law when Congress passed the Northwest Ordinance in 1787. Congress prohibited chattel slavery in the territories ceded to the federal government — except for those slaves found guilty of crimes, who could be subject to “involuntary servitude or slavery.” Thomas Jefferson, who most likely drafted the provision, wanted to end the expansion of chattel slavery. Congress required the exception clause as part of every constitution submitted by territories to enter the union as a free state. Over time, Jefferson’s proviso, as Sumner called it, ended chattel slavery wherever it was enshrined in a state constitution. It also made clear that Congress was not usurping a new state’s police power to punish criminals in a manner consistent with the original thirteen states — some of which, as colonies, included enslavement.

    Apparently few Republicans, including Sumner, understood that the exception clause allowed for a term of slavery upon conviction. The potential difference between Sumner’s equality-under-the-law proposal and the loophole version became clear immediately. In early 1866, the United States Army quashed the enforcement of a black code in Virginia that allowed freed people who did not sign a contract to be sold as slaves for a term. In November 1866, a judge in Anne Arundel County, Maryland, sentenced three black people to be sold for a term of service to local planters. The decision alarmed Sumner and other Republicans. The judge, in effect, was seeking to apply the old free-person-of-color laws. The sentences were never carried out and the judge did not ultimately face prosecution. Yet Maryland was soon forced to remove its discriminatory laws. The loophole for any form of chattel slavery, even for a crime and even for a term, was closed. The Thirteenth Amendment was emphatically not a black code.

    To end convict slavery without compromising the principle of equality under the law, the Republican-dominated Congress would have had to pass a version of the Thirteenth Amendment, or an additional one, that explicitly forbade convict slavery, not just chattel slavery. The new fundamental law would have done for whites what Thirteenthers wish it had done for blacks. It would have reduced the states’ police power to decide the appropriate punishment and pushed the costs for prisons entirely onto state taxpayers. In her impressive book The Crisis of Imprisonment, Rebecca McLennan laments the failure of the Framers to end prison labor. She points out the ubiquitous tensions within states to, on the one hand, bar unfree, unpaid prisoner labor from competition with free labor, and, on the other, meet the needs of taxpayers to defray the cost of a penal system.

    A vote to end convict slavery in the Thirteenth Amendment likely would have divided Congress and ultimately the nation. Northern and border states would probably have been unanimously opposed to an additional section to the Thirteenth Amendment that usurped state power and left them with an expense. Even the version that retained the state’s power to use prisoners as involuntary or slave labor did not have universal support among Northern and border states; Delaware and Kentucky did not ratify until much later as it was. Texas and Mississippi held out. It was impossible to reach the necessary twenty-seven states to ratify the amendment without two of the rebellious states on board. With greater opposition from Northern and border states, a movement by Southern states to unite against the Thirteenth Amendment might have succeeded — though such an outcome was unimaginable. The political appetite to end convict labor, however it was defined, did not exist.

    For the conspiracy theorists among the Thirteenthers, this insistence upon the limits of the politically possible will simply be received as further proof of an alliance between Northern and Southern whites. In their thinking the loophole is not happenstance, but a plan that allowed whites to catch and re-enslave black people. As early as 1977, the New Afrikan Prisoners Association in Illinois, in a petition to the United Nations against the Thirteenth Amendment, wrote, “It was never the intention of the rulers of the u.s. to ‘abolish’ slavery.”

    The other elements of the Thirteenthers’ re-enslavement plot are the black codes and the convict lease system. Most professional historians, including Thirteenthers such as Lichtenstein, know two things about the black codes that the amateurs ignore: first, that the aim of the black codes was to push blacks back onto the plantations, not into jails or prisons; and second, that the black codes lived and died before the rise of convict leasing as a system. The Civil Rights Act of 1866, various court decisions, and the Fourteenth Amendment eliminated them. In most states, the convict lease system started years after the black codes had been outlawed. Just as Southerners did not need a loophole to create the convict lease system, they did not need black codes to discriminate against black people and to convict them of crimes. In the Thirteenthers’ narrative, the exception clause and the black codes are best understood as narrative devices to enhance the effect of their propaganda.

    Although historians of convict leasing now argue that it served the industrializing New South, not the cotton, tobacco, or rice planters, Thirteenthers often cannot shake the image of their imaginary black codes being used to send the re-enslaved ex-slaves back to their former masters on the plantation. No less a figure than Henry Louis Gates, Jr. has produced a short video in which he argues that convict leasing and the black codes were part of a “labor system that took shape in the late nineteenth century [and] developed coercive means to ensure that cotton remained king.” The convict lease system is the indispensable element in the Thirteenthers’ narrative, and every effort is made to play up its size, its duration, and its profitability. They use percentages to show that the prison population shifted from white to black in a decade or so after the end of chattel slavery. And they emphasize the growth of the black prison population, how quickly it doubled. 

    In both cases, no effort is made to explain how a system largely closed to blacks in the antebellum years would show dramatic annual increases without much change in the size of the prison population. And little attention is paid to the size of the system throughout its duration. Instead the impression is given that re-enslavement captured a huge percentage of the black population. I repeat: the historical truth is that it captured less than one percent.

    The focus on the late nineteenth century gives a false image of incarceration then and now. In Georgia, where we have the best numbers, about one third of one percent of the black population was imprisoned for most of the convict lease era. In 2017, by contrast, 1.4 percent of the black Georgia population was in state prisons. That is almost five times as many as in the convict lease era. This tracks well with the nation as a whole in the era of mass incarceration, when at its peak, in 2006, 1.5 percent of the black population found itself in the prison system.

    The small, brutish system of convict lease proved to be shorter in duration than the Thirteenthers suggest. They point out that Alabama’s system existed until 1928, but rarely, if ever, do they note that it was an outlier. In the 1890s, Virginia, Tennessee, South Carolina, North Carolina, and Mississippi ended theirs. By 1913, only Florida and Alabama were engaged in leasing. DuVernay’s film gives the impression that convict leasing and lynching caused the Great Migration out of the South, making blacks “refugees.” Yet before the start of World War I convict leasing was already a moribund institution, barely a shadow of the monster it had been, and lynchings were in decline. Ironically, the white supremacist governments that brought the nation the illiberal institutions of state-mandated segregation and black disenfranchisement ended the system most associated with chattel slavery. Moreover, they put out of business the only profitable penal system in American history.

    While trading on images of the Southern prison structure — convict leasing, the chain gangs, the prison plantations — Thirteenthers ignore the form of convict slavery that engulfed most prisoners in America from emancipation forward. In the North and the West, penitentiaries predominated through most of the nineteenth and twentieth centuries, but in the Thirteenther narratives they simply do not appear, because the amendment is treated as a federal black code, enslaving only blacks, regardless of work life. In non-Southern prisons, the leasing out of prisons and prisoners stopped in the late nineteenth century, and production under prison control for state use became the system. With northern migration, blacks found their way into them. Undoubtedly, racism resulted in harsher treatment, but it did not Southernize the prison regimes as the Thirteenthers suggest. Convict leasing, road chain gangs, and prison plantations did not appear. Racism abounded, but it was hardly new in the West or the North.

    Together the Northern and Western prisons dwarfed the Southern system in size and scale. Before the rise of mass incarceration, roughly a third of all black prisoners were serving time in them. By that time prisons were on the brink of rebellion, but the nature of prison life was different. Despite the inherent repression in all prison life, black convicts in the North and West found time, like their white counterparts, to pursue self-improvement. Malcolm X and many others like him became autodidacts with the assistance of prison libraries. Some received more formal education through vocational programs. If Northern and Western prisons produced white writers, they also produced black ones such as Chester Himes and Eldridge Cleaver. It was in federal prison that Angelo Herndon wrote his autobiography Let Me Live.

    To maximize the propaganda value of black men in conditions reminiscent of chattel slavery, the Thirteenther narrative ignores the growth of black incarceration outside the South, hinting that Southern ways moved north. Yet the rise of penitentiaries in the South, along with the decline of the road-building chain gangs, suggests that the Southern penal system increasingly came to resemble the rest of the country.

    Having used the black codes and convict leasing to create the impression that the Thirteenth Amendment had subjected black people to massive, profitable, and brutal re-enslavement, Thirteenthers continue their discussion into the true age of mass incarceration, from the 1960s forward, as if nothing of substance had changed since 1865. Little thought is given to the inclusion of Hispanic bodies among the slaves, and white prisoners remain merely unfortunate by-products caught in the nets of a system that was designed to enslave blacks. The image presented is that of the state raking in profits from selling black labor to Fortune 500 corporations, consuming the fruits of black labor in prison industries and the various and sundry centuries-old plantations in the South. The truth is that most “convict slaves” are actually idle, and that the state and federal governments make revenues but never profits. All this seems wholly lost in the conversation.

    And as serious scholars know, the origins of the expensive and unprofitable system of mass incarceration are to be located in the changes of the 1960s, not the 1860s. Thirteenthers have little use for the works of scholars such as James Forman, Jr. and Elizabeth Hinton, who see mass incarceration arising largely from party politics and political choices made by politicians and communities, including African Americans. They do not take seriously scholars such as Ruth Wilson Gilmore, who argues for examining the political economy — not narrow politics or the pursuit of revenue from prison labor — to explain the rise of mass incarceration. These are traditional scholarly debates, and so they lack a grand narrative of slavery or an explosive Jim Crow metaphor. They are not useful for propagandists.

    Having penetrated the academy, popular culture, social media, and the classrooms, Thirteentherism has also become a basis for social activism and policymaking. Lee Wood’s old CAPS agenda of ending prison slavery by removing the loophole from all American constitutions has been taken up by many. An increasing number of activists believe that removing the “profit motive” from mass incarceration would deprive the locking up of millions of people of its rationale.

    Nationwide prison strikes have become almost annual occurrences. In 2016, the promotion and release of DuVernay’s film overlapped neatly with a nationwide prison strike to end the abuse of prison labor. The strike originated in Alabama, and the prisoners leading it invoked the role of the Thirteenth Amendment in making them slaves and protested against being forced to work for little or no remuneration. The strike involved more than twenty thousand prisoners in twenty-four prisons. In 2018, coinciding with the anniversary of the uprising at Attica, prisoners in seventeen states struck again and made ending prison slavery one of their ten demands. “The Thirteenth Amendment didn’t abolish slavery,” said the strike’s spokeswoman, Amani Sawari. “It wrote slavery into the Constitution. There’s a general knowledge that the Thirteenth Amendment abolished slavery, but if you read it, there’s an exception clause in the abolishing of it. That’s contradictory — that something would be abolished and there would be an exception to that.”

    Beyond the prison strikes of recent years, there has been ongoing pressure from activists to sever the purported link between constitutions, state and federal, and the use of convict slavery. Most of the calls from prison activists and reformers are for the country to “amend” the federal Constitution to end all forms of slavery. On August 19, 2017, for instance, the Millions for Prisoners March on Washington, DC proclaimed that “We DEMAND the 13th amendment ENSLAVEMENT CLAUSE of the United States Constitution be amended to abolish LEGALIZED slavery in America.”

    The activism on the ground had some impact on presidential politics in the recent election, but not much. Among the major candidates, only Bernie Sanders invoked the amendment. In making a case against the continuation of private prisons, he argued falsely and inexplicably that they had their origins in “chattel slavery.” After the Civil War, he held, “prison privatization expanded rapidly when the 13th Amendment, which outlawed slavery but continued to permit unpaid penal labor, was ratified… Due to an extreme shortage of labor caused by the emancipation of slaves, former Confederate states exploited the legalization of penal labor by incarcerating newly freed black people.” To his credit, Joe Biden, despite his record of stoking the growth of mass incarceration while in Congress, did not pander or traffic in this nonsense. As he sought to reverse his record and come out for the reduction of incarceration rates, he did not invoke the Thirteenth Amendment in his policy statements.

    At the state level, however, the situation has been different, and in the long run might bear fruit nationally. Given how deeply rooted the link between the Thirteenth Amendment and convict slavery has become in African American social thought, state-level politicians are responding to activists’ calls to end the loophole. In various states, efforts are being made to remove the language. In Colorado, activists laboring under that assumption pressed for constitutional change and achieved the removal of the exception clause. In 2016, they succeeded in placing on the ballot “Amendment T,” which, they believed, would have prohibited the state from using prisoners as laborers without their consent. Despite a lack of opposition, the amendment failed by two percentage points because of its confusing language. Two years later a similar amendment passed, but a strange thing happened along the way — no one, not even its advocates, believed that the new amendment, despite its removal of the exception clause, would prohibit prisoners from being forced to work. By the time the bill was put before the people of Colorado, it became clear, as Vox reported, that the removal of the clause would not end virtually uncompensated labor. This was not a reform but a gesture; and too often reformist energy is squandered on gestural politics.

    Even in the wake of Colorado’s cosmetic change to its social contract, the movement to purify all state constitutions has not declined but rather increased. Policymakers and activists from a number of states (Utah, Colorado, Nebraska, South Carolina, New Jersey) have banded together recently to form “a national coalition fighting to abolish constitutional slavery and involuntary servitude in all forms.” During the 2020 election, the red states of Utah and Nebraska revised their constitutions to eliminate the exception clauses with complete bipartisan support. These victories are largely symbolic, because the revisions seemed to interpret slavery as chattel slavery or as involuntary labor performed for private enterprises, not the state. Utah’s Department of Corrections will continue to require prisoners to perform work within the prison and to volunteer for other prison-labor opportunities, including with private industries. In Nebraska, State Senator Justin Wayne introduced a constitutional amendment to remove the exception clause in the state constitution. He assured voters that prisoners were paid a nominal amount for their labor. For many prison reform advocates, that nominal amount represented nothing less than convict slavery.

    The only state initiative thus far that has had the potential to end convict slavery or any form of involuntary servitude is the one recommended by policymakers in New Jersey. (This initiative did not get on the ballot in the last election.) Because New Jersey was one of the original thirteen colonies, its constitution never carried Jefferson’s proviso, which was imposed on territories brought into the union as anti-chattel-slavery states. With convict slavery stretching back to its early colonial history, New Jersey would be breaking not with the Thirteenth Amendment but with its most deeply ingrained tradition.

    Tying the abolition of convict slavery to the Thirteenth Amendment implies that the institution has shallow roots. Moved by the myth of Thirteentherism, however, lawmakers are adding language to the constitution rather than subtracting it in order to uproot an ancient practice: “No person shall be held in slavery or involuntary servitude in this State, including as a penalty or a punishment for a crime.” As the language on the ballot that will be presented to the voters of New Jersey explains, “This amendment would prohibit forcing an inmate to work as a penalty for a crime, even if they are paid. This amendment would not prohibit inmates from working voluntarily.” And as Democrat Ronald Rice, one of the amendment’s sponsors, put it, “We must set right the treatment of prisoners in our prison system and guarantee that no one is unwillingly forced to perform work, whether they are being compensated two dollars or not. Our justice system continues to tarnish our nations [sic] principles but this amendment would set New Jersey on the right path to finally ending indentured servitude in our state once and for all.”

    Only a new amendment to the Constitution of the United States could end convict slavery everywhere, and now, thanks to the bad history of the Thirteenther movement, such a legislative effort exists. With the support of the Congressional Black Caucus, the constitutional amendment introduced by then-Representative Cedric Richmond (who now works in the Biden White House) reads: “Neither slavery nor involuntary servitude may be imposed as a punishment for a crime.” Here is the language that has been calling out to those opposed to convict slavery from the time of Jefferson’s proviso. If it makes it out of the House, it will certainly die in the Republican Senate, though Senator Jeff Merkley of Oregon has expressed agreement with the Thirteenther argument. More than likely, the Thirteenther amendment will become a perennial legislative offering, like the late Congressman John Conyers’ reparations bill.

    Born of the use of the Thirteenth as propaganda, the proposed Twenty-Eighth Amendment will ultimately rise or fall on its proponents’ ability to win on the merits. Rather than trying to persuade Americans that mass incarceration is an inherent and expensive evil, which is an indisputable proposition, Thirteenthers have sought to trade on America’s moral distaste for chattel slavery, pretending that convict slavery was its offspring. When the false association is stripped away, the proposed amendment will call for Congress and then three-fourths of the states to vote for millions of prisoners to shift from being mostly to completely idle with taxpayers footing the cost. It would not have won in 1865 and it is unlikely to do so now.

    And there is a larger issue, a different integrity, at stake here. The Thirteenther use of history as propaganda to achieve a political end marks a break with the tradition of black history. From the antebellum period forward, black historians, professional and amateur, have believed that historical falsehoods justified black oppression and that the truth would therefore be an ally in the movement for racial justice and equality. By distorting the history of the Thirteenth Amendment and by denying one of black people’s greatest triumphs in American history — the destruction of chattel slavery — this generation has sought to emancipate itself by diminishing its ancestors’ prized accomplishment. It has also sought to free itself from culpability for a system that all Americans, including blacks, had a part in making. The legion of black intellectuals who have conflated convict labor and chattel slavery has reached the limits of false persuasion. History as propaganda works better to rationalize the status quo than to usher in change. Rejecting the historical meaning of the Thirteenth Amendment is not an avenue to progress.

    Theseus

    A young king, swashbuckling, expensively schooled
    in rhetoric and swordplay, with your gold-threaded tunic and plumed
    helmet fitted over your patrician nose:
    so you tossed bandits off cliffs and captured a bull—what do you know
    about war? Labor is for peasants, labor pains
    for women. But you waded among the suppurating dead
    on the fields of Thebes and broke the pollution law
    by washing corpses with your own royal hands.
    “Which bodies are mine?” I thought, as Bill
    Arrowsmith paced back and forth holding out his hands—
    “With his own hands!” he kept saying. “Defiled!”
    With his own hands he offered us glasses of dark red wine.
    We perched along his couch, on his armchairs,
    taking notes. We had not yet touched our dead.
    Our labors were just beginning, mainly
    in library stacks and the pages of dictionaries.
    “Sophrosyne,” Bill barked, “The virtue of moderation,”
    with his round, sun-browned, wrinkled satyr’s face
    and black eyes flashing immoderately
    just a few years before he toppled, alone
    in his kitchen, his heart ceasing its labors and his corpse
    becoming the labor of someone else’s hands.

    The Flood

    —when angels fell out of the bookcase along with old
    newspapers, torn road maps from decades past, and a prize edition
    of the Très Riches Heures du Duc de Berry: suddenly

    the catalogue tumbled. The painting, the show, Peter Blume’s
    Recollection of the Flood, the studio where I slept
    as a child those nights when moonlight fingered

    the looming canvases, the forest of easels, the jug of brushes like a spray
    of pussy willow boughs—all surged. In Peter’s dream
    the restorers stand on scaffolding to paint

    the frescoed shapes between lines the flood has spared:
    and won’t some massive wave of oil
    and shit always storm a city’s heart? Restore, restore—

    there on the ghostly grid the angels dance
    holding hands in a two-dimensional ballet
    of bliss, taking on substance with each cautious dab

    to whirl with wings spread over the very rich hours
    of what we’ve lost. For they are sleeping
    on the bench at the foot of the scaffold, the refugees—

    the exhausted woman clutching her purse, a scrawny girl
    collapsed in her lap, the huddled, bony old man,
    bald head in his hand. And everything they’ve saved

    lies at their feet in a quilt bundle, or stuffed in a box
    tied with twine, or in that suitcase, desperately genteel.
    Only the boy is awake. The artist stands

    apart. Holds in his hands a sketch we cannot see.
    Blonde curls, like Peter’s. Remembering, perhaps,
    Cossacks, the flight from Russia, the ship, the Brooklyn

    tenement where he learned to draw.
    A jug of brushes stands on the windowsill.
    The angels keep twirling. I hear, beyond the door,

    the growl of mountain streams all dragoning down.

    “Dead Flowers”

    If you hurt yourself before
    someone else hurts you, is that
    homeopathic? Watch me prick

    poison into my skin, sign
    my name in pain. Watch me miss
    the appointment, cancel the call. Watch me

    gulp smoke and receive a certificate
    of enlightenment between
    the smeared egg-yolk horizon to the west

    and the bone-white eastern sky:
    the emperor appoints
    me to the Poetry Bureau and I

    declare myself Queen of the Underground.
    On the back road, the turkey vulture
    plucked the guts from the squashed squirrel,

    then flapped up to the dead
    branch of the shagbark hickory
    to examine us examining

    the carcass. O sacerdotal bird
    with your crimson scalp and glossy vestments, teach
    us to translate the spasm, the cry, the dis-

    integrating flesh, the regret.
    What can be made of all this
    grief. Over the butter-

    yellow, humming, feather-grassed midday meadow
    skim the shadows of vultures: ghostly, six-foot
    wingspan, V, swiftest signature, turning death into speed.

    Burning the Bed

    Carefully you balanced the old mattress
    against the box spring to create a teepee on that frozen December patch
    behind the house, carefully

    you stacked cardboard in the hollow and touched the match
    to corners till flame crawled along the edges
    in a rosy smudge before shooting

    twenty-five feet into darkening air. Fire gilded each
    looming, shadowed tree, gilded our faces as we stood with shovel and broom
    to smack down sparks.  So much

    love going up in smoke. It stung
    our eyes, our lungs. Pagodas, terraces, domes, boudoirs
    flared, shivered, and crumpled

    as the light caved in, privacies curled to ash-wisp, towers
    toppled, where once we’d warmed each limb,
    fired each nerve, ignited

    each surprise. And now at dusk, our faces reddened in heat
    so artfully lit, we needed all that past, I thought,
    to face the night. 

    Balanchine’s Plot

    The great choreographers have all been more than dancemakers, none more so than George Balanchine. He was in truth one of the supreme dramatists of the theater, but he specialized in plotless ballets with no named characters or written scenarios, and so this aspect of his genius has gone largely unexamined. Instead, everyone accepts the notion — it has become the greatest platitude about him — that he was the most musical of choreographers, a notion that, for all his musical virtues, should be qualified in several respects. Even at this late date, there is much about Balanchine that we still need to understand. He belongs in the small august company of modern artists who shattered the distinction between abstraction and representation. His work renders such categories useless.

    Balanchine’s dance creations often eliminate ingredients that others regarded as the quintessence of theater. The performers of his works are verbally and vocally silent. Facial expressions and other surface aspects of acting are played down. In many of his works, costumes are reduced to an elegant minimum: leotards and tights, “practice clothes,” often only in black and white; or simple monochrome dresses or skirts. In particular, he pared away layers of the social persona of his dancers, so that on his stage they become corporeal emblems of spirit. Liebeslieder Walzer, for example, his ballet from 1960, has two parts. In the first part, the four women wear ballgowns and heeled shoes; in the second, they dance on point and in Romantic tutus. “In the first act, it’s the real people that are dancing,” Balanchine told Bernard Taper. “In the second act, it’s their souls.”

    Serenade, one of his supreme creations, made in 1934 to Tchaikovsky’s Serenade for Strings, is a masterpiece for many reasons. No ballet is more rewatchable. (If you don’t know it, there are at least two complete versions on YouTube.) Several of its configurations and sequences are among the most brilliantly constructed in all choreography. Pure dance is threaded through with strands of narrative, suggesting fate and chance, love and loss, death and transcendence. It consists almost as much of rapturous running as it does of formal ballet steps. Classicism meets romanticism meets modernism: it is all here. The opening image is justly celebrated, a latticed tableau of seventeen women who, in unison, enact a nine-point ritual like a religious ceremony. At its start, they are extending arms as if shielding their eyes from the light; at its end, their feet, legs, torsos and arms are turned out, open to the light like flowers in full bloom. This has often been interpreted as transforming them from women into dancers. Taking Balanchine’s point about Liebeslieder, we might go further and say that the opening ritual of Serenade transforms them into souls.

    Serenade also has an important place in history as the first work that Balanchine conceived and completed after moving to the United States of America. A serial reviser of his own work, he kept adjusting it for more than forty years. Only around 1950 did it begin to settle into the form we know now, with its women in dresses ending just above the ankle. (The nineteenth-century Romantic look of those dresses is now definitively a part of Serenade: it remains a shock to see photographs and film fragments from the ballet’s first sixteen years, with the women’s attire revealing knees and even whole thighs. Still, if you see the silent film clips of performances by the Ballet Russe de Monte Carlo in 1940 and 1944, you can immediately and affectionately recognize most of their material as Serenade.) For more than fifty years, Serenade has been danced by non-Balanchine companies around the world; in the last decade alone, beloved by dancers and audiences, it has been performed from Hong Kong to Seattle, from Auckland to Salt Lake City.

    Even so, for musical purists it is unsatisfactory. Tchaikovsky’s Serenade for Strings, composed in 1880, was a score in which this notoriously self-critical composer took immediate and lasting pride: he conducted it many times, not only in Russia but in many other countries too. He made it in cyclical form: his opening movement, called Piece in the Form of a Sonatina, opens and closes with powerful series of descending marcato scales, while the final movement returns to descending scales with a jaunty Russian theme. Throughout the work, the composer plays with musical effects as if he had the alchemist’s stone — taking the weight off those descending scales by changes of orchestration in the first movement; reversing them in the climbing legato scales of the third movement, the Elegy; and returning at the end of the fourth movement to the work’s opening scales, only to accelerate and show how closely they are related to the Russian theme. Tchaikovsky was deeply proud of his status as the most internationally successful Russian composer of all: by naming the final movement Tema Russo he reminds us that, if he had any extra-musical agenda in his Serenade for Strings, it was to win renown for the music of his nation.

    Yet Balanchine made Serenade only to Tchaikovsky’s first three movements. He was following the precedent of Eros, a ballet by Michel Fokine in 1916 which Balanchine had known in Russia, and which likewise omitted the final “Russian Theme.” (Although Balanchine remarked at the end of his life that he had not much liked Fokine’s ballet, he took several other ideas from it for Serenade.) A devotee of Tchaikovsky’s music, he may have omitted the concluding Russian Theme in 1934 merely because his new American students did not yet have the speed and the brilliance that in his view the Russian Theme would require; later in the decade he sketched the Russian Theme with Annabelle Lyon, one of his original 1934 group, but was unable to stage it. He added the Russian Theme in 1940, by which time the youngest of his original students, Marie-Jeanne, had acquired the virtuosity he wanted to create its leading role. But not as a finale, as in the musical score: instead Balanchine inserted it between the second and third movements, thus erasing one of Tchaikovsky’s most magical transitions, beginning the fourth movement with the same quiet high notes that ended the third.

    How curious: Tchaikovsky ended his Serenade with a high-energy and dance-friendly finale, but Balanchine preferred to close his Serenade with the Elegy, which seldom sounds like dance music. His reason, I think, was dramatic: by ending his ballet with Tchaikovsky’s elegiac penultimate movement, he found a way to conclude the work with a passage into the sublime. A number of Balanchine’s ballets end with the leading character departing for a new world. This is one of them.

    Balanchine’s Serenade is quite as marvelous a work as Tchaikovsky’s score. No, it is even more marvelous. Yet it is not a faithful rendition of Tchaikovsky’s original. Instead the choreographer gave it its own enthralling musical existence. Balanchine took the liberty of revising this score — as he did with scores by several composers, but with none so much as Tchaikovsky — because he was impelled by a dramatic vision. If, as I say, Serenade is the most rewatchable ballet ever made, it is because, from first to last, the work is an exercise in theatrical drama. Its narrative is mysterious but undeniable. The work is an abundant kaleidoscope of changing patterns, images, encounters, communities; a tapestry of stories that movingly suggest fate, love, loss, death, transcendence, and the group’s support for the individual. It is also an object-lesson in ambiguity and metamorphosis.

    When Balanchine arrived in the United States in late 1933, at the invitation of Lincoln Kirstein, he was not particularly associated with pure-dance works. In Western Europe, between 1925 and 1933, he had staged Ravel’s L’Enfant et les Sortilèges, Stravinsky’s Apollo, Prokofiev’s Prodigal Son, the Brecht-Weill Seven Deadly Sins, and other highly singular narratives. Once in New York, he abounded in ideas for new ballets, many of which Kirstein reported in his diary. The projects of which he told Kirstein — sometimes he developed them for days or months — include versions of the myths of Diana and Actaeon, Medea, and Orpheus, an idea of his own named The Kingdom Without a King, a new production of The Sleeping Beauty, Uncle Tom’s Cabin (Virgil Thomson was to compose the score), Brahms’ Variations and Fugue on a Theme by Handel, Schumann’s Andante and Variations, a ballet of waltzes starting with those of Joseph Lanner, and The Master Dancers, a Balanchine idea based upon the story of a dance competition. (Most of these ideas were never fulfilled, though some were probably inklings of dances that Balanchine choreographed much later.)

    Kirstein’s entry for May 6, 1935 gives us a vivid glimpse of the dramatically imaginative workings of Balanchine’s mind in this note about a Medea ballet that never saw the light of day:

    Bal. thought of a new ending for Medea: Her dead body is executed by the troops: told me a story or idea for another pantomime : a court-room where the condemned is faced by a three headed judge. She is two in one like AnnaAnna: As evidence, objects like Hauptmann’s ladder are brought in — The whole crime is reconstructed. She is declared guilty, though innocent… Bal said it shd be like Dostoevski.

    Anna-Anna had been the London title of The Seven Deadly Sins, in which the dancing Anna and the singing Anna express different aspects of the same person. Balanchine never lost this flair for radically reconceiving old stories. In that work and others, he was addressing different layers of being, in much the same way that D.H. Lawrence, when writing The Rainbow, explained to Edward Garnett how it differed from his earlier Sons and Lovers:

    You mustn’t look in my novel for the old stable ego — of the character. There is another ego, according to whose action the individual is unrecognizable, and passes through, as it were, allotropic states which it needs a deeper sense than any we’ve been used to exercise, to discover are states of the same single radically unchanged element. (Like as diamond and coal are the same pure single element of carbon. The ordinary novel would trace the history of the diamond — but I say ‘Diamond, what! This is carbon.’ And my diamond might be coal or soot, and my theme is carbon.) 

    The Balanchine ballets that seem to be reflections solely of their music — the specialty of the long American phase of his career, especially from 1940 onward — do not dispense with narrative. Not at all. They supply multiple narratives or fragmented versions of a single narrative. In one of his last masterpieces, Robert Schumann’s “Davidsbündlertänze,” in 1980, four male-female couples express diverse aspects of Schumann and his relationship with Clara, his wife and muse. As in Liebeslieder Walzer, Balanchine introduced the women in heeled shoes but then brought them back onstage on point, as if setting their spirits free. The complication of having four Roberts and four Claras suggests the tragic splintering of the composer’s tormented and echoing mind: not Anna-Anna but Robert-Robert-Robert-Robert alone with Clara-Clara-Clara-Clara. This is multiple personality syndrome at its most poetic.  

    There are ballets in which Balanchine moves from showing his dancers’ bodies to showing their souls without employing any change of costume or footwear. The outer movements of Stravinsky Violin Concerto, from 1972, the Toccata and the Capriccio, are festive, with four leading dancers (two women, two men) each joined by a team of four supporting dancers. The mood is largely ebullient. But then Balanchine brings the concerto’s two-part centerpiece, Aria I and Aria II, indoors, as it were, as if he were taking us into a marital bedroom for scenes of painfully raw, almost Strindbergian, intimacy. Different male-female couples dance each Aria, though Balanchine may have seen them as different facets of the same marriage.

    The woman of Aria I is amazingly and assertively unorthodox, constantly changing shape, using the man’s support to be even less conventional. In the most memorable image, she does bizarre acrobatics, bending back to place her hands on the floor and then turning herself inside-out and outside-in, fluently flipping through convex/concave/convex shapes in alternation. The duet is an unresolved struggle, not so far from the marital strife of Who’s Afraid of Virginia Woolf? The woman of Aria II, much needier, is more subtly demanding. Stravinsky’s music has a repeated chord that sounds like a sudden shriek. Here the woman strikes an X pose, balanced precariously on the points of both feet with legs and arms outstretched. She is both confrontational and insecure: it may be the most passive-aggressive moment in all Balanchine, as if the wife is demanding his support. As he goes to her, her knees buckle inwards; when he catches them before she crumples further, it seems as if she has mastered the way to pull him back to her assistance. It works. He plays the protective husband that she needs him to be; she is the grateful wife. There are moments of touching harmony between them — one when he shows her a panoramic view with his arm over her shoulder, another when he covers her eyes and gently pulls her head back. The tension between his control and her passivity is part of the scene’s poignancy.

Serenade, too, tells multiple stories, or gives us dramatic situations that we are free to interpret in many ways. Balanchine’s narrative skill is such that few observers follow this ballet without tracing some element of plot in it somewhere. This is a ballet about the many and the one: about how a series of individual women emerge from the larger ensemble, sometimes in smaller groups and occasionally with men, but recurrently supported by the corps. Over the years — as with no other ballet — Balanchine amused himself with the redistribution of roles: there may have been as many as nine soloists in the 1930s performances, but in the 1940s he gave most of the largest sequences to a single ballerina. (Perhaps he privately thought of it as one woman; in his late years he told Karin von Aroldingen that the work could be called Ballerina.) Yet there are moments when we see more than one woman: there are solo roles of great brevity, and in most productions all the women have been dressed identically. Again and again Balanchine makes us ask, Who is this? What is happening to her? At times the answer scarcely matters; at others it matters greatly.

After a string of quasi-narrative situations, the final Elegy has always seemed the most suggestive of plot. It is profoundly moving because of the story it seems to tell. At its start, one woman is lying on the floor as if abandoned, bereft, or even dead. A man is led to her by another, fate-like, woman, who keeps his eyes and chest covered with her hands until he reaches his destination. The woman on the floor is the first person he sees; he is the first person she sees. Balanchine presented the charged moment of their eyes meeting, with the man and woman framing each other’s faces with their arms, as a quotation from Canova’s extraordinary sculpture Psyche Awakened by Cupid’s Kiss, to which he drew the attention of some dancers. Although this Canova quotation was itself derived from Fokine’s Eros, it must have gratified Balanchine that the three chief versions of the statue are to be found in the main museums of the three chief cities of his career: St. Petersburg’s Hermitage, Paris’ Louvre, New York’s Metropolitan Museum of Art.

    What follows between the figures — we might also call them the principal characters — seems like love. But just as Diana of Wales once observed that “there were three of us in this marriage,” so this love is shadowed by the constant presence of the female fate figure known in Balanchine circles as the Dark Angel. Other women pass through, one of them lingering for a while. (The dancer Colleen Neary told me that Balanchine once jokingly likened these three women to the man’s wife, his mistress, and his lover. And added, “Story of my life!”) An unhappy ending ensues. With startlingly swift force, the man lowers his “wife” to the floor. The Dark Angel stands aloof, averting her eyes from this tragic parting. She then returns as the agent of fate, beating her arms like mighty wings; once again she covers his eyes and chest with her hands; and she leads him offstage as if continuing the same diagonal paths by which they entered.

All this is a powerful re-telling of the myth of Orpheus and Eurydice — a myth to which Balanchine returned between 1930 and 1980, using music by Gluck, Offenbach, and Stravinsky, and to which he made many autobiographical connections. In the ancient myth, Orpheus, artist and husband, loses Eurydice when she is bitten by a snake. He is permitted to enter the realm of the dead — the Elysian Fields, the realm of the blessed — and to lead her back to life on condition he does not look at her until they both have reached the world above. At the last moment, however, he looks back, and loses her forever. The Elegy in Serenade prolongs and suspends the bittersweet moment of their eyes’ climactic meeting.

    Unlike Balanchine’s other treatments of the Orpheus story, this one leaves us with Eurydice, the dead Eurydice whom Orpheus has lost a second time. She is left by him on the floor exactly where she had been when he found her. When she rises, she parts her hands before her eyes, as if to ask if it were all a dream. Just at this point, in the confusion of her awakening, she is joined by a sisterhood: a small cortège of the women who have characterized the whole ballet. In grief, she embraces one of them — known as “the Mother” — before kneeling and opening her arms and head to the sky, in a gesture of utmost resignation and acceptance. As the ballet ends, she is carried off like a human icon, by three men, while her sisters and her “mother” flank her. She opens her arms and face to the skies in a backbend as the curtain falls, entering a new plane of existence. It takes several viewings before you realize that when she opens her arms and her head this way to the heavens, she is repeating what all seventeen women did in the ballet’s opening sequence. One reading of Serenade, therefore, is that all of its dramatic narrative is set in the Elysian Fields. Those dancers we see at the beginning are ghosts consecrating themselves, as if saying their vows.

    It is revealing that the eyewitness accounts of the first day’s rehearsal of Serenade differ: not contradicting one another, but concentrating on different facets. Kirstein wrote in his diary:

    Work started on our first ballet at an evening ‘rehearsal class.’ Balanchine said his brain was blank and bid me pray for him. He lined up all the girls and slowly commenced to compose, as he said — ‘a hymn to ward off the sun.’ He tried two dancers, first in bare feet, then in toe shoes. Gestures of arms and hands already seemed to indicate his special quality.

For Balanchine, looking back in the 1950s and 1960s, the compositional issue had been the fortuitous presence of seventeen women. Probably he knew anyway from the music that he wanted them to start with a slow arm ritual — but how do you take this unwieldy prime number, seventeen, and arrange it in space? His brilliantly geometric solution to this arithmetical problem was the diagonally latticed formation, two diamond shapes conjoined. It obviates the usual vertical lines of ballet corps patterns. Each woman commands space like a soloist, with genuine parity. Never mind the Elysian Fields of the dead: this pattern has often seemed like an image of American democracy (which may have seemed Elysian to Balanchine after his experience in Russia and Europe between 1918 and 1933).

    And we have a third source for that rehearsal. Ruthanna Boris — one of those seventeen young women, who stayed in Balanchine’s orbit for many years, dancing the foremost roles in Serenade for the Ballet Russe de Monte Carlo in 1944, and choreographing for New York City Ballet in 1951 — wrote an undated memoir in which she recalled that Balanchine — announcing that “we will make some steps!” — then spoke to the seventeen young women about his life in Russia (“it was revolution, bullets in street”) and his move to Europe.

Little by little his talking became more and more like a report — less conversational, more charged with feelings of anger and distress: “In Germany there is an awful man — terrible, awful man! He looks like me only he has moustache — he is very bad man — he has moustache — I do not have moustache — I am not bad man — I am not awful man!”… It seemed to me he was tasting his words and trying to get past them. To the best of my memory no one knew what he was talking about. We were adolescent and young ballet dancers, mostly American, mostly aware of the dance world, unaware of governmental affairs in the world beyond it….

    Look again at that opening tableau: Balanchine choreographed here as if he too had the alchemist’s stone, transmogrifying the Nazi salute in space until it became a quasi-religious vow.  

    The ritual that follows is similarly an exercise in metamorphosis, every staccato pause on the way taking the dancers further away from politics and danger toward a great openness to experience. In 1927, Paul Valéry had written, in The Soul and the Dance, that dance was “the pure act of metamorphosis,” and no ballet by Balanchine better illustrates the idea than Serenade. The opening upper-body ritual has no logic in terms of moment-by-moment meaning, but it shows us change in action (and then leads to ballet’s logic of turning out the limbs and torso from the body’s center). Balanchine was a practicing Christian, and I like to think the start of Serenade comes close to Paul’s famous words in the first letter to the Corinthians:

    Behold, I shew you a mystery. We shall not all sleep, but we shall all be changed. In a moment, in the twinkling of an eye, at the last trump: for the trumpet shall sound, and the dead shall be raised incorruptible, and we shall be changed. For this corruptible must put on incorruption, and this mortal must put on immortality, then shall be brought to pass the saying that is written, Death is swallowed up in victory.

Balanchine has started his ballet with what could easily be an ending. But this dance prolegomenon abounds in thematic material. Even after hundreds of viewings, we keep noticing how myriad details of what follows — the bringing of a wrist towards the forehead, the sideways pointing of foot and leg, the arching back of the neck — were all introduced here, in the beginning, as a prophecy of the ending.

He took pride in relating how he incorporated rehearsal accidents into this ballet. One day, a girl fell over; he put that into the ballet. Another day, another girl arrived late; he put that in, too. The incidents began to look like a story. Balanchine never worked quite that way again. What was it about Serenade that made him so receptive to chance moments of non-dance? Perhaps he could do so because he could see how those two girls were images of Eurydice. He adjusted the “girl who falls” so that she spins on the spot as if losing control before collapsing to the floor, like Eurydice at the moment of death; and in early performances (particularly in a film of a Ballets Russes performance in 1940) he then presented her supine body as if it were a corpse in its coffin. Likewise the latecomer may simply be Eurydice taking her place among the heavenly dance choir in Elysium. Who can tell now whether these Orphic fancies are truly what Balanchine had in mind?

Certainly Balanchine had hidden imagery that he seldom disclosed. His protégé John Clifford was surprised when Balanchine, during a fierce argument about the fit of movement to music, said that his choreography of the second movement in Symphony in C, the high-classical pure-dance ballet that he created in 1947 to Bizet’s score of that name, was “the dance of the moon… The grands jetés where she gets carried back and forth at one point are supposed to be the moon crossing the sky.” This was not an image that Balanchine had ever given his dancers; but many readers of I Remember Balanchine, which contains the interview in which Clifford recounts this anecdote, have dutifully written of the moon crossing the sky in that sequence. (I still don’t see a moon in those lifts, though I enjoy watching both the moon and Symphony in C.)

Similarly, in 1979, Balanchine coached the dancer Jean-Pierre Frohlich in the first pas de trois of Agon, his masterpiece of 1957. Performed in black and white leotards, tights, and T-shirts to a commissioned Stravinsky score, this work has often seemed a peak of pure-dance radical invention, infusing classicism with a new high-density, “plotless” modernity that moved dance far away from drama and role-playing. Yet Frohlich has recalled that Balanchine explained his role as “the court jester.” For me, this made immediate sense: it did not change my understanding of the work as a whole, but it helped me to define one aspect of its character.

It matters to notice just how Balanchine tells his stories. The interesting thing about the young woman who falls to the floor in Serenade is not the way she falls but the entrance of the corps. Fifteen young women march in on point in five different rows, like radii converging on her, their focal point. Yet they do not rush to console or to help her. In one of the strangest images in all dance theater, they coalesce around her in the shape of a Greek theater, whereupon they simply do staccato arm exercises. Has one dancer fallen? Then the dance will continue with the corps.

    We can also interpret them as another facet of the Elysian sorority around Eurydice. Such a view, however, does not quite explain their formality and their impersonal behavior. Serenade may contain fragments of myths, but it is about a larger process than any myth: the constant subordination of the dancer to the dance. So what happens next? The fallen woman, the dead Eurydice, promptly picks herself up and dances the most difficult jumps in the ballet so far. She explodes in the air only to pounce precisely back down onto the music’s beat.

    As for the episode with the latecomer, what’s dazzling is that her colleagues have all just resumed the ballet’s opening tableau. Sixteen of them stand again just as they did in the beginning, yet they look quite different now: their statuesque immobility is in total contrast to the quietly informal way in which she, the missing seventeenth, traces her way through their ranks. (“Drama is contrast,” said Merce Cunningham.) Just as she takes her place to join them in the ballet’s opening ritual, Balanchine hurls two other masterstrokes. The other sixteen dancers softly turn into profile, beginning slowly to depart, as if leaving her to her destiny. And a man enters, walking toward her with the same inevitability with which they are walking away. Again Balanchine is the master of geometry: the man’s path is a straight diagonal, the corps’ path is a straight horizontal, but both his advent and their exit are focused on her, this innocent latecomer who sees none of them. Even if you do not imagine Orpheus coming to rescue Eurydice from the realm of the dead, you cannot miss how mysteriously fateful this strange scene is. Balanchine fits it perfectly to the final bars of the Sonatina, so that we reach the music’s end in complete suspense.

Another of the strangest features of Serenade is that it abounds in echoes. The “mother” at the end of the Elegy enters from the same corner and along the same diagonal as another woman did in the Sonatina. Five women in the Sonatina dance in a chain that prepares us for five different women who form a chain at the start of the Russian Theme. The man who enters along the long diagonal at the end of the Sonatina prepares us for the other man who enters at the start of the Elegy (Orpheus I and II). The woman who falls in the Sonatina is echoed by one — added in 1940 — who tumbles more spectacularly at the end of the Russian Theme. The mysterious kingdom of Serenade is a land of second chances. And so too, for Balanchine, was America. A serious case of tuberculosis in 1932-1933 had rendered him unable to work for a year. Lincoln Kirstein, after inviting him to America in 1933, kept hearing from Balanchine’s ballet friends that he had a poor life expectancy. Balanchine, left with only one functioning lung, later told a friend, “You know, I am really dead man.” But he lived in his new-found-land for almost fifty years, prodigiously prolific until a few months before his death.

Balanchine liked to envisage himself meeting his composers in the next life. When he died, I wrote an elegiac essay in which I gleefully imagined the scene with all of them waiting by the elevator door to greet him as he arrived and gushing appreciatively about the fabulous things that he had made from their scores. Yet prolonged acquaintance with his ballets now makes me imagine a different scenario. Gluck: “Okay, that’s a beautiful pas de deux he made to the Blessed Spirit music in my Orphée et Eurydice, but surely he could have seen that I meant it as the middle section of a da capo structure! It has to be A-B-A, but he cuts the return to A.” Tchaikovsky, who has quite a list of complaints, begins: “When I wrote my Third Symphony, I took a deliberate risk by giving it five movements. But he cut out the first movement in his Diamonds and made it just another four-movement symphony! Also I never wanted the Siloti edition of my second piano concerto — it tidies up all the irregularities with which I was changing concerto form! His Serenade I will forgive; it’s not my Serenade, but, yes, it is just as beautiful, I can see that now. But why re-order my Mozartiana? And why all that tinkering with my Nutcracker? A genius, yes, but an impossible one.” And so on.

Among all the dead composers impatiently awaiting Balanchine in paradise, I long most to overhear Stravinsky. “George and I were good friends for over forty years — and yet, the very year after my death, he makes all those ballets to my concert music as if I were writing plays about men and women! He uses my music for plots! And my blood boils about what he did to the Divertimento from Le Baiser de la fée. He cuts some of it, he interpolates another bit from elsewhere in Baiser, it’s really quite fraudulent. All right, what he created was quite beautiful — and it is so amazingly dramatic — no way is this a divertimento!” The composer’s aggrieved ghost would be right. Balanchine’s “Baiser” Divertimento is a misnomer. It is too profound for that name.

Stravinsky composed the complete ballet Le Baiser de la fée in 1928. It is his re-telling of Hans Christian Andersen’s story The Ice Maiden as if the protagonist were Tchaikovsky, whose music is employed throughout in a modernist and neo-Romantic collage. The story chillingly illustrates Graham Greene’s point that “there is a splinter of ice in the heart of a writer.” The ballet’s hero is singled out in infancy by the Fairy, who distinguishes him from other mortals by planting a kiss on his brow: a vision of the muse at her most heartless. He becomes engaged to a girl, but the Fairy, often disguised but sometimes revealing herself with terrifying clarity, keeps parting them. The ballet ends with him helplessly following the Fairy into her icy realm while his fiancée is left alone in desolation.

Balanchine first staged this complete Baiser in New York in 1937, at the Metropolitan Opera. For some fifteen years, he kept it in the repertory of the successive companies to which he was attached (the American Ballet, the Ballet Russe de Monte Carlo, New York City Ballet) until, in the early 1950s, he finally dropped it. But Stravinsky had arranged a concert suite of the ballet’s music, Divertimento from Le Baiser de la fée, in 1934, and Balanchine turned to it in 1972, as he created a flood of new ballets in celebration of Stravinsky (who had died the year before). Oddly for a Stravinsky Festival, Balanchine made major structural changes to this score. (Just to call it Suite from “Le Baiser de la fée” would have been more accurate.)

In particular, as the dance scholar Stephanie Jordan first noted in 2003, he introduced, from elsewhere in the complete ballet, the music for what became the most poetically dramatic male solo of his career. The music depicts how the Fairy’s irresistible spell begins to infect the hero. The 1972 solo, beginning with great elegance and formal charm, is an accumulating soliloquy, in which the hero’s conflicting energies and self-contradictory aspirations pour forth with uncanny seamlessness. With no histrionics, he seems both inspired and tormented, changing speed and direction in one dance paradox after another. He pivots on his own axis as if keeling over; he jumps forward while arching back; he punctuates a briskly advancing diagonal with sudden slow turns that gesture upwards and away; he softly circuits the stage with jumps that arrive in slowly searching gestures. It is a completely classical statement within a classical pas de deux, and yet it turns the drama around: it tells us that this hero is no longer the fiancé he was.

In 1974, Balanchine tinkered some more with this already remarkable non-divertimento Divertimento. He now added music from the ballet’s finale, in which Stravinsky makes a heartbreaking arrangement of Tchaikovsky’s famous song “None But the Lonely Heart.” Now, however, Balanchine omitted the Fairy that Stravinsky had signified in this music. The only two leading characters in his drama are the hero and his fiancée, who, though trying to embrace, are repeatedly interrupted by an impersonal line of women corps dancers. Yet this cruel interruption makes less impact than the way the man and the woman now part, evidently forever, as if accepting separation as destiny. They retreat on separate paths that depict them both as figures of tragic isolation. Both walk backward, unable to see where they are going. Slowly they zigzag their ways into ever greater distance from each other, without resistance. Man and woman are sundered, the ballet suggests, not by an external figure of fate but by their own internal impulses, which are just as inexorable. It’s as if, in A Doll’s House, Nora and Helmer had agreed to end their marriage without anyone slamming the door at the end. This ballet begins as a divertimento but ends as a tragic psychodrama; and the progression from plotlessness to plot, from the delight of form to the heartbreak of alienation, proceeds in an unbroken sequence.

To praise Balanchine as the most musical of dancemakers is to persist in a cliché that misunderstands the full magnitude of his achievement. There are technical aspects of music — melody and harmony, in particular — in which his contemporary Frederick Ashton sometimes found more than Balanchine did in the same scores. Yet this does not make Ashton, an artist dear to me, the greater artist. It is better to see Balanchine as an incomparable exponent of Director’s Theater. His musicality was of a far more interventionist kind than has generally been admitted. He was not just the grateful servant of his scores; he imposed his own vision on his music, which was often an intensely dramatic vision, a vision of humans in the fullness of their relations, and where necessary he tweaked his scores to fulfill it. In the vast majority of his ballets, music and dance work in brilliant counterpoint, different voices that combine to dig deep into our imaginations and our nervous systems. Ear and eye collaborate closely and uncannily in a genre of dance theater that, even now, takes us where we had not been before.

    Naming Names

Fiorello La Guardia was a great mayor of New York — he even has an airport named after him — but he made some boneheaded errors. Some years after the Sixth Avenue El in Manhattan was razed, La Guardia and the city council decided to rehabilitate the neighborhoods around the thoroughfare, which had become run down from hosting the elevated train. And so, in October 1945, they officially rebranded Sixth Avenue as Avenue of the Americas.

    City planners must have found the cosmopolitan-sounding name exciting. New York City was emerging as the global capital, on the cusp of the American Century: home to the new United Nations and soaring International Style skyscrapers, a hub of commerce, a dynamo of artistic creativity. But this act of renaming by fiat, against the grain of public opinion, failed spectacularly. A survey ten years later found that, by a margin of 8 to 1, New Yorkers still called the street Sixth Avenue. “You tell someone anything but ‘Sixth Avenue,’” a salesman explained to the New York Times, “and he’ll get lost.” Generations of visitors have noticed signs that still say “Avenue of the Americas” and wondered fleetingly about its genesis and meaning, but for anyone to say it out loud today would clearly mark him as a rube.

Names change for many reasons. While designing Washington, DC in the late eighteenth century, Pierre L’Enfant renamed the local Goose Creek after Rome’s Tiber River. It was a bid for grandeur that earned him mainly ridicule. After Franklin Roosevelt was elected president, Interior Secretary Harold Ickes saw fit to cleanse federal public works of association with the most unpopular man in America, making the Hoover Dam into the Boulder Dam. With independence in 1980, Rhodesia ditched its hated eponym to become Zimbabwe, and its capital, Salisbury, became Harare. When it was conquered by North Vietnamese forces in 1975, Saigon was renamed Ho Chi Minh City, however propagandistic the appellation still sounds. On Christmas Eve, 1963, Idlewild Airport became JFK. In 2000, Beaver College, tired of the jokes, chose to call itself Arcadia. (Et in Beaver ego.) Even old New York was once New Amsterdam.

Like the misbegotten Avenue of the Americas moniker, though, new names do not always stick. Who but a travel agent calls National Airport “Reagan”? Where besides its website is the New York Public Library known as “the Schwarzman Building”? In 2017, the Tappan Zee Bridge formally became the Mario M. Cuomo Bridge, thanks to its namesake’s son, but everyone still calls it the Tappan Zee. (Few knew that for the twenty-three years prior it had been named for former New York governor Malcolm Wilson; in fact, few knew that someone called Malcolm Wilson had been governor.) Everyone also still calls the Robert F. Kennedy Bridge the Triborough and the Ed Koch Bridge the Queensborough.

Political events prompt changes, too. When in 1917 German aggression forced the United States into World War I, atlases were summarily revised. Potsdam, Missouri, became Pershing. Brandenburg, Texas, became Old Glory. Berlin, Georgia, became Lens — but after the war, with the rush to rehabilitate Germany, it reverted to Berlin. (During the next world war this Berlin declined to change its name again, though 250 miles to the northwest, Berlin, Alabama, rechristened itself Sardis.) In 1924, the Bolsheviks saddled splendid St. Petersburg with the chilling sobriquet Leningrad — “after the man who brought us seventy years of misery,” as tour-bus guides tell their passengers. Only with Communism’s demise could city residents reclaim their old appellation.

The revision — and re-revision — of place names is thus a common enterprise. But how and why those in control choose to re-label streets, cities, schools, parks, bridges, airports, dams, and other institutions has always been a strange, unsystematic process — subject to changing social norms, political fashions, historical revisionism, interest-group pressure, the prerogatives of power, consistent inconsistency, and human folly. The current craze for a new public nomenclature, in other words, is far from the straightforward morality play it is often made out to be. How we think about it and how we go about it deserve more deliberation than those questions have received.

Today’s nomenclature battles mostly turn on a specific set of questions: about race and the historical treatment of non-white peoples. Every day, in the United States and abroad, new demands arise to scrub places, institutions, and events of the designations of men and women who were once considered heroes but whose complicity (real or alleged) in racist thoughts or deeds is now said to make them unworthy of civic recognition. Not only Confederate generals, upholders of slavery, and European imperialists are having their time in the barrel. So too are figures with complex and even admirable legacies, as diverse as Christopher Columbus and George Washington, Andrew Jackson and Woodrow Wilson, Junipero Serra and Charles Darwin, David Hume and Margaret Sanger — even, although it sounds like parody, Mohandas K. Gandhi.

What has led us to set so many august and estimable figures, along with the more flagrantly reprehensible ones, on the chopping block? It helps to look at the criteria being invoked for effacement. To be sure, advocates of renaming seldom set forth any clear, careful, and consistent set of principles. Typically, the arguments are ad hoc, each one anchored in some statement, belief, political stance, or action of the indicted individual, the wrongness of which is presumed to be self-evident. But occasionally over the years, governmental committees, university panels, or other bodies have gamely tried to articulate some criteria. Their language is telling.

    One body that recently made plain its standards for naming was a Washington, D.C. mayoral “working group” with the ungainly label “DCFACES.” (An ungainly name is an inauspicious quality in a body seeking to retitle streets and buildings.) That acronym stands for the equally ungainly “District of Columbia Facilities and Commemorative Expressions.” In the summer of 2020, DCFACES released a report declaring that any historical figure would be “disqualified” from adorning a public building or space in Washington, DC if he or she had participated in “slavery, systemic racism, mistreatment of, or actions that suppressed equality for, persons of color, women and LGBTQ communities.” These rules resulted, among other absurdities, in a call to re-label Washington’s Franklin School (which now serves as a museum) because Benjamin Franklin, though a magnificent patriot, politician, democrat, diplomat, writer, thinker, inventor, publisher, and abolitionist, also owned two slaves, whom he eventually freed.

    Here is how the report’s executive summary presents the rules:

    IMPERATIVES

    Commemoration on a District of Columbia asset is a high honor reserved for esteemed persons with a legacy that merits recognition. The DCFACES Working Group assessed the legacy of District namesakes, with consideration to the following factors: 

1. Participation in slavery — did research and evidence find a history of enslaving other humans or otherwise supporting the institution of slavery.

2. Involvement in systemic racism — did research and evidence find the namesake serving as an author of policy, legislation or actions that suppressed persons of color and women.

    3. Support for oppression — did research and evidence find the namesake endorsed and participated in the oppression of persons of color and/or women.

    4. Involvement in supremacist agenda — did research and evidence suggest that the namesake was a member of any supremacist organization. 

5. Violation of District human rights laws — did research and evidence find the namesake committed a violation of the DC Human Rights Act, in whole or part, including discrimination against protected traits such as age, religion, sexual orientation, gender identity, and national origin.

    Several difficulties with this formulation are immediately apparent. For starters, the list is at once too broad and too narrow. It is too broad because phrases such as “support for oppression” are so vague and subjective that they could implicate any number of actions that might be defensible or explicable. It is also too broad because it implies that a single violation is altogether disqualifying, so that someone like Hugo Black or Robert Byrd (both of whom joined the Ku Klux Klan as young men, only to repudiate their actions and go on to distinguished careers) can never be honored.

    At the same time, the lens is also too narrow. Its single-minded focus on sins relating to race and sex (and, in one instance, other “protected traits”) in no way begins to capture the rich assortment of human depravity. A robber baron who was untainted by racist bias but subjected his workers to harsh labor would seem to pass muster in the capital. So would a Supreme Court justice with a clean record on race who curtailed freedom of speech and due process. Dishonesty, duplicity, and cowardice are nowhere mentioned as disqualifying. Neither are lawlessness, corruption, cruelty, greed, contempt for democracy, any of the seven deadly sins, or, indeed, scores of other disreputable traits any of us might easily list.

The Washington mayoral working group was not the first body to set down naming rules focused on racism and other forms of identity-based discrimination. In fact, committees have propounded such frameworks for a long time. In 2016, the University of Oregon, in considering the fate of two buildings, adopted seven criteria that largely dealt with offenses “against an individual or group based on race, gender, religion, immigration status, sexual identity, or political affiliation.” (The Oregon list, to its drafters’ credit, also contained some nuance, adding the phrase “taking into consideration the mores of the era in which he or she lived” and making room for “redemptive action” that the individual might have engaged in.) In 1997, the New Orleans school board proscribed naming schools after “former slave owners or others who did not respect equal opportunity for all.” Few objected when this policy was invoked to exchange the name of P.G.T. Beauregard on a junior high school for that of Thurgood Marshall. More controversial, though, was the elimination of George Washington’s name from an elementary school, no matter how worthy his replacement appeared to be. (He was Charles Richard Drew, a black surgeon who helped end the army’s practice of segregating blood by race.) So the battles now being waged in city councils and university senates, though intensified by the recent racial ferment, long predate the latest protests or even the Black Lives Matter movement of 2014.

    Like so many skirmishes in our culture wars, these go back to the 1960s. That era’s historic campaigns for racial and sexual equality; the widespread criticisms of government policy, starting but not ending with the Vietnam War; the deepening skepticism toward political, military, and religious authority; the blurring of boundaries between public and private; the exposure of criminality in high places; the demise of artistic standards of excellence — all these elements conspired to render quaint, if not untenable, old forms of patriotism and hero worship. Debunking thrived. Not just in the counterculture, but also in the academy, there took hold what the historian Paul M. Kennedy called “anti-nationalistic” sentiment: arguments (or mere assumptions expressed via attitude and tone) that treated the nation’s past and previous generations’ values and beliefs with disapproval, disdain, or even a conviction, as Kennedy wrote, that they “should be discarded from … national life.” Growing up in the 1970s and after, Generations X, Y, and Z were never taught to passively revere the Founding Fathers or to celebrate uncritically the American experiment. On the contrary, we were steeped in dissidence, iconoclasm, suspicion, and wisecracks. At its best, this new adversarial sensibility instilled a healthy distrust of official propaganda and independence of mind. At its worst, it fostered cynicism and birthed a propaganda of its own.

    The thorniest questions of the 1960s stemmed from the challenge, thrown down by the civil rights movement, for America to live up to its rhetoric of equality. “Get in and stay in the streets of every city, every village, and hamlet of this nation,” the 23-year-old John Lewis said at the March on Washington in 1963, “until true freedom comes, until the revolution of 1776 is complete.” With uneven resolve, Americans devoted to human equality have striven to meet the challenge. And this effort has included, crucially, rethinking the past. To highlight and learn about our nation’s history of racial exclusion and discrimination is among the noblest goals we can have in our public discourse, because it is the intellectual and cultural condition of justice: we will not be able to achieve equality without understanding the deep roots of inequality in our society. 

By the 1990s American society had become an irreversibly multicultural one. WASP values, assumptions, priorities, and interpretations of the past could no longer dominate. “We Are All Multiculturalists Now,” declared the title of a somewhat unexpected book by Nathan Glazer in 1997. But with that watershed, Glazer noted, it became necessary to pose a new set of queries (which Americans had indeed been asking for some time): “What monuments are we to raise (or raze), what holidays are we to celebrate, how are we to name our schools and our streets?”

    Probably no group of historical actors has been subject to as much contentious debate as the secessionists who founded the Confederate States of America. Yet by the third decade of the twenty-first century, there was not much of a debate left about their virtues. Arguments for their valor already seem hopelessly antiquated. Partial defenses of Robert E. Lee, of the sort that David Brooks earnestly mounted in the New York Times just five years ago, now induce cringes. (“As a family man, he was surprisingly relaxed and affectionate… He loved having his kids jump into bed with him and tickle his feet.”) Were the Times to publish a piece like Brooks’ in the current environment, the whole masthead would be frog-marched out of the building under armed guard.

The public, or some of it, has now learned that Southerners imposed most of their Lost Cause nomenclature, iconography, and narratives not in innocent tribute to gallant soldiers, but as part of a rearguard racist project of forging and upholding Jim Crow. This new awareness — along with the political agitation of the last decade — has altered how many Americans think about a military base honoring Braxton Bragg or a park memorializing Nathan Bedford Forrest. The Lincoln scholar Harold Holzer confessed last year that statues and place names which “I long regarded as quaint were in fact installed to validate white supremacy, celebrate traitors to democracy, and remind black and brown people to stay ‘in their place.’” It became increasingly incongruous, if not bizarre, to see in a redoubt of suburban liberalism such as Arlington, Virginia, a boulevard evoking the Confederacy’s leading general.

    Still, as the protests in Charlottesville in 2017 showed, Lee retains his champions. Plying his demagoguery that August, Donald Trump — at the same press conference at which he defended the Charlottesville firebrands — warned that if Lee were to be scrubbed from public commemoration, George Washington (“a slave owner”) and Thomas Jefferson (“a major slave owner”) would be next. “You have to ask yourself, where does it stop?” To this slippery-slope argument, many have given a sensible and convincing answer: Lee, Jefferson Davis, Stonewall Jackson, and the others were traitors to their country; Washington, Jefferson, and the founders were not. Removing the former from streets and schools while retaining the latter admits no contradiction. As far back as 1988, Wilbur Zelinsky, in his fascinating history Nation into State, remarked that “as the military commander of an anti-statist cause, there is no logical place for Lee in the national pantheon alongside Washington, Franklin, and others of their ilk,” explaining that Lee entered the pantheon (or stood just outside its gates) only “as an archetypal martyr — the steadfast, chivalrous, sorrowful, compassionate leader of a losing cause.”

    Yet the distinction between traitors and patriots, while perfectly valid so far as it goes, does not answer the big questions. It does not address, for example, whether every last venue commemorating a Confederate must be taken down. Yes, let us lose the Confederate flags and Confederate statuary, and change the place names that keep alive the Lost Cause. But would it be acceptable to keep a handful, for considered reasons? Doing so would show that we know that our history includes the bad along with the good, as all human history does; and it would remind us that our predecessors at times were not able to tell the bad from the good. It would remind us that our country was once riven to the core by a struggle over evil and inculcate sympathy for the difficulty, and the cost, of the struggle. It might also deflate a presentist arrogance that tempts us to think that our current-day appraisals of the past, fired off in the heat of a fight, are unerring and for the ages.

The distinction between traitors and patriots also fails to address the larger and more humane question of whether there is a way, notwithstanding the hateful cause for which the Confederates fought, to extend some dignity to their descendants who renounce the ideology of the Old South but wish to honor forebears who died by gun or blade. In the right context, and without minimizing those forebears’ attachment to an evil institution, this goal should, I think, be achievable. At the Gettysburg battlefield, monuments to Southern regiments stand arrayed opposite those to Northern troops, but in no way does a walk through the austere, beautiful environs suggest an exculpation or a whitewash. To erase any possible doubt, a professionally designed and intelligently curated museum nearby spells out the war’s history, including the centrality of slavery, in cold detail.

And the distinction between traitors and loyalists is insufficient for yet another reason: it speaks only to the period of the Civil War. Outright traitors are a small, discrete subset of those who have come under fire in the recent controversies; the nomenclature wars span much wider terrain. Identifying secession as grounds for censure is fine, but it provides no limiting principle to help us think through, in other circumstances, whose names should and should not remain. It says nothing about Theodore Roosevelt, Winston Churchill, John Muir, Kit Carson, Louis Agassiz, Henry Kissinger, Voltaire, or anyone else.

    Most regrettably, the distinction does not persuade everyone. In addition to the Lost Cause devotees, some on the left likewise deny the distinction. We saw New Orleans retitle George Washington Elementary School back in 1997. When Trump cited Washington in his press conference in 2017, he was unknowingly describing something that had already happened. Could it be that he recalled the campaign at the University of Missouri in 2015 to defenestrate Jefferson, whom students, apparently knowing little about his quasi-marriage to Sally Hemings, excoriated as a “rapist”? Even if Trump was ignorant of these precedents, as seems probable, he must have felt some vindication when protesters in 2020 targeted Abraham Lincoln, Ulysses S. Grant, Frederick Douglass (!), and other assorted foes of slavery. Trump and these leftwing activists agree that the current renaming rage should not “stop” with traitors to the Union. They share a fanatical logic.

    Few participants in the nomenclature wars have reckoned seriously with this slippery-slope problem. The Yale University officials who renamed Calhoun College because its eponym flew the banner of race slavery were well aware that Elihu Yale earned his fortune at a powerful British trading company that trafficked in African slaves. But Yale remains Yale, for now. Similar contradictions abound. Are we to make a hierarchy of hypocrisies? If Woodrow Wilson’s name is to be stripped from Princeton University’s policy school because he advanced segregation in the federal bureaucracy, by what logic should that of Franklin Roosevelt, who presided over the wartime Japanese internment, remain on American schools? If the geneticist James Watson’s name is scratched from his research institution’s graduate program because he believed that racial IQ differences are genetic, why should that of Henry Ford — America’s most influential anti-Semite, who published the Protocols of the Elders of Zion in his Dearborn Independent — remain on the Ford Motor Company or the Ford Foundation? In what moral universe is Andrew Jackson’s name erased from the Democratic Party’s “Jefferson-Jackson” dinners, but Donald Trump’s remains on a big blue sign near the 79th Street off-ramp on the West Side Highway? How can the District of Columbia go after Benjamin Franklin and Francis Scott Key but not Ronald Reagan, whose name adorns the “international trade center” downtown? It is not a close contest as to who made life worse for the city’s black residents.

    The problem with the contemporary raft of name alterations is not that historical or commemorative judgments, once made, cannot be revised. Change happens. It may have been silly for the Obama administration to rechristen Mt. McKinley “Denali,” but it was not Stalinist. The real problem (or one problem, at any rate) is that no rhyme or reason underwrites today’s renaming program. Like the social media campaigns to punish random innocents who haphazardly stumble into an unmarked political minefield, the campaign of renaming follows no considered set of principles. It simply targets whoever wanders into its sights.

    If we wish to impose some coherence on the Great Renaming Project, a good first step would be to create a process of education and deliberation. Our debates about history generally unfold in a climate of abysmal ignorance. How much is really known about the men and women whose historical standing is now being challenged? What matters most about their legacies? Were they creatures of their age or was their error perfectly evident even in their own time? What harm is perpetuated by the presence of their name on a street sign or archway? The answers are rarely straightforward.

    In many public debates, the participants know little about what the men and women under scrutiny did. In April 2016, a Princeton undergraduate and stringer for the New York Times wrote incorrectly in the paper of record that Woodrow Wilson “admired” the Ku Klux Klan. The next day the paper ran a letter correcting the error, noting, among other facts, that in his History of the American People Wilson called the Klan “lawless,” “reckless” and “malicious”; but just two weeks later another stringer, one year out of Yale, parroted the same mistake. That even Ivy-educated youngsters got things so wrong should not be surprising. The undergraduates I teach tend to know about Andrew Jackson’s role in Indian Removal, and that he owned slaves. But most know little of his role in expanding American democracy beyond the elite circles of its early days. Millions of young people read in Howard Zinn’s A People’s History of the United States about the horrors that Columbus inflicted on the Arawaks of the Caribbean. But Zinn was rebutting the heroic narratives of historians like Samuel Eliot Morison, whose Columbus biography won a Pulitzer Prize in 1943. How many students read Morison anymore? How many have a basis for understanding why so many places in North America bear Columbus’ imprint in the first place? Were all those places consecrated to genocidal conquest? Without efforts to educate the young — and the public in general — about the full nature of these contested figures, the good and the bad, the inexorable complexities of human thought and action, these debates will devolve into a simplistic crossfire of talking points.

    On occasion, mayors, university presidents, and other officials have recognized that a process of education and deliberation is necessary before arriving at a verdict on a controversial topic. In 2015, Princeton University came under renewed pressure to address the racism of Woodrow Wilson, who was not only America’s twenty-eighth president but a Princeton graduate, professor, and, eventually, a transformational president of the college. At issue was whether to take his name off the university’s policy school, a residential dorm, and other campus institutions (professorships, scholarships, book awards, etc.). Desiring a process that was democratic and deliberative, the president of the university, Christopher Eisgruber, convened a committee. Multiracial and multigenerational in composition, it included members of the board of trustees, Wilson experts, higher education leaders, and social-justice advocates. It solicited the views of students, faculty, staff, and alumni. Historians wrote long, thoughtful, well-researched letters weighing the merits of the case. Some 635 community members submitted comments through a dedicated website (only a minority of whom favored eliminating Wilson’s name).

The committee weighed the evidence, which included the record not just of Wilson’s deplorable racism but also of his undeniable achievements. Although many students today know little about Wilson besides the racism — which, we must be clear, went beyond private prejudice and led him to support Cabinet secretaries Albert Burleson and William McAdoo in segregating their departments — he was for a century considered one of America’s very best presidents. Wilbur Zelinsky, in his meticulous study, called Wilson “one of four presidents since Lincoln whom some would consider national heroes” (the others being the Roosevelts and John F. Kennedy). Wilson could claim in his day to have enacted more significant progressive legislation than any president before him; since then, only Franklin Roosevelt and Lyndon Johnson have surpassed him. Wilson also built upon Theodore Roosevelt’s vision of a strong presidency to turn the White House into the seat of activism, the engine of social reform, that it has been ever since. Nor was Wilson successful just domestically. He was a historic foreign-policy president, too, and a winner of the Nobel Peace Prize. After exhausting all bids for peace with Germany, he reluctantly led America into World War I, which proved decisive in defeating Teutonic militarism, and he pointed the way toward a more democratic and peaceful international order — though, crippled by a stroke and his own arrogance, he tragically failed to persuade the Senate to join the League of Nations, leaving that body all too ineffectual in the critical decades ahead.

    The Princeton committee’s fair-minded report was adopted by the Board of Trustees in April 2016. It recommended keeping Wilson’s name on the buildings. But Eisgruber and the board of trustees simultaneously promised that campus plaques and markings would henceforth provide frank accounts of Wilson’s career and beliefs, including his racism. More important, the university would, it said, take bold steps in other aspects of campus life to address the underlying grievance: that many black Princetonians do not feel they are treated as equal members of the campus community. And there the matter rested, until 2020. Following the Memorial Day killing of George Floyd by a Minneapolis policeman, protests erupted nationwide calling for police reform and other forms of racial justice — including, once again, the reconsideration of names. This time Eisgruber launched no deliberative process, appointed no diverse committee, solicited no external input, convened no searching conversation. He simply declared that the Board of Trustees had “reconsidered” its verdict of a few years before. His high-handed decree, more than the ultimate decision, violated the principles on which a university ought to run. For Eisgruber, it also gave rise to some new headaches: in what can only be seen as an epic troll, Trump’s Department of Education opened an investigation into whether Princeton’s confession of rampant racism meant it had been lying in the past when it denied engaging in racial discrimination.

Curiously, at the same time as Princeton banished Wilson, Yale University also performed a banishment — this one with regard to John C. Calhoun, whose name graced one of its residential colleges. But there were crucial differences between the two cases. Although Calhoun has been recognized as a statesman, grouped with Henry Clay and Daniel Webster as the “Great Triumvirate” of senators who held the nation together in the fractious antebellum years, he is a far less admirable figure than Wilson. He made his reputation as a prominent defender of slavery and a theorist of the nullification doctrine that elevated states’ rights over federal authority — a doctrine that later provided a rationale for Southern secession. But beyond the huge political differences between Wilson and Calhoun are the differences in the processes that Princeton and Yale pursued. Princeton jettisoned a deliberative decision to implement an autocratic one. Yale did something like the reverse.

    Following the Charleston massacre of 2015, the president of Yale, Peter Salovey, told his campus that Yale would grapple with its own racist past, including its posture toward Calhoun. Then, the following spring, he declared that after much reflection on his part — but no formal, community-wide decision-making process — Calhoun would remain. Salovey contended, not implausibly, that it was valuable to retain “this salient reminder of the stain of slavery and our participation in it.” To get rid of Calhoun’s name would be to take the easy way out. At the same time, Salovey also announced (in a ham-handed effort to balance the decision with one he expected students and faculty would like) that one of Yale’s two new residential colleges would be named for Pauli Murray, a brilliant, influential, underappreciated midcentury civil rights lawyer who was black and, for good measure, a lesbian.

Students and faculty rebelled. Salovey backtracked. He now organized a committee, chaired by law and history professor John Fabian Witt, to tackle the naming question systematically. Wisely, however, Salovey charged the committee only with developing principles for renaming; the specific verdict on Calhoun would come later, decided by still another committee, after the principles were set. To some, the whole business seemed like a sham: it was unlikely that after vowing to take up a question a second time he would affirm the same result. Still, the exercise of formulating principles — in the tradition of a storied Yale committee that the great historian C. Vann Woodward led in the 1970s to inscribe principles for free speech on campus — was worthy, and Salovey populated the Witt Committee with faculty experts on history, race, and commemoration. Even more than the Princeton report, the Witt Committee’s final document was judicious and well-reasoned. When, in 2017, Yale finally dropped Calhoun’s name from the residential college, no one could accuse the university of having done so rashly.

     

    Deliberation by committee, with democratic input, may be necessary to ensure an informed outcome on a controversial subject, but as the example of DCFACES shows, it is not always sufficient. Setting forth good principles is also essential. One mistake that the Washington group made was in asking whom to disqualify from recognition, rather than who might qualify. Historians know that the categories of heroism and villainy are of limited value. Everyone is “problematic.” And as Bryan Stevenson likes to say, each of us is more than the worst thing we have ever done.

    Thus if we begin with the premise that certain views or deeds are simply disqualifying, we have trouble grasping the foolishness of targeting Gandhi (for his anti-black racism), Albert Schweitzer (for his racist and colonialist views), or Martin Luther King, Jr. (for his philandering and plagiarism). In any case, how can we insist that racism automatically denies a historical actor a place in the pantheon when the new reigning assumption — the new gospel — is that everyone is (at least) a little bit racist? We all have prejudices and blind spots; we all succumb to stereotyping and “implicit bias.” By this logic, we are all disqualified, and there is no one left to bestow a name on the local library.

A more fruitful approach is the one that the Witt Committee at Yale chose: to ask what are the “principal legacies” of the person under consideration, the “lasting effects that cause a namesake to be remembered.” We honor Wilson for his presidential leadership and vision of international peace. He is recognized not for his racism but in spite of it. We honor Margaret Sanger as an advocate of reproductive and sexual freedom, not for her support of eugenics but in spite of it. Churchill was above all a defender of freedom against fascism, and the context in which he earned his renown matters. Of the recent efforts to blackball him, one Twitter wag remarked, “If you think Churchill was a racist, wait until you hear about the other guy.” Not everything a person does or says is of equal significance, and people with ugly opinions can do great things, not least because they may also hold noble opinions.

Principal legacies can evolve. They undergo revision as people or groups who once had little say in forging any scholarly or public consensus participate in determining those legacies. It may well be that by now Andrew Jackson is known as much for the Trail of Tears as for expanding democracy, and perhaps that is appropriate. Arthur M. Schlesinger, Jr., made no mention of Indian Removal in his classic The Age of Jackson in 1945, but by 1989 he had come to agree that the omission was “shameful,” if all too common among Jackson scholars of the 1940s. But as the Witt Committee noted, our understandings of someone’s legacies “do not change on any single person’s or group’s whim; altering the interpretation of a historical figure is not something that can be done easily.” For all that Americans have learned about Thomas Jefferson’s racial views and his slaveholding in recent decades, his principal legacies — among them writing the Declaration of Independence, articulating enduring principles of rights and freedom, steering a young country through intense political conflict as president — remain unassailable. We will have to learn to live with all of him.

    The Witt Committee also asked whether the criticisms made of a historical figure were widely shared in his or her own time — or if they are a latter-day imposition of our own values. The difference is not trivial. As late as 2012, when Barack Obama finally endorsed gay marriage, most Democrats still opposed the practice. But norms and attitudes evolved. Today most Democrats think gay marriage unremarkable, and the Supreme Court has deemed it a constitutional right. It might be fair to condemn someone who in 2020 seeks to overturn the court’s decision, but it would be perverse to label everyone who had been skeptical of gay marriage ten years ago a homophobe or a bigot. Historians must judge people by the values, standards, and prevailing opinions of their times, not our own. No doubt we, too, will one day wish to be judged that way. Yet the pervasive impulse these days to moralize, to turn analytical questions into moral ones, has also made us all into parochial inquisitors.

    It is also worth asking what harm is truly caused by retaining someone’s name, especially if the person’s sins are obscure or incidental to his reputation. Many buildings and streets commemorate people who are largely forgotten, making it hard to claim that their passing presence in our lives does damage. A federal court barred Alabama’s Judge Roy Moore from placing a giant marble Ten Commandments in the state judicial building, but the phrase “In God We Trust” is allowed on coins because in that context it is considered anodyne and secular — wallpaper or background noise — without meaningful religious content. By analogy, most place names hardly evoke any associations at all. They are decorations, mere words. The State University of New York at Buffalo removed Millard Fillmore’s name from a campus hall because Fillmore signed the Fugitive Slave Act. But it is doubtful that Fillmore’s surname on the edifice had ever caused much offense, for the simple reason that almost no one knows anything about Millard Fillmore.

    Then, too, as Peter Salovey initially suggested about Calhoun, a person’s name can sometimes be a useful and educational reminder of a shameful time or practice in our past. In 2016, Harvard Law School convened a committee to reconsider its seal, which depicted three sheaves of wheat and came from the family crest of Isaac Royall, a Massachusetts slaveowner and early benefactor of the school. While the committee voted to retire the seal, historian and law professor Annette Gordon-Reed and one law student dissented, arguing that keeping the seal would serve “to keep alive the memory of the people whose labor gave Isaac Royall the resources to purchase the land whose sale helped found Harvard Law School.” Historical memory is always a mixed bag — if, that is, we wish to remember as much as we can about how we came to be who we are. Sometimes, a concern for history is precisely what warns us not to hide inconvenient or unpleasant pieces of the past.

    Often context can serve the purposes of promoting antiracism or other noble principles better than erasure. Museums and other forms of public history are experiencing a golden age. Historic sites that once lacked any significant information for tourists are being redesigned to satisfy the hungriest scholar. Plaques, panels, touch-screen information banks, and other displays can educate visitors about the faults and failings — as well as the virtues — of the men and women whose names appear on their buildings and streets. Addition — more information, more explanation, more context — may teach us more than subtraction. But even here, there are limits. A recent show of Degas’ opera and ballet pictures at the National Gallery did not mention that he was a virulent anti-Semite. Should we care? If the museum had “contextualized” the tutus with a wall caption about Captain Dreyfus, the information would not have been false, but it would have been irrelevant, and in its setting quite strange. We don’t need asterisks everywhere.

    Above all, renaming should be carried out in a spirit of humility. The coming and going of names over the decades might inspire in some a Jacobin presumptuousness about how easy it is to remake the world. But what it should more properly induce is a frisson of uncertainty about how correct and authoritative our newly dispensed verdicts about the past truly are. “We readily spot the outgrown motives and circumstances that shaped past historians’ views,” writes the geographer David Lowenthal; “we remain blind to present conditions that only our successors will be able to detect and correct.” Public debates and deliberation about how to name our institutions, how to evaluate historical figures, and how to commemorate the past are an essential part of any democratic nation’s intellectual life and political evolution. Our understandings of our history must be refreshed from time to time with challenges — frequently rooted in deeply held political passions — to widely held and hardened beliefs. There are always more standpoints than the ones we already possess. Yet passions are an unreliable guide in deriving historical understanding or arriving at lasting moral judgments. In light of the amply demonstrated human capacity for overreach and error, there is wisdom in treading lightly. Bias is everywhere, even in the enemies of bias. Nobody is pure.

    The Student

    He acts it as life before he apprehends it as truth.
    RALPH WALDO EMERSON

    Entering an unfamiliar classroom for the first time, met by a cacophony of greetings, shuffles, and the flutter of unsettled nerves, a student experiences a particular strain of vertigo — a kind of thrownness. Unbalanced, she glances about, wondering if her fresh peers are already friends, if they know or care more about the subject than she does, if the professor will command attention or beg for it. She wonders also about the subject — how it will stretch or resist or entice her; and what personal qualities, as well as intellectual qualities, she ought to bring to her studenthood. She must wait for an internal order to develop, and for the nerves to slow gently into a new rhythm. The experience catapults her from the grooves of ordinary life. She has the sensation of a swift transit.

    That is what learning is meant to do. The developments that will occur in that homely but exotic room over those few months ought to confuse, not confirm, her. Each time she enters the classroom she must again try to recapture the vertigo and recover the instability — to distance herself from herself. She cannot learn, or learn well, if she conceives of that place and those hours as a sphere in which to calcify who she already is. Alienation is essential to study. The classroom is a community of the alienated. Genuine learning demands courage and adventure. The room must be a realm apart, a space with a strange energy and a different gravity — a foreign country, populated by real and imagined strangers. Discomfort is its air.

    The comfort of one’s own couch, then, is a bad place to set up school. And so the question arises: Is remote learning possible? Is the setting of study a matter of indifference to the activity of study? The question was relevant before Covid-19 bleakly introduced the age of Zoom. In the United States over the past fifteen years, enrollment in online courses has more than quadrupled. This meteoric trend was a response to the equally monumental and endlessly mounting cost of college for the average student. In America, higher education now costs students thirteen times what it did forty years ago, and that price has swelled while state funding for public universities has decreased. As tuition has risen, returns on investment have dropped. The pioneers of MOOCs — “massive open online courses,” for those born too late to remember the old country in which they required introduction — explained that this disconnection is due to the uselessness of traditional curricula for the contemporary workforce and the “revolution in work.” All this reading and writing, all this training in thought — all this humanistic exploration — seemed impractical, and practicality has increasingly become the standard of judgment. If not for a job, then for what? And so they developed cheaper, skills-based models. Those models are online, a “convenience” which proclaims that the classroom, like the libraries cluttering university campuses, is redundant, and even archaic.

    For years now, Coursera has offered a fully online master’s degree from the University of Pennsylvania in computer and information technology for one-third of the cost of the on-campus version. MIT boasts a supply chain management degree that begins with an online segment on edX (a global non-profit founded in 2012 by Harvard and MIT). Similarly, Arizona State University’s Global Freshman Academy kicks off with a virtual first year. In both the Arizona State and MIT programs students complete the initial leg of their degree online and then are invited to apply for the on-campus portion at a fraction of its usual price. edX, like most similar platforms, considers education the process through which students are armed with tools to earn money. From its website: “[we are] transforming traditional education, removing the barriers of cost, location and access…. [our students are] learners at every stage, whether entering the job market, changing fields, seeking a promotion or exploring new interests.” It tells us “edX is where you go to learn.” A professionalized application of the term, to be sure; but because of the overwhelming success and reach of these platforms, they have largely succeeded in redefining “learning” and “education.”

    Anant Agarwal, the founder of edX, called 2012 “the year of disruption” for higher education. Disruption indeed, and on what a scale! In its first year, edX had 370,000 students. Coursera, founded in January of 2012, reached over 1.7 million students within just a few months, and in the same stretch of time formed partnerships with thirty-three of the most elite institutions in higher education, including Princeton, Brown, Columbia, and Duke. Often when people talk about education now, they mean education as edX defines it. And when they talk about it in the years of the pandemic, they may be referring to the only pedagogical means possible. Imagine life in lockdown without the internet! And yet one must ask, in this field as in many other fields of contemporary life, at what price convenience?

    Of course, a certain kind of learning can be done online. Knowledge comes in many types and has many purposes and brings many satisfactions, and many people will find that better jobs and better lives will result from the acquisition of what can be obtained digitally. These are not trivial considerations. But the technological expansion of educational resources may also come with a significant cost. The critique of the digitalization of life is not Luddism. It is the only responsible way to reap the benefits of digitalization, and it is an intellectual duty now that there is no going back. It would be foolish not to utilize the new technological opportunities, except when we utilize them foolishly. So what, exactly, can a screen capture and transmit, and what can it not capture and transmit? If we are serious about the supreme value of education for the individual and society, we must not passively acquiesce in every online excitement and remain worshippers in the church of “disruption.” It sounds almost silly to say, and yet many realms of contemporary life often ignore this truth: there are significant things that numbers cannot measure.

    One way to evaluate the new technology is by the old purpose. If the new technology cannot serve the old purpose, and if we continue to believe in the old purpose, then the new technology must be judged by its limits. Learning has a long history, at all its levels. We know a lot about it. And by the standard of what we know about it, we have reason to ask whether digital learning is, strictly speaking, learning at all. Perhaps, owing to the constraints it imposes upon the student and the teacher, it is something else entirely: perhaps it is merely training, the communication of useful information, which may be structurally similar to learning, but which has the opposite mental and spiritual effect.

    That there is a difference between learning and training, between meanings and skills, has been noticed before, in a variety of traditions and eras. Here is an ancient example. The distinction is alluded to in the opening chapter of Pirkei Avot, or Ethics of the Fathers, a tractate of the Jewish legal text known as the Mishna. This particular tractate has no laws; it is an anthology of rabbinical wisdoms. Here it is twice stated “aseh lecha rav,” or “make for yourself a teacher” — in the sixth article of the first chapter, “make for yourself a teacher, acquire for yourself a friend, and give every person the benefit of the doubt”; and again, ten articles later, “make for yourself a teacher, and avoid confusion, and do not become accustomed to estimating tithes.” That the imperative appears twice indicates — I am reasoning here in the old Talmudic way — that each instance must refer to a different type of authority.

    In both cases the word “rav” is used. This is the traditional term for the figure to whom one turns for legal rulings, and also for the teacher with whom one studies. The same person can serve both functions and traverse the distance between the two roles. Less arcanely, think of a professor who offers expert insight to a journalist before meeting with a student about her doctoral thesis: she could have been discussing the same subject in both places, but her tonal shift, and the change in the scholarly level of her intervention, would be considerable. In both roles she wields authority, but while speaking to the journalist her authority is meant to be the final word, whereas with her student it ought to stimulate curiosity and conversation.

    The rav who is discussed in the latter dictum has the sort of authority that obliterates doubt. This figure gives rulings, and dispositive answers to practical questions, and the listener takes note and acts accordingly. The students of this rav are not provoked, they are steadied. They have heard the stabilizing certainties of an expert — no thought is required of them, just trust and a willingness to follow instructions. This authority is different in kind from the first sort of rav, the one mentioned just before the “friend” and just after an article that treats of relations between wives and husbands. Extrapolating from this sequence — husband-wife; student-teacher; friend-friend — the rabbis establish that, after family, this sort of teacher-student relationship is the most intimate form of companionship, more intimate even than friendship.

    In both these articles the same unexpected verb is used: one must make a teacher. There are, predictably, centuries of argument in the Jewish tradition over exactly what this making means. Teachers are not found, they are made; and not only are they made, but they are made together with people other than themselves — by their students. Note that “make” is immediately distinguished from “acquire” (“make for yourself a teacher, acquire for yourself a friend”). “Acquire” intimates that a friend comes readymade, as it were — prepared for friendship. Both partners decide independently to commence the friendship. But a teacher cannot be a teacher unless he (the Mishna assumed that all students and teachers were male) is made into one by a student. This is a more radical and obscure observation than that one becomes a teacher only through teaching, by means of practice. It is well known that no textbook or graduate study can inculcate the peculiar sensitivities that a teacher must develop: only the work of teaching does that. But the Mishna makes a stranger and more stringent demand upon the teacher: he owes his status to a collaboration. His pedagogical certification derives from a personal relationship with the individual who comes to him for knowledge. Closeness and trust, intimacy and vulnerability: these are the terms of teacher-making.

    These conditions are not optional but obligatory: the article also establishes that it is the student’s duty to make a teacher. One must not simply wait for a teacher to turn up, and one must not try to learn alone. Maimonides, whose reading of the ancient injunction is echoed by subsequent commentators, strikingly declared that the student must secure a teacher even if the teacher is not intellectually superior to the student. Not your equal or your better; just your interlocutor. This is an extraordinary refutation of our commonplace assumptions about pedagogical qualifications. This ideal of study is not hierarchical, it is dialogical. (The Jewish tradition has plenty of hierarchical reverence for teachers in other places.) Dialogical study is always superior to solitary study. In a significant sense, solitary study is oxymoronic.

    If a teacher does not have to be smarter than his student, then cleverness and even erudition are not the most important qualities in the setting of study, or in the classroom. What matters most, it seems, is that it be a human encounter, an exchange of intellectual electricity. Maimonides’ notion has humbling implications for both teachers and students. Clearly, it humanizes the teacher, whom we may otherwise be tempted to cast as an infallible sage. In this scenario of study, the teacher, too, is vulnerable. And it also reminds the young and the bright that precocity is beside the point: in the classroom, obtaining knowledge and understanding not yet acquired is the overriding objective. One must not come to class eager to glitter. A student who is mesmerized by her own rhythms and insights will not grasp the subject and enter its new world, which is what study is. Better to be empty and attentive than clever and ahead. Learning is travel. “When you travel,” Elizabeth Hardwick observed, “your first discovery is that you do not exist.”

    All of which is to say that education, I mean of the deepest questions and themes, is first and foremost an experience.

    The difference between the first aseh lecha rav and the second is the difference between training, which transmits a practical skill, and learning. Skills make one useful; they provide the security of a straightforward purpose. The goal of training is problem-solving; and since life is full of solvable problems, two cheers for training. But not all of our problems are of the solvable, or easily solvable, or obviously and familiarly solvable, kind. Problems of meaning do not have technical or replicable solutions. Learning, therefore, is the opposite of training. It is a different sort of preparation for a different sort of difficulty. Learning acclimates students to the looming awareness that life is not governed by simple laws clearly stated. It is messy, murky, essentially contested, often mysterious. In the realm of meaning, neatness is not natural. (Though there have been philosophers who have thought otherwise.)

    It is certainly possible for trainees to train in the spirit of study — for example, through the rigors and drudgeries of a legal education a law student can be stimulated by the philosophical implications of her casebooks. It is also possible for disciples to study in the spirit of training: to master the weeds and memorize the footnotes. This is Casaubonism, or humanism degraded, robbed of its soul — in sum, humanism minus doubt. True study does not obliterate doubt. The longer one spends inside a new world, the more acutely one recognizes that there are facets of it that can never be wholly penetrated. And the deeper into the world one goes, the more exasperating and incontrovertible that truth becomes. Moreover, the eventual comparison of another world with our own is itself one of the classical sources of doubt. Authority in a field does not confer certainty, as the greatest scholars know.

    It is impossible to become comfortable in an alien world without a guide — it is impossible to learn without a teacher. Even Emerson, the learner par excellence, whose enchanted mind thrived in unbalanced confusion and ecstatic chaos, had teachers whom he imitated, revered, differed with, and finally abandoned — but only after having been transformed. Emerson, to be sure, was a genius — but again, a teacher does not have to be smarter than her students. She simply has to have knowledge that they do not have, and a willingness to deliberate together. The distance between what a teacher knows and what a student knows will always be considerably smaller than the distance between what a teacher knows and what it is possible to know. No matter how many books and manuscripts and archives a scholar discovers and masters, there will always be secrets unknown, always someone who knows something the expert does not (even if this other person knows less than she does). And so the amount of information she has mastered will never be as essential to a learner as the attitude she has towards what is strange. It is the development of this attitude, an acquired openness, that all learners have in common.

    The objective of study is not self-expression. A genuine student must quiet her own rhythm in order to focus intensely on the rhythms of an alien system — another person, another religion, another civilization — they all have their own rhythm. Still, quieting one’s own is not the same as forgetting it. A student is not a blank slate, she brings her experiences with her to the classroom; it is after all her own mind, her own self, that she is cultivating by means of study. But she does not hold them at the forefront of her mind while she works. She must never find herself more interesting than what she studies. If she captivates herself, she is captive to herself. She is self-shackled. Instead she must strain to allow her subject to set the pace of study. If she is to understand thoughts that are not her own and lives that are not her own, the question that she must ask is how they are different from her, not how they are the same.

    The exploration of what is alien is not always exciting. In some stages of study it will almost certainly be tedious. Everything worth understanding demands discipline. There will be drills: amo, amas, amat, amamus, amatis, amant, flashcards, charts, red pens. These drills are not stimulating, but serious intellectual stimulation is impossible without them. They are the humanist’s training — training-for-learning, training that is only preparatory, that makes the student fit for the transit to a different and non-utilitarian plane. Drills are not learning, the way stretching is not running, but try running without stretching. 

    The result of this training for learning is a ready mind, a mind primed for and open to the unfamiliar and the alien. These monotonous exercises are the scaffolding that will hold and support the new universe into which the student ventures. Openness is finally the greatest quality of the learner. A student who is constantly comparing an alien grammar to the grammar to which she is accustomed will never experience the tingly mental reorganization particular to thinking in and about a new vocabulary. This openness is a peculiar kind of emptiness: it is rigorous emptiness, well-equipped and well-appointed, a tensed readiness to be filled in. It withholds judgment only so as to judge more correctly later, which is especially necessary when studying ideas or figures for which the student lacks natural sympathy. After all, the only negative evaluation that has intellectual integrity is an evaluation made after an intimate understanding has been developed — in the way, for example, that Isaiah Berlin for decades dedicated himself to the study of his intellectual opposites.

    Why is this capacity useful? The question is often asked. It is a reasonable question, insofar as people deserve to be given reasons for humanistic exertions, but it is also a crass question, because it makes utility paramount. Answers have been given to the question on its own grounds: that the study of art, history, and philosophy can make the difference between brilliant lawyers, politicians, and doctors and ordinary ones, because the more professionals know about human existence, the wiser they will be when their professional activities may require a gloss of wisdom. All this is true and familiar: these are the apologias that adorn the welcome catalogs of liberal arts departments. These practical rationales for humanistic study are further proof of the infiltration and triumph of edX’s flattened “education.” The defense of learning in the terms of training, the justification of the humanities in economic and vocational terms: this is the hemlock that the humanities (and the arts more generally, starved for funds) now serve and swallow. Recall the English majors now flourishing at McKinsey. No, learning for its own sake is the only justification that treats the subject on its own terms — and so learning for its own sake is the only sake there is. In that spirit we may gladly acknowledge the social and personal “utility” of humanistic pursuits, as it is presented by writers and historians and philosophers, since it will inevitably inform and enrich the lives of students and teachers. Anyway, spiritually speaking, the enrichment of human life is useful.

    The obsession with outcomes is hard to resist in an outcomes-based culture. It may penetrate the most impractical of pursuits. In her admirable book Lost in Thought, Zena Hitz, a tutor at St. John’s College, bears witness to one iteration of this phenomenon: “[as a professor] my focus shifted — without my noticing — to the outcomes of my work rather than the work itself. I had lost much of the ability to think freely and openly on a topic, concerned lest I lose my hard-won position in the academic social hierarchy.” Her lament brings to mind Nietzsche’s strictures about the professionalization of philosophy. “It is probable,” he wrote in 1874 in On the Use and Abuse of History for Life, “that [a professionalized philosopher] will attain cleverness, but he will never attain wisdom. He compromises, calculates, and accommodates himself to the facts.” He conducts research in order to publish, which he does in order to maintain a reputation for publishing, which he does in order to keep his job. The wonder and the vertigo disappear from his work.

    Pardon the unreconstructed idealism, but there are higher reasons.

    “What do you think about translation?”

    She asked me that question a few months after we met. In that time I had developed a familiarity with the cadence of her thoughts, so different from mine, gentle and complicated, and always swaying, studying, interpreting. This ruminative cadence was the first thing I noticed about her. I knew she would introduce me to a new rhythm, a different pace of thought. My pace unnerved her: it was too fast and forward, she got spooked. Slow down, slow down. It was difficult for me to slow down. I wanted to learn it from her. Too early, and incessantly, I would ask her the questions that occupied me because I wanted to hear them played back at her tempo. It would transform them, make them strange, open them up. Even the words we both use we do not use in the same way. She has cultivated her own relationships with language.

    “What do you mean?”

    (It was an act of generosity that she answered me instead of concluding that I wouldn’t be able to understand her, and then withdrawing from me. That is a particularly bitter kind of rejection. Once, years ago, a man pulled back from me and muttered, “No, no, I shouldn’t have tried to tell you.” I remember where I was standing when he said that.)

    “I mean — well, if you’re in love with someone and he’s asked you to explain a thought that you’ve had, or a fear or anxiety or any example of the many sorts of things that are specific to you, but you know he can’t understand it because it’s the kind of thought he wouldn’t have or even have imagined was possible (not because he’s stupid or self-centered, but because it just isn’t within his framework), you have to translate it for him. Is that bad? If he can’t understand it, does that mean he can’t understand me? That he can’t really love me if translation is necessary? … I suppose it’s all a question of degree.” (It was so characteristic that she added that last thought, a signature suffix.)

    Her trust reminds me of an exchange I had with a writer who asked me whether her use of esoteric language, of arcane foreign words, in an essay that she had written made it incomprehensible to uninitiated readers. I reread it and responded: Many of the terms you used felt foreign, like the language of an alien tradition or an exotic religion. I like that feeling. For the duration of your essay I could develop an acquaintance with the rhythms of the tradition of which you are an emissary. It is the rhythm that would have been lost in translation. You were right to be uncompromising about a taste of the original. Since you didn’t define those words, which would have ruptured or mangled their melody, their verbal music remained intact, even if I couldn’t explain in my own language exactly what you were saying. If someone who has never danced asks you what the sensation of dancing is like, the best you can do is show them. I trusted that you would compose your essay in such a way that it would eventually allow me to understand your meaning, and I was grateful that you trusted me to savor what I did not yet understand. You worry about uninitiated readers, but your essay is their initiation, and initiation is education.

    But books are not people. Isn’t reading a form of remote learning, too? Isn’t a page somewhat like a screen — a blank surface for language to occupy?

    Emerson was a radical reader. Ravenously he sucked the souls of writers out of their books. His great biographer Robert Richardson marveled that “it sometimes seems as though no book published from 1820 until his death evaded his attention completely.” On its face, Emerson’s bookishness is odd given that he worshiped activity and had contempt for “meek young men grow[ing] up in libraries.” But Emerson’s reading was charged, active. It offered entry to a symposium out of time. Reading works of genius, he wrote, one “converses with truths that have always been spoken in the world and becomes conscious of a closer sympathy with Zeno and Arrian, than with persons in the house.” A relentless thirst for the nectar of intellectual companionship informs Emerson’s writing. This is what permitted him to read the way he read. He was able to coax what he sought from the pages of a book because of the enthusiasm (his holy word) that charged his entire approach to living. Wrestling with intellectual and spiritual possibilities in conversation with others was a familiar exercise for Emerson. He took this method, this experience, this dialogical energy, to his books, which he believed were as sure a portal as a classroom.

    Yet he never mistook a book for a person, or recommended reading as an adequate substitute for teaching, lecturing, conversing — for the experiential dimension of study. (“Books are for the scholar’s idle times. When he can read God directly, the hour is too precious to be wasted in other men’s transcripts of their readings.”) But if a book is an example of remote humanistic study, what are we to say of digital remoteness? The text or the image is there on the screen, and so is the tiny apparition of the talking teacher, hovering above it. Ideas in some form may certainly be imparted. But is this the full transit to another world that constitutes the fulfillment of humanistic education? Isn’t it rather the case that the screen leaves one where one began? That it is a buffer, a fancy buffer between the student and the world?

    A screen is too familiar to propel a student from her deepest grooves, particularly for a student who has never left her couch. On a screen everything, no matter how vividly presented, is flattened and made less real, and all the realms are compressed and equalized into a comfortable, closable haze. Most importantly, all the Zooming in the world has not established the screen as anything but a simulacrum of human interaction, a dim facsimile of pedagogical experience. One is no more than a partial student when one has no more than a partial teacher, or no teacher at all. Zooming is a stopgap measure that leaves one longing for actual presence, which is the condition of actual learning. It is a lot better than nothing, but nothing must never be the standard.

    Some Possible Grounds for Hope

    I don’t see how we get out of this. There is nothing truer that can be said of this time. It is a perverse measure of its truth that we have been inundated with books and bromides that purport to show the opposite, that have hit upon the way out, the solutions, or better, the solution, the formulas for the miracle, all the how’s and all the why’s. How can so many people understand so much and so immediately, when so many of our torments are so unfamiliar? Isn’t anybody stunned into silence anymore?

    So many words, so many numbers, so many “frames.” They are fortifying, I guess, and we certainly need strength. Let everyone come forward in the dark with their light. But I don’t see how we get out of this, not yet.

    The empty streets of the Covid nights are so candid in their desolation. They are thronged with the people who are not there. They provide a peculiar serenity, in which one can be alone with one’s fear, and take it for a walk.

    Philosophers since Seneca have known that fear and hope are twins. They are alternative ways of interpreting the opacity of the future. 

    If hope were rational, it would be redundant. Hope picks up where reason leaves off, like changing guides at the frontier. Hope is the best we can do with uncertainty. It is an image of happiness that cannot quite be dismissed as an illusion. If it cannot be proven, neither can it be disproven. Its enchantment lies in its cognitive limitation. It comes to an end with knowledge. 

    One of the characteristic errors of the American debate is to mistake the homiletical for the analytical — preaching for teaching. The objective of moral and social thought is not uplift. And as every religious person knows, castigation, too, can be experienced as uplift. It warms the heart to be told that we are all sinners, doesn’t it? Drop a coin in the charity box on the way out, you miserable excuse for finitude, and recover your contentment. It was never really damaged anyway. Of course this high-level complacency is abundantly found among the secular as well. They, too, like a warm sensation of their own shortcomings, as long as you do not overdo it. They, too, are lifted up by the sound of sermons, as in the editorial “must”: “We must restore trust.” Yes, we must!

    For many years I travelled around the country, like an itinerant preacher, chastising American Jews for their ignorance of Hebrew, which is their language even if they cannot speak it. I was received cordially almost everywhere I went. But I became suspicious of this cordiality: after all, I had come to discomfit them. And on the occasions when I did discomfit them — as when, after one of those lectures, a woman came up to me and testily said, “Sir, that was a wonderful presentation, but I did not feel affirmed!” — I smiled politely and triumphantly. (Actually, what I said to the woman was this: “Madam, I did not come all this way to affirm you.”) But those occasions were rare. The futility of my efforts was owed to the tragi-comic fact that feeling bad makes some people feel good. Criticism assures them of their meaningfulness, which is really all they seek.

    “I don’t see how we get out of this.” Thank you for your honesty. It is not nearly as disagreeable as our circumstances.

    If hope and history ever rhyme, in accordance with the poet’s wishes, it will be a soft rhyme, a weak rhyme, a half-rhyme. 

    I don’t see how we get out of this. The country is poisoned. There is contempt everywhere; contempt and certainty. There are also wonderful people doing wonderful things for the weak and the needy and the scorned — a national plenitude of local kindnesses; but all these practices of solidarity have not yet altered the character of our politics and our culture, or banished our furies. Not just yet. The rampaging passions — otherwise known as populism — have not yet exhausted themselves. Perhaps it is just a matter of patience, except that patience is in ideological disrepute and was long ago retired by our technology.

    The greater the suffering, the greater the dream of redemption. An apocalyptic is a man in extreme pain. He can imagine only an extreme cure. He is not concerned that he may cause pain to end pain. He hurts that much. But must the magnitude of the cure always be commensurate with the magnitude of the pain? What if there are cases in which the only genuine relief is gradual relief? This is insulting to the sufferer, who expects his view of his suffering to be definitive. Yet our compassion, our love, does not require that we agree with him. A person in pain knows only one thing, but he will be saved with the help of people who know more things. For example: a person in pain hates time, which is abolished by the immediacy of his torments. He lives (to borrow Robert Lowell’s piercing word) momently. A person in pain experiences time as an eternity. (In this way he resembles a person in ecstasy.) But time may be his ally, insofar as it is the only condition of his healing. Recovering from pain is a way of returning from eternity to time. Or, more practically, of taking concrete and steady and reasoned steps.

    Of course there are sufferers who do not have time on their side. When we discover this about physical ills, we call it tragedy. But we have no right to invoke tragedy about social ills. The tragic sense connotes a certain helplessness about circumstances, or more precisely, about other people’s circumstances. It promotes resignation. But whereas it may be legitimate for me to resign myself to my troubles, it is not legitimate for me to resign myself to your troubles. I can surrender myself, but I cannot surrender you.

    To approach injustice from the standpoint of tragedy has the effect of relaxing the will and shrinking the sense of agency, and even of supplanting ethics with aesthetics. How do you fight tragedy?

    Was slavery tragic? In retrospect, yes. But in its time, no. In its time it was odious and disgusting and abominable. In its time it demanded resistance and abolition. Only evils of the past are tragic. The evils amid which we live are challenges — occasions of responsibility. Tragedy is precisely what we are charged to preempt.

    Was the catastrophe in Syria tragic? Only because nobody stopped it.

    “Interventionism” is now a dirty word. But it signifies more than a controversy — well, I wish it were still a controversy — about foreign affairs. Who ever did the right thing without intervening? Ethical action is always an intrusion, a refusal to leave a situation as one found it. Morality is a theory of meddling. What is intervention if not the Biblical injunction not to stand idly before the spilled blood of another? I do not recall any mention of costs and benefits in the verse. A government, of course, needs more than the Bible, more than high principle, to guide its actions. But does power exist only for the perpetration of evil? What about the costs and benefits of doing nothing? Or shall we acquiesce in the deformities of the world, except when there is money to be made?

    “But it’s complicated”: the streets of the capital, the corridors of power that masquerade as the corridors of powerlessness when it suits them, echo with those allegedly extenuating words. It is always smart to say that a problem is complicated. As if it is the duty of government to pursue justice only when it is not complicated.

    Tragedy, remember, is designed, in its most influential definition, to excite “pity and fear” so as to bring about “the proper purgation” of those emotions. It is a performance that exercises certain feelings so as to annul them. Never mind that those feelings may be put to good use outside the theater. Tragedy is an entertainment.

    Catharsis is the enemy of action. It leaves one spent and sated. It is the orgasm of conscience. I wondered about the relation of catharsis to politics as I joined the protests at Black Lives Matter Plaza. I was not worried about “performativity,” since the public expression of opposition is an essential element of opposition. I was worried about the problem of spiritual stamina, about the durability of the energy in the streets, about the overestimation of excitement, about the preference for the adventure of protest over its pedantic translation into policy. The politics of the streets can make do with catharsis. We will see.

    Concrete and steady and reasoned steps taken patiently and resolutely over time for the purpose of mitigating and eliminating the sufferings of others: in a word, liberalism. 

    The most widespread cliché of our time is “polarization.” Everyone laments it, and many scholars and commentators regard it as the most dire of our ills. It has provided work for a generation of social scientists. That we are living in an age of spectacular social division is undeniable, and the excesses of this discord are sometimes lunatic and criminal. But a little intellectual pressure needs to be put on this obsession with our lack of harmony. Is it worse than Covid, or discrimination, or poverty? Of course not. There are those who argue that it will be impossible to address those monumental wounds in our society unless we overcome polarization. Barack Obama squandered the first two years of his presidency, when he had a majority in both houses of Congress, on lyrical exhortations to bipartisanship. But there is nothing freakish, or surprising, or un-American, about partisanship, even extreme partisanship. It is the stuff of which politics is made. But then one must take politics seriously — more, one must think highly of politics, and even revere it, and recognize that its ruthlessness is not inconsistent with its nobility; which is to say, one must come to value power.

    The words “value” and “power” look strange together, don’t they? The juxtaposition certainly makes many liberals uncomfortable. They have been mildly embarrassed about power for many decades, probably since Vietnam. But if you are not serious about power you are not serious about change.

    If despair is born of powerlessness, then power is a reason for hope. It sounds harsh and unlovely, but there is no other way to protect human dignity and its political home, which is democracy.

    Political ideas are not poems. They do not exist to deepen our grasp of reality. Their objective is to modify reality. For this reason, political thinkers may be held accountable for the consequences of their thoughts. Anyone who lacks the stomach for consequences should stick with poetry. (For the purpose of a rich life, however, it beats politics.)

    When the mad and beautiful Phil Ochs was asked for his verdict on the 1960s, he replied: “They won the war, but we had the best songs.”

    Polarization is one of the effects of partisanship, and partisanship is one of the effects of human association.

    To acknowledge reality without becoming complicit in it. To correct the world without destroying it. Those were the accomplishments of James Madison. His genius, and it was nothing less, was for being an optimist and a pessimist, an idealist and a realist, at the same time. He got the balance right, while the globe is littered with the ruins of political experiments that got it wrong. The equilibrium was revolutionary, especially on the question of the place of conflict in human affairs.

    A revolution of equilibrium: the American innovation.

    A reading from The Federalist Papers, 10. Please rise.

    The latent causes of faction are thus sown in the nature of man; and we see them everywhere brought into different degrees of activity, according to the different circumstances of civil society. A zeal for different opinions concerning religion, concerning government, and many other points, as well of speculation as of practice; an attachment to different leaders ambitiously contending for pre-eminence and power; or to persons of other descriptions whose fortunes have been interesting to the human passions, have, in turn, divided mankind into parties, inflamed them with mutual animosity, and rendered them much more disposed to vex and oppress each other than to co-operate for their common good. So strong is this propensity of mankind to fall into mutual animosities, that where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts. But the most common and durable source of factions has been the various and unequal distribution of property. Those who hold and those who are without property have ever formed distinct interests in society. The regulation of these various and interfering interests forms the principal task of modern legislation, and involves the spirit of party and faction in the necessary and ordinary operations of the government.

    Here endeth the reading.

    So be of good cheer: it was always nasty. To borrow the famous phrase of Madison’s successor in the formulation of the American philosophy, the better angels of our nature are not the only angels of our nature. The American system was constructed on the assumption that conflict is ineradicable. The foretold conflicts concern both principles and interests, and the expectation is that they will be brutal. “The causes of faction cannot be removed” is Madison’s conclusion. Out of this dourness he designed a democracy.

    It should be added that the conflicts that constitute a permanent feature of society are not — as we, in our psychologizing habits, often prefer to think of them — misunderstandings. There is no clarification, no revision of language, that will make them vanish. A misunderstanding is an apparent conflict, a temporary conflict. It can be resolved with some exploration and some patience, and an apology. But a contradiction between worldviews cannot be resolved; it can only be respected, and then managed. And if the opinions are sincerely and thoughtfully held, neither side has anything to apologize for.

    Error is a form of innocence. There are many worse things in life than being wrong. (This is the courtesy that Americans seem no longer able to extend to each other.)

    Respect is more valuable, and more arduous, than reconciliation.

    The alternative to “polarization” is not consensus. There will be no consensus. Madison already warned against “giving to every citizen the same opinions, the same passions, and the same interests.” In the American tradition there is no fantasy of unanimity. Social agreement is not our eschaton. The American hypothesis is that consensus is not necessary for cooperation, that social agreement is not necessary for social peace. 

    The horror of uniformity is the democratic idea itself.

    In his painstaking attempt to describe an “overlapping consensus” for a democratic system that must accept “the fact of pluralism,” John Rawls admitted that “we do not, of course, assume that an overlapping consensus is always possible, given the doctrines currently existing in any democratic society.” It is a bleak moment in his heroically optimistic enterprise. I think it passes too swiftly. He was a philosopher and he insisted upon a philosophical conception of justice, and for this reason he dismissed what he called a “mere modus vivendi.” He accused Madison (and Hobbes and Locke and Hume and Kant) of philosophical failure, of contenting themselves with the ideal of compromise between interests. Rawls thought that such a purely improvisational system was too fragile. Indeed it is; but it may be the finest we can do — one fragile compromise after another fragile compromise until the end of time. The problem is not only that we are not a nation of philosophers; it is also that in a pluralist society there is nothing “mere” about a modus vivendi. Madison should not be treated as the first transactionalist. It is dangerous to delegitimate compromise philosophically. Indeed, many unphilosophical activities hide philosophical principles and teach philosophical lessons. There are worse failures than theorylessness.

    I am always a little shocked, and pleasantly so, by the Founders’ ease about interests. They were unembarrassed by human partiality. And from the grubby they rose to the sublime.

    The United States Constitution is the greatest tribute to, and the greatest rebuke of, Hobbes. 

    A philosophy and a system of government that proposes to accept the collisions of society and leave the cacophony alone is a prescription for tough-mindedness. Or more accurately, tough-mindedness in the cause of the tender mercies. We are called upon to be not only sensitive but also effective.

    Too many worriers about “polarization” are so sentimental, so nostalgic, so exquisite in their sensitivity to the injuries of democratic combat, so anxious that taking a side might be a human failure. Yet an open society is a rough society. Polemic is one of the central methods of persuasion. “Deliberative democracy” is not the work of professors, even if it is the invention of professors.  

    We are a society that makes a cult out of honesty and then wants to be protected from it. 

    In an open society, inoffensiveness may be a delinquency of citizenship.

    Democracy is wasted on the timorous. The emboldening of ordinary men and women is its very purpose.

    A reading from The Social Contract, Book I. Please remain seated.

    Properly understood, all of these clauses [of the social contract] come down to a single one, namely, the total alienation of each associate, with all his rights, to the whole community… Instantly, in place of the private person of each contracting party, this act of association produces a moral and collective body, composed of as many members as there are voices in the assembly, which receives from this same act its unity, its common self, its life, and its will… For if the opposition of private interests made the establishment of societies necessary, it is the agreement of these same interests that made it possible… Either the will is general or it is not; it is the will of the people as a body, or of only a part… There is often a great difference between the will of all and the general will. The latter considers only the common interest; the former considers private interest, and is only a sum of private wills… In order for the general will to be well expressed, it is therefore important that there be no partial society in the State…

    Rousseau adds a footnote: “In order for a will to be general, it is not always necessary for it to be unanimous, but it is necessary that all votes be counted.” Not always! There is here a dream of social and political seamlessness, which is achieved by the dissolution of the individual in the community, the collectivity, the state. It was appropriate that the animadversion about unanimity, the mild concession to the stubbornness of difference, be a footnote, because in the holistic ethos of Rousseau’s state it really is just a footnote. These passages, and the notorious remark, also in Book I, that “whoever refuses to obey the general will shall be constrained to do so by the entire body, which means only that he will be forced to be free,” provoked a renowned historian to describe Rousseau’s ideal as “totalitarian democracy.” 

    He aspires to a perfect union, but we aspire to a “more perfect union.” The difference between democracy and totalitarianism is the difference between the belief in perfectibility and the belief in perfection. (I do not concur that Rousseau was a totalitarian, exactly; but his democracy repels me. I am an American.) He holds that the individual must “alienate” his rights, but we hold that the individual’s rights are “inalienable.” If you wish to understand the philosophical and political excruciations that France has endured in the wake of the murder of Samuel Paty, may his memory be a blessing, you could do worse than begin with the distinction between these notions of alienation and alienability.

    There they do not wish to recognize difference. Here we wish to recognize nothing else. Or so it sometimes seems.

    Is a nation a community? The communitarians among us would like to think so. It is certainly the case that a sub-national idea of community would leave us a state of states, a community of communities, a bubble of bubbles, a collection of monocultures paradoxically justified by multiculturalism. This would amount to a degradation of the pluralist promise, according to which we can live together and apart. In order to cohere as a nation, we must extend ourselves beyond our particularities, beyond our cloisterings. A homogeneous nation has no need of universalism, but a heterogeneous nation is proof of its beauty.

    Of course there is no such thing as a homogeneous nation. It was one of the necessary fictions of nationalism, and minorities have been paying dearly for it ever since. There is always someone unlike ourselves within our borders, and even if there were only one such person, he or she would still be the test of our decency. (And he or she may think it is me.) 

    Perhaps a nation should not be a community. Perhaps it is enough that it is a nation.

    In 1813, in a case in New York called People v. Philips, which considered the question of whether a Catholic priest could be forced to provide information that was obtained in the confessional, a lawyer named William Sampson told this to the court: “Every citizen here is in his own country. To the protestant it is a protestant country; to the catholic, a catholic country; and the jew, if he pleases, may establish in it his New Jerusalem.” An epochal declaration, a genuine liberation from the Old World. But what he described is both a blessing and a curse. It is pluralism carried to the limits of psychosis. For even if we are all of those countries, we are not any of those countries. We are a whole that does not devour its parts, but we are still a whole.

    Who in his right mind would wish to live only among his own? Give me goyim, please! Traditions wither in isolation. Only the infirm of identity seek more of themselves.

    It is the stupendous irony of a multiethnic society that it exposes the limitations of particularism. 

    In 1966, the brilliant Jewish historian Gerson D. Cohen gave a commencement address at the Hebrew Teachers College in Boston that he called, with a hint of wickedness, “The Blessing of Assimilation in Jewish History.” Reading it now, when a soft kind of separatism is enjoying a new prestige, is exhilarating. “A frank appraisal of the periods in which Judaism flourished will indicate that not only did a certain amount of assimilation and acculturation not impede Jewish continuity and creativity, but that in a profound sense, this assimilation and acculturation was a stimulus to original thinking and expression, a source of renewed vitality.” Our borders give us our shape, but their porousness contributes to our substance. A border is not a wall, it is the opposite of a wall, and the confusion of a border with a wall is a prescription for social and cultural disaster. 

    In the name of authenticity, people imprison themselves. And when they do so, these loyal sons and daughters, they usually insult their ancestors, who were less afraid of influences.

    The recent history of American society can be told as a story about the vicissitudes of the idea of integration.

    Differences are not discrepancies, except from the haughty standpoint of somebody else’s norm. They do not have to be brought into line. But we are not wanting in arguments for difference. Everybody screams their difference, which makes them all so tediously alike.

    Permeability ought to be a source of pride in mature individuals and mature societies. 

    A possible ground for hope: the individual. In a country in which people are masterfully manipulated by disinformation and demagoguery, in an electorate that increasingly consists of mobs and herds and gangs, in a society in which citizens are encouraged to seek intellectual strength in numbers, it is past time to remind ourselves of the dignities and the powers of the ordinary man and woman, of the autonomy of adults, of the ability of individuals to think for themselves and rise above the pernicious nonsense that their individuation is what ails them. 

    The religious extol the uniqueness of souls, the secular extol the uniqueness of selves. In this way they issue the elevating challenge that their integralist currents, religious and secular, retract.

    You cannot take your country back until you take your mind back. 

     

    I used to like bowling alone. Not always, but sometimes. Anyway, there is nothing like company to make you feel lonely. Loneliness is a social emotion.

    Individualism is a far larger dispensation than egotism, which is not to be confused with it. Egotism is a debasement of individualism, in the way that selfishness is a debasement of selfhood. The problem of individual self-love is as nothing compared to the problem of collective self-love. 

    The moral superiority of the community to the individual seems dubious to me. Belonging does not insulate anybody from transgression. Worse, there are depredations that we commit together that we would not commit alone. The haters among us, the killers among us, they may be members and they may be loners. They may speak for themselves and they may speak for their group. And communities may be kind or cruel. It’s a wash. The human heart is busy everywhere.

    In Hebrew, the root for “hope” is the same root for “gather together,” as in “Let the waters under the heaven be gathered together to one place.” As the authoritative concordance notes, sperare and congregatio. I have often pondered this mysterious etymology. It suggests that hope is premised upon the end of a dispersal. But what has been dispersed that must be brought together — the community or the individual? If it is the former, if united we stand and divided we fall, then hope is to be found in the reconstitution of community. If it is the latter, then the dispersed self is what bars the way to hope, and the reconstitution of the self will confer the sought-after encouragement. I am reminded of a work of clinical psychology that appeared in the 1960s called The Psychology of Hope, which concluded with a chapter on “the therapy of hope.” In his account of what he calls “a therapeutic tour de force,” the author describes a clinician who “explicitly and deliberately employed communication of his high expectations of patients as a therapeutic procedure.”

    The critics of individualism, the whole army of them, propound a doctrine of demoralization. They have no faith in the actual person, or worse, they detest her. This is uncharitable, and also inaccurate about human capabilities. Given the irreversible fact of individuation, it can be spiritually damaging. 

    There is another option: that divided we stand. Madison’s motto!

    The Covid-19 virus came along to illustrate what genuine isolation is. Monads in masks now yearn nostalgically for their allegedly atomized life before the pestilence. They miss all the communal meetings and social minglings that were said to have been lost. Except of course the political ones, which have all thrown epidemiological caution to the winds.

    In a period of national emergency, and the Trump years were such a period, the ubiquity of politics, its penetration into the deepest recesses of life, its saturation of experience, is understandable. If you believe that your cause or your country is in peril, you will become a sentry and a soldier. There is integrity to such an intensity of commitment, though the question of whether your analysis is correct, whether reality warrants your panic and your politicization, is an important one. But liberals and conservatives both used to believe, as an axiom of both their worldviews, in the limits of politics, in fair weather or foul. Then foul weather arrived and their wisdom collapsed.

    Instead of decrying “polarization” and dreaming of the disappearance of division, we might turn our attention to the overpoliticization of human existence in America. There is no longer any domain of life from which politics is barred. People who deplore the destruction of privacy by Silicon Valley acquiesce in the destruction of privacy by politics. Perhaps the one prepared them for the other, and softened them up for the tyranny of publicity and the public. People who engage in politics for the defense of dignity acquiesce in the destruction of dignity that attends the destruction of privacy. 

    The first casualty of our overpoliticization was our culture, just about all of it. Art is now politics by other means, full stop. What fools we are to rob ourselves of what we do not have enough of, and for the sake of what we have too much of.

    “All art is political,” says Lin-Manuel Miranda. Bullshit.

    The most chilling instance of our overpoliticization, of course, is the ideological repudiation of science. When told by the government that their lives were in danger, millions of Americans said only, don’t tread on me. There is no longer any Archimedean point outside these political self-definitions.

    As for the progressive bedroom, and the infiltration of intimacy by political standards for sexual behavior: make love, not history.

    What would a post-“polarized” America look like? I have a visionary inkling. It would consist of men and women who are not only who they vote for and not only who they agree with. They would hold political convictions and defend them, but they would be known also, and mainly, by other beliefs. They would accept the political dissonance but make themselves a little deaf to it, out of respect and for the sake of comity. They would have friends whose views they despise. They would not look forward to family gatherings as an occasion for gladiatorial combat about the issues of the day. They would give up their erotic relationship to anger, and to rectitude. They would renounce their appetites for last battles and last judgements. They would refuse to let even their own extremely correct views interfere with the fullness of living. They would march, and then they would come home. They would mobilize, and then repair to those realms in which mobilization is beside the point. They would not display their politics as proof of their goodness, because they would take note of the good people on the other side. (There are sides, of course, where no goodness can be found, but they are not many.) They would forgive.

    Joy in the struggle for justice: outside the contested epiphanies of mysticism, is there a more astonishing spiritual accomplishment? It is joy in the face of misery, after all; joy amid injustice, but deployed against it. When I watch films of the civil rights movement of the 1960s, I am always dumbfounded by the joy, which somehow never got in the way of strategy. What powers of soul! 

    In ancient Greece there was a sect of philosophers known as the Elpistikoi: the Hope-ists, or in another translation, the Hopefulists. We know nothing about them. They are mentioned only once, in Plutarch, in a discussion about “whether the sea or the land affords better food.” According to a certain Symmachus, “they believe that what is most essential to life is hoping, on the grounds that when hope is not present to make it pleasant, then life is unbearable.” Or in another translation: “in the absence of hope and without its seasoning life is unendurable.” Seasoning, indeed: Symmachus compares hope to salt. This is a utilitarian case for hope, which is undeniable, because in the absence of any verification we cling to hope entirely for its effects. But it is also more: for the hopefulist, life was not bearable or unbearable, but unbearable or pleasant. The hopefulist does not wish only to make it through the night. He wants a pleasant morning, too, and pleasant days.

    Is hope a pleasure? I suppose it depends on what one fears. There may be terrors that hope cannot dispel. Or does hope rise to match them in scale? Hopelessness, in any event, appears when ignorance has passed. Ignorance is the soil of hope, which may be a chapter of its own in the legend about ignorance and bliss. 

    Not so, say the economists, whose subject is now the whole of life. Hope, they say, is an assessment of probabilities. But the more the probabilities are known, the less need there is for hope. If the probabilities could be entirely known, we would all be enlightened and hopeless. I am not sure I like the sound of that. But hope is not an assessment. It is a prayer — perhaps the only prayer that the godless, too, can pray.   

    Symmachus, lying in the Ionian sun, picking at salted delicacies, voluptuously hoping. 

    And back here, in the winter wastes, two possible grounds of hope: a new vaccine and a new president. We are not yet getting to the end.

    A Memory

    A sickness came over me
    whose origins were never determined
    though it became more and more difficult
    to sustain the pretense of normalcy,
    of good health or joy in existence —
    Gradually I wanted only to be with those like myself;
    I sought them out as best I could
    which was no easy matter
    since they were all disguised or in hiding.
    But eventually I did find some companions
    and in that period I would sometimes walk
    with one or another by the side of the river,
    speaking again with a frankness I had nearly forgotten —
    and yet, more often we were silent, preferring
    the river over anything we could say —
    on either bank, the tall marsh grass blew
    calmly, continuously, in the autumn wind.
    And it seemed to me I remembered this place
    from my childhood, though
    there was no river in my childhood,
    only houses and lawns. So perhaps
    I was going back to that time
    before my childhood, to oblivion, maybe
    it was that river I remembered.

    Trash

    General consensus in our home
    was candy or soda would kill us,

    or else rot our constitutions in some
    larger, metaphysical sense. Body & soul,

    to cite the old wisdom. In protest,
    my big sister & I would sneak the stuff

    through customs whenever we could:
    Swedish Fish & ginger beer, Kit-Kats,

    Mary Janes & Malta lining the sides
    of each pocket like the contraband

    spoils they were, smallest joys,
    our solitary arms

    in this war against the invisible
    wall our parents built to bar

    the world of dreams. Now that
    we are older, the mystery is all

    but gone. We were poor. Teeth
    cost. In the end, it was the same

    as any worthwhile piece
    of ancient lore: love obscured

    by law, our clumsy hands
    demanding heaven, forgetting

    the bounty in our bellies, the miracles
    our mother made from Jiffy mix

    & cans of salmon, all the pain
    we never knew we never knew

    held there, against our will,
    in the citadel of her care.

    Reparation

    How are you feeling is always your opening question
    & you know me. I always take it the wrong way
    when you say it like that.
    I hear you asking for damage reports, the autobiography
    of this pile of brown rubble bumbling on
    about his father’s beauty, this chasm splitting
    the voice in his unkempt head & the one
    which enters the realm of the living.
    You are good to me, & this kindness, I think, is not reducible
    to our plainly economic relation, the yellow carbon
    receipt at the end of each session a reminder
    that we aren’t just girls
    in the park catching up, estimating the cost
    of our high school errors.
    I never call you my analyst, because
    that makes me sound like a body
    of work, some extended meditation
    approaching theory, if only asymptotically.
    Anyways. I’m alright today. I remembered
    to eat breakfast, & went for a run uptown.
    I gave myself credit for trying to change.
    Something in me awakened, today,
    ready for liftoff. It sang.

    The Hatboro Blues

    To the memory of friends 

    The first thing I remember thinking about what we now call “the opioid crisis” is that it was making everything really boring. It was 2010, I was in eleventh grade and at a house party about which I had been excited all week. I had with me a wingman in the form of my buddy Curt, and a fresh pack of smokes, and — please don’t think less of me — 750 milliliters of Absolut blueberry vodka. In short, all that was needed for a good night.

    And yet the party was a bust. It seemed that every third kid was “dipped out,” as we called those in drug-induced comas, lit cigarettes still dangling from their lips. Even the terrible rap music wasn’t enough to wake them. Nobody was fighting, nobody was fornicating, nobody was doing much of anything. There was nothing about this sorry shindig that set it apart from many others just like it which were still to come, but it sticks in my mind now for a melancholy reason: It was the point at which I realized that something was very wrong.

    What follows is not some hardcore Requiem for a Dream kind of yarn. Different movies apply. My high school experience was plenty Dazed and Confused, but with shades of Trainspotting and maybe a flash of Drugstore Cowboy. It was like The Breakfast Club, if Claire had carried Percocet in her purse and the dope in Bender’s locker had been white, not green. This is a story about how a kid who enters high school as a Led Zeppelin-loving pothead can leave four years later with a needle sticking out of his arm. (Or not leave at all.) It is a tale of a town and a generation held hostage by Purdue Pharma — the story of every place on the edge of a big East Coast city flushed with cheap heroin and prescription pills in the mid-to-late aughts. Maybe you already know how it goes.

    Fifteen miles north of Philadelphia’s City Hall sits Hatboro. It is a majority-white town with an average per capita income of $35,000 per year. A set of train tracks bisecting the town can shoot you into the city in a few minutes and for a couple of bucks. My elementary school, Crooked Billet, was named after a Revolutionary-era battle that took place on its grounds on May 1, 1778. Every year on that day kids don tricorn hats and sing songs about America. The town is part of a larger school district encompassing a neighboring township called Horsham, which gets much wealthier as it creeps closer to Philadelphia’s Main Line. In high school, some kids lived in McMansions and drove new cars; others took the bus. The public schools were good.

    I was raised, along with a younger brother and sister, by a single mom who worked as a hairdresser and a waitress. I spent every other weekend with my father, who lived in the next town over and founded a tree and landscaping company and later worked in real estate. We qualified for the free lunch program at school, and some years were tougher than others, but we were not poor and always had everything we needed. One week every summer was spent on vacation in Wildwood, New Jersey. I began my career as a busboy in an Italian restaurant when I was fourteen and kept the job all through high school. Later I became the first person in my family to go to college.

    It started off as your regular suburban experience, innocent enough. I smoked my first cigarette on the same day as my first toke of pot, in the last week of eighth grade. The cigarette was a Marlboro Red, provided by a friend’s older sister who everyone thought was hot. (Regrettably, I smoke them to this day.) Weekends were spent with my three best friends, guzzling Canadian whisky lifted ever-so-gently from a parent’s liquor cabinet and chain-smoking in various parking lots. We were long-haired little gremlins who liked to venture into the city for Warped Tour, Ozzfest, and Marilyn Manson. We loved Cypress Hill and named my friend’s $45 bong “King Zulu.” We hated the rich fucks (that was our term of art for them) who wouldn’t shut up about tie-dying their shirts for the next Dave Matthews concert.

    Sandwiched between a scrap-metal yard and the Revolutionary-era battleground turned elementary school were the aforementioned train tracks and a pathetic patch of mud and trees we called “the woods.” It was to us what the country club was to that other Pennsylvanian, John O’Hara: a place to get soused and settle scores. A few yards down the tracks lived a homeless Vietnam veteran whom we’d christened “the Bum.” He would walk with us to a local bar to buy forty-ounce bottles of beer — usually Olde English or Steel Reserve — in exchange for a couple of bucks. (Bars in Pennsylvania sell beer-to-go, and many of them still allow you to smoke inside.) My best friend at the time was legendary for being able to down an entire forty in under sixty seconds. We played a clever game called “Edward Fortyhands,” in homage to the Tim Burton movie, in which a forty-ounce bottle would be duct-taped to each hand and use of both your mitts would not be regained until the bottles were emptied. A guy named James at the local Hess gas station would sell us cigarettes underage and one woman who operated the McDonald’s drive-thru traded Newports for dollar-menu items. The world was our malt liquor-soaked oyster.

    Another hangout was a place we called “Chronic Bay.” (We were heavily into Dr. Dre’s “The Chronic” back then.) It was a pond-sized storm drainage ditch located behind a sewage processing plant and an abandoned Sam’s Club that was shielded from view by a tree line. It smelled, literally, like shit, but it was the perfect place to smoke weed and drink forties undetected. Our soundtrack at the time included lots of Sublime, Biggie Smalls, and some tragically awful emo albums. Most of my friends were skaters who loved to watch “Baker 3” on repeat. Those were the carefree days when everything felt like a party, the days before pregnancies and overdoses. Nobody was dying, or making their mom sad, or falling asleep behind the wheel, or stealing from their grandparents, or going to jail.

    People used to talk a lot about pot as a “gateway drug,” but I think about what came next in terms of floodgate drugs: the floodgates of an over-prescribed society opened, and suddenly drugs were everywhere. Some people would learn where or how to draw the line, but others could not see it; and crossing it became a death sentence. After booze and weed we all started to play around with prescription pills in a way that kept getting ratcheted up. It started light, with Klonopin (“K-pins”), and then Xanax.

    The first time I took Xanax was in a McDonald’s parking lot. I took both of the two-milligram “bars” my friend Sam plopped in my hand, felt pretty damn loose, and then my memory disappeared.

    Most of my friends liked to eat pills, some more than others. In the first month of eleventh grade, in 2009, a black comedy called Jennifer’s Body, starring a salacious Megan Fox as a demonic succubus, came out in theaters. A friend named Becky piled us into her Honda Accord for a trip to the movies. Most kids sneak candy or soda into the movie theater. Our clandestine appetites were different. We popped Klonopin and smuggled into the theater a backpack stocked with “Four Loko,” the fruity malt liquor concoction that contained so much caffeine that its manufacturer was later forced by the FDA to tweak its recipe, because people were dropping dead after drinking it. Why would anyone pay money to see a movie in this state? Most of us were passed out before the credits rolled. But that’s just how we rolled. Everything seemed like an occasion to get “fucked up,” even standardized testing. Before the PSATs, Sam ate so many Xanax “bars” that halfway through the test he dropped his sharpened number 2 pencil and told the proctor that if she didn’t let him out of the classroom he was going to vomit all over her. (She let him out.)

    Sharon was a year older than me and lived in the neighborhood. The year her mother was sent to jail, Sharon’s house became our free-for-all party pad and experimentation fort. Sharon’s scratchy baritone made for the perfect imitation mom-voice, so she could supply an alibi to any anxious parent inquiring about their child’s whereabouts. It always worked, including on my own mother. One night at Sharon’s we couldn’t get our paws on any preferred substances, and so Collin, our friend with the stickiest fingers, had a brainstorm: He would go to the home of a girl he was seeing and raid her parents’ medicine cabinet. After he came back with a bottle of what we thought was pharmaceutical-grade sleeping medication, we decided to divvy up the bottle, pop all the pills at once, wash them down with forties, and have a contest to see who could stay awake the longest. Fingers were crossed that we would be rewarded with hallucinations. But things went awry and it was only later, after consulting our handy-dandy Pillfinder (“Worried about some capsules found in your teenager’s room? Not sure about those leftover pills still in the bathroom cabinet? There’s a good chance that our Pill Identification Wizard (Pill Finder) can help you match the imprint, size, shape, or color and lead you to the detailed description in our drug database”) that we realized the Seroquel we had ingested was not knock-off Ambien but an antipsychotic medication used to treat schizophrenia. Oh well.

    Meanwhile, all the regular stuff associated with teenage development continued apace. I had some bad haircuts, kept decent grades, and rarely missed a day of work at the restaurant. (There was that one time, when Collin, Sam, and I each ate an eighth of magic mushrooms at midnight, went out to play in a state-of-emergency blizzard, and I missed a brunch shift the next morning. Otherwise I was a model employee and my bosses loved me.) I was the same bookish kid I had always been, devouring every Harry Potter and Lord of the Rings book in the library. I shared a room with my little brother. I hung a Pulp Fiction poster on my wall and bought CDs at the mall. I lost my virginity. I got my permit and then my license. My father bought me a 1999 Nissan Maxima with 190,000 miles on it for $2,000 and taught me how to drive a stick shift.

    Wheels meant freedom and access — to fine things, like trips to the shore, but to trouble, too. Now that our group was mobile, all my friends suddenly became two-bit drug dealers. Usually they had only an ounce or less of pot to peddle, but sometimes more. I held a pound of weed for the first time when a friend asked me to drive to nearby Norristown to pick it up and stash it in the trunk of my car. (Incentive: “I’ll fill your gas tank and smoke you up on the way.”) Most days after school my Maxima was transformed into a roving dispensary of marijuana and other delights. One night I decided to vacuum the thing and install some new air fresheners. Miraculously, the next day the school announced a surprise search of the grounds by the police and their drug-sniffing dogs. Midway through science class a principal knocked on the door and beckoned for me. The whole classroom shifted to watch as I traipsed out, fate unknown. We walked down the hall in silence and approached the exit to the parking lot, where a sortie of my buddies — who didn’t know I had just wiped “the whip,” as we called the car — had congregated with looks of abject terror on their faces to watch the pooches encircle my lemony-scented ride. Even though it had been cleaned, the dogs couldn’t help but stop on their adventure through the school’s parking lot. You can imagine the dismay of the principal and the officers upon finding nothing harder than a pack of cigarettes and some “Rohto Arctic” eye drops inside. As I say, a miracle.

    One friend, high on something or other, crashed his car through a storefront on the town’s Main Street. Later, after a new facade was constructed, we joked that he had merely given the place a free facelift. (No one was seriously injured.) Another time I was cruising around with my friend Ethan when a drug dealer named Pete got in touch. For reasons that now seem inexplicable, we thought Pete was cool and that his imprimatur meant something. At the time he was dating Diana, a beautiful brunette and a real Calamity Jane who had flitted in and out of our crew since the early days of eighth-grade summer, when she would never turn up any place without a Gatorade bottle full of vodka and a pack of Newport 100s. So when she dialed me up to say that Pete had an $800 bag of cocaine from which a modest profit could be made, and did I want to move it for him, I had to take a minute to think about it. Ethan and I both looked at each other and blithely shrugged, but my gut told me it was maybe a bad idea to become a coke dealer. Besides, I had a job already, a real one. I said I was honored but politely declined and hung up the phone.

    Then Ethan’s cell started to ring — it was Diana. He said yes, dropped out of school the next week, and started selling the pile of white powder, gram by gram. This posed two problems for the rest of us: We liked coke and we had no self-control. By the time the weekend rolled around, half the bag had disappeared up our little noses. Even worse, Ethan’s mother found the rest under his bed, freaked out and flushed it. We dodged Pete for as long as possible, and then he turned up on Ethan’s front lawn with a couple goons and baseball bats. Poor Ethan’s parents were left with no choice but to call the cops. Pete eventually backed off, but Ethan’s credit around town was pretty low afterward and there were more than a few parties to which we couldn’t bring him.

    Drugs beget drugs and things begin to blur. The halcyon days of fat blunts and warm beer in the woods were firmly in the rearview. Movie shorthand again: if the ninth and tenth grades were Fast Times at Ridgemont High, junior and senior year were more like Valley of the Dolls, all the Spicolis turned to fiendish Neely O’Haras. And it was not just my raggedy clique that was gobbling pills like Pac-Man. The vicissitudes of the lacrosse team and the Richie Rich kids from up the way seemed to mirror our own. Next came Percocet, an opiate, and therefore in the same drug family as heroin. “Perc 10s” and blueish “Perc 30s” could be crushed up and snorted. Luckily for me, I disliked the way Percocet made me feel. I didn’t enjoy the stomach pains, the itches, the bouts of narcolepsy — or the feeling that I was an actual drug user as opposed to a dumb kid having fun.

    When you are a teenager, it is of course easy to make bad choices, because you feel invincible. Maybe the worst decision one could make in pilltown was to try OxyContin. You can have fun, as we all did, with Klonopin, coke, Xanax, Percocet, Ecstasy, and tabs of acid, but there is usually no coming back from OxyContin. A seventeen-year-old doesn’t stand a chance. Adults who are prescribed it for legitimate reasons barely stand a chance. OxyContin’s not a drug that one can “dabble” in. It is synthetic heroin in pill form manufactured by a gigantic pharmaceutical corporation, and in Hatboro it wasn’t hard to find 40 milligram doses of it — “OC 40s” for short, or the double dosage “OC 80s.” Ingested orally, OxyContin is meant to mete out pain relief over a number of hours, but the “extended release” could be circumvented for an instantaneous high by crushing and then snorting the pills.

    In 2010, when I was in eleventh grade, Purdue Pharma tweaked its production so that the pills could no longer be crushed. It was like trying to plug a sinkhole with a wine cork. (Studies would later argue that this tweak only pushed people more quickly to heroin.) By then we all knew someone who was a full-blown “jawn head,” as we called those addicted to OC’s. Maybe it was the kid next to you in homeroom who stopped showing up to school. Maybe it was a friend from the grade above. Maybe it was an older sibling. There was a stupid rap song called “OxyCotton” extolling the joys of OC’s and it became a kind of unofficial anthem of my high school, Hatboro-Horsham High School, now nicknamed “Heroin High.” The song was a menacing joint by an otherwise obscure rapper named Lil Wyte. One verse, rapped by Lord Infamous, went like this:

    Scarecrow, scarecrow what’s that you’re popping

    A powerful pill they call Oxycontin

    But it’s so tiny and it catch you dragging

    Haven’t you heard big things come in small packages

    I prefer the oranges with the black OC

    Take two and you cannot move up out your seat

    Some people melt ‘em down in a needle and shoot ‘em up

    But I pop ‘em with Seroquel like glue, I am stuck

    This was hardly just a street drug, though. With so many people’s parents being over-prescribed opiates, nabbing pills out of a medicine cabinet became my generation’s version of raiding the liquor cabinet. In this way one of my earliest friends, Danny, got hooked. He lived two streets over and was in the grade above me. We’d known each other since we were in diapers. “In the beginning it was fun, there’s no two ways about it,” he now recalls. “If it wasn’t fun, we wouldn’t have done it. I don’t know if that was the only way we knew how to have fun or if we just took it to another level. Kids in different parts of the country will drink and party and take it to a certain level and there’s nothing else readily available so it fizzles out. Around here, it’s like you partied and then you met older kids and the older kids were doing this, and then, somehow — peer pressure, wanting to fit in and be cool — you somehow got into that.” The way he said it, “somehow” was another word for inevitably.

    I never touched the stuff, not because I was smarter than anyone else; I was just more of a wimp. I was already trepidatious owing to some unpleasant experiences with Percocet, and OxyContin seemed genuinely frightening. By now the kind of havoc that the drug could unleash was everywhere apparent, and snuffing the fun out of house parties was just the start. An older brother type with whom I had worked at the restaurant since the day I was hired was no longer funny, smart, or cool: He was a confirmed and abject jawn head, a zombie. It was heartbreaking to watch someone’s personality dim and die before he was even old enough to vote. You had to look out for your own, and my best buddies and I made a pact that, no matter how far we pushed our partying, we would stay away from OC’s. Still, everything was being warped around us. Even our mood music morphed from metal, grunge, and ’90s hip hop into the real hood stuff coming out of North Philly at the time, mix tapes about “trapping” and being “on the block” and pushing drugs 365 24/7 rain or shine. I hate to sound like Tipper Gore, but I believe that the music, if it did not directly influence us, at least reflected the spiraling and trashy subculture of an ostensibly nice town littered with drug baggies.

    Hatboro is just across the city line and a thirty-minute drive from the open air drug markets of North Philly, known as “the badlands.” That is where all the heroin comes from once it is pulled from the docks and flooded through the streets. OxyContin is expensive, but a $10 “stamp bag” of heroin does the trick just as well. And so before long, in a kind of irreversible entailment, all the jawn heads devolved into dope heads, actual heroin addicts. Ground zero for dope was — and still is — an intersection called “K & A,” where Kensington and Allegheny Avenues meet in the Kensington neighborhood. The streets that spiderweb out from that junction are an addict’s bazaar, a warren of narrow blocks in which dealers sit on porches shouting out their merchandise to passersby. You don’t even have to know someone to collect. When cops roll down the block, the dealers simply retreat back inside. This is the hellish district in which suburban mothers go looking for their heroin-addicted children, bringing them peanut butter and jelly sandwiches or a new coat if they can’t coax them to come home. Half the kids on those streets are from towns just like mine.

    I started hanging out in the city more when Becky — she who had driven us to the movies to see Jennifer’s Body — began dating Matt. He was a year older, out of school, and living in a one-bedroom apartment on Rising Sun Avenue, about a fifteen-minute drive from the open air drug markets. Now drugs were more attainable than ever. A new cast of shady characters floated into our orbit and the old ones just got shadier. One night at Matt’s I pawned some of the Xbox 360 games I had received for Christmas to purchase a bag of ecstasy pills that turned out to be cut with methamphetamines. The red pills emblazoned with stars and the green ones imprinted with palm trees kept me, Sam, and Collin up all night — Sam vomited every hour on the hour and we pondered bringing him to the emergency room — and sent us into horrible withdrawal the next morning. It was the worst I had ever felt in all my short life. The kid who sold us the dirty E-pills, also named Matt, had his newborn baby with him that night. I can still remember Matt fishing for a Newport in his pocket while handing me his baby and saying “Here, you look like you’re good with kids.” That Matt is dead now. When I bumped into the baby’s mother at a bar last year, we didn’t even bother mentioning that fact. It was the order of things. The other Matt became an addict and a father and then, last I heard, got clean. Becky has two rugrats herself and just sent out wedding invitations.

    Until then, the city had always loomed large in our suburban imaginations as the place where we would spend the best nights of our lives. We used to head into the city to see our favorite bands at the Electric Factory or the Theater of the Living Arts on South Street. It was where the best cheesesteaks were, and the Italian market, and the Flyers and Melrose Diner. It was the home of magic. But then going to “the city” meant dipping into a dangerous neighborhood for drugs — a different kind of home for a different kind of magic. We were slowly being blasted. It was on another night at Matt’s that my own sense of invincibility was finally shattered. After polishing off a bottle of vodka we took a drive to K & A for some more provisions. I parked the car while Matt walked up the block. He came back empty-handed, but with two cops in tow. They pulled up next to my Maxima, yanked us out, slapped handcuffs on our wrists, and searched my car. There was nothing to find, but one cop grabbed my red Verizon enV3 flip phone, turned to me and asked, “Who am I calling, Mom or Dad?” I thought for a second and then gulped, “Dad.”

    The cop left a voicemail on my father’s phone, gripped me up and spat, “Now go back to the suburbs and stick to smoking your fucking grass, white boy.” When I got home, my father was nothing but rage. He yelled so loud I can still remember the foundations of the house shaking. I try to imagine what the voicemail said: “Hey, we’ve got your loser son down here trying to buy narcotics in a neighborhood where people are shot in broad daylight. Where did you think he was, the mall?” When I reflect on that episode now, what is most shocking to me is the blatant and incontrovertible white privilege. Here we were, teenagers drinking and driving and looking for drugs, a menace to ourselves and to anyone who might encounter us, and my interaction with the police amounted not to a rap sheet or a bullet but to parental concern and an actual slap on the wrist.

    For me, the alarm had sounded. What on earth was I doing in North Philly or with people like Matt? I really harbored no desire to destroy myself. I really was hungry for life. Despair was never my affliction, so why was I acting as if it was? And so I stopped going to the city and cut out everything except pot and booze — a renunciation which, given the habits of most of my friends, was practically monastic. The fact that I had been scared straightish did not mean that anyone else was. The opposite was the case. Things were getting worse. Rehab stints at the local clinic, court-mandated or otherwise, became a rite of passage for hard partiers. This meant that Suboxone, a drug just as powerful as heroin that is used to wean one off it, entered an already bleak picture. One day after school I watched as Ethan and Curt split one tiny Suboxone pill, letting it overpower them to the point that they could barely walk or keep from vomiting. Hard drugs were no longer the realm of upperclassmen, either. When Curt’s parents went out of town, we threw a party at his place and were deeply unsettled to discover a fifteen-year-old freshman girl snorting lines of heroin in the upstairs bathroom. We were the moralists! It was an odd sensation for us to be clutching our pearls at the ripe old age of eighteen, but that episode shocked even us.

    My story is coming to its end. In the years after I graduated, the bill for a class of kids hooked on heroin came due. One of the first people with whom we ever smoked weed in eighth grade overdosed and died. So did the kid who used to sell it to us. Two of the most beloved girls in town, lifelong friends who grew up on the same block as each other, both overdosed and died. Danny overdosed a number of times; he was even found turning purple on the floor of a Rite Aid bathroom once, and against all odds he is now sober. (To this day his mother carries two forms of Narcan in her purse because you never know.) Diana, who was dating the drug dealer Pete, descended further into addiction, stole from friends, and fell off the map altogether. One day last year I received a frantic Facebook message from her mother, who was reaching out to Diana’s old school friends for any clues as to her whereabouts. She finally turned up a few months ago newly sober, and posted a long status on Facebook about how, at her lowest, she had picked up a meth addiction, weighed less than ninety pounds, and was hearing voices. Her ex-boyfriend Pete lost his little brother to dope. The list of the lost goes on. And not only of the young. Some of the parents were just as addicted as their children. My mom’s ex-boyfriend, who was like a stepfather to me during the years when I was in middle school, became an addict and is now dead. The man she dated when I was in eleventh grade ended up addicted to opiates. As for any judgment about the quality of anyone’s parenting: I have come to believe that no level of awareness about the danger could have prevented it. You can keep a close eye on your child, but when drugs are ubiquitous, when they are a central feature of social life, when the surrounding culture confers prestige upon them, the best you can do is cross your fingers and pray.

    A whole vocabulary has sprung up to convey the shared experience of addiction, a vernacular of the carnage. When I go home and visit with old friends, there is always a grim roll call conducted over beers. “When was the last time anyone heard from her?” “Oh, I heard she’s still really bad.” There is a lot of sorrowful shaking of heads. Another one I’ve heard often and with nonchalance: “So, guess who’s a dopehead nowadays?” Social media has become a surreal forum for this conversation, too. Facebook newsfeeds are so peppered with remembrances and R.I.P. posts that you might not even pause while scrolling past one. Many of them include poorly cropped angel wings or some variant of “Heaven just gained another angel,” a phrase so anodyne and overused I consider it Hatboro’s version of a Hallmark card. These are the clichés of social destruction. In the years since I graduated, heroin has been largely edged out by fentanyl, a synthetic opioid that is much easier to overdose on than your garden-variety dope. Meth, which was never around in my picaresque youth, has found a big market in the suburbs, too.

    The crisis is in your face everywhere you go. It is the driver next to you at a stoplight falling asleep at the wheel. It is the dopehead in line in front of you at the 7-Eleven or the grieving mother of one of your school chums standing behind you. Who should we turn to? God, perhaps; but look at His record. The government, perhaps; but look at its record.

    To confront the addiction of the despairing produces its own variety of despair. Along with some of my closest friends from back then, I marvel that we made it out when so many of our comrades did not. Melancholy permeates my town. And it is never really over. One of those friends recently became a cause for concern among our circle after he was fired for dipping out at work, just the way we did at house parties in eleventh grade. He is not returning anyone’s calls, and word is that he has stopped paying some of his debts. It beggars belief: opiates now, after everything we remember? But we are too sober to delude ourselves about what is possible in our town, and in other towns. We have seen this movie before.

    Note: The names in this essay have been changed out of respect for the privacy of its subjects.

    Steadying

    For some time now it has felt like history is itself the pandemic. In our country and elsewhere, it has been in overdrive, teeming with evils, flush with collapses, abounding in fear and rage, a wounding contest between the sense of an ending and the sense of a beginning, between inertia and momentum, with all the terribilities of ages of transition. What is going has not yet gone and what is coming has not yet come. We have become connoisseurs of convulsion. At sea is our new sea.

    For better and for worse, axioms and assumptions are dying everywhere around us. Such vertiginous hours always come with both clarities and confusions — there is no promise of illumination. The guidance we need in our circumstances will not be provided by the circumstances themselves: they are too many and too contradictory and too volatile; passion increasingly unconstrained and power increasingly unconstrained. As the sense of injustice grows, injustice seems to keep pace with it. There is a piercing sensation of flux, of uncontrollable effects and unmanageable consequences. The masks on our faces are emblems of an entire era of vulnerability. The most important thing, therefore, is that we keep our heads. A disequilibrium of history demands an equilibrium of the mind. Steadiness in the midst of turbulence is not complicity with the existing order. It is precisely in such binges of history that we must teach ourselves to sort through the true and the false, the good and the bad, the continuities and the discontinuities, the right statues and the wrong statues, the humane and the utopian.

    Everything will be different: this is a ubiquitous sentiment. In all our upheavals — social and epidemiological — so much seems to be wrong and so much seems to be slipping away that one may be forgiven for enjoying a fantasy of total change. All these horrors, all these outrages, all these marches, and the world stays the same? So the first thing that needs to be said in the effort to keep our heads is that everything never changes. More, the idea that everything will change usually plays into the hands of those who want nothing to change. The cycle of revolution and reaction has never been the most effective engine of progress. Nothing suits the interests of the old regime like utopianism. The thirst for change will not be slaked by the cheap whiskey of apocalyptic thinking. The only certain outcome of the apocalyptic temper is catharsis, and one way of describing the decline of our politics in recent decades is that it has increasingly become a politics of catharsis, in which crisis is met mainly by emotion. (Populism is just mass emotionalism, and the emotions are often ugly ones.) Apocalypse is not an analysis, it is the death of analysis. It sets the stage only for salvation, but salvation must never become a political goal. This is especially true in a democratic society, where the only saviors are, alas, ourselves.

    Thus it is that the struggle against injustice imposes upon us a paradoxical psychology: it demands both impatience and patience. Impatience about injustice, patience about justice. This is hard to do. It looks too much like, and in many cases it may well be, complacence. It is certainly difficult to preach incrementalism to the injured. So why not be impatient about justice, too? There are historical and practical reasons why not. History is stained by tales of instantaneous justice, by the consequences of the rush to perfection, by the victims of the victims. The ethical calculus of means and ends is never teleologically suspended, if just causes are to remain just. Nor is it a quantitative calculus: when I first studied the modern history of the Jews I drew a variety of conclusions from the Dreyfus affair, and one of them, which was an important moment in my moral education, was that Zola and his comrades appropriately threw an entire country into crisis for the sake of one man. Similarly, due process is not a legal formality, a procedural exercise that slows the way to a satisfying climax; it is the very honor of a liberal society.

    More concretely, the establishment of justice involves not only revisions in opinions but also revisions in institutions. A dreary point! But anyone who denies the institutional dimension, in all its exasperating machinery, is not serious about the change. Paroxysms, unlike laws, vanish. This was the year in which the campaign for racial justice found support in virtually all the sectors of American society, with the exception of the White House — an unprecedented national epiphany that cannot be dismissed as “performative,” because culture matters; but the road from protest to policy is long and winding. It is not a betrayal of the ideal of social justice to tread carefully and tenaciously, with a mastery of the scruples and the methods that would make a reform defensible and durable. Tenacity is what patience looks like in the middle of a struggle.

    I will give an example of the complicated nature of the mentality of change. One of the consequences of recent social movements in America — #MeToo (which came also to my door, with its lesson and its recklessness) and Black Lives Matter — has been to reveal how poorly we understand each other. Or more precisely, they have exposed the extent to which the failure to understand others may be owed to the failure to understand oneself — the limitations of one’s own standpoint, the comfortable assumption that one appears to others as one wishes to appear to them, or to oneself. This is nonsense, though sometimes you learn this the hard way. There are limits to our epistemological jurisdiction. The failure to observe these limits is solipsism, and we all begin as solipsists, awaiting correction by social experience.

    Our epistemological jurisdiction stops at the encounter with another person. She is another epistemological kingdom, not more perfect but certainly different, with something important to add, and a perceptual contribution to make. I may like to think that I am what I present myself to be, but I am also what she sees me to be, because she sees me as I cannot, or will not, see myself. I am never in control of my self-representation and never complete in my self-awareness. We always show more of ourselves than we think we do, which is why we may learn from the responses of others. We spill beyond our intentions and our conceits, and what we gain from this overflow is criticism.

    But criticism, too, must be assessed critically — there is no exemption. The enlightenment that one acquires from the judgments of others is owed only to their accuracy. It is certainly not warranted by the belief that a person’s identity or socio-economic position or experience of hardship confers an absolute authority, a special relationship to truth, a vatic privilege. What a simple world it would be if pain were a sufficient guarantee of credibility. But it is not — indeed, the opposite is the case: pain is myopic and sees chiefly itself, which is one of the reasons it hurts. Finally we are all left with the modesty of our grasp. No whole classes of people are right and no whole classes of people are wrong.

    The ineradicability of ambiguity from human relations, the ignorance of ourselves that accompanies our ignorance of others, the whole fallible heap, creates an urgent need for tolerance and, more strenuously, for forgiveness. Historians will record that in the early decades of the twenty-first century we became an unforgiving society, a society of furies, a society in search of guilt and shame, a society of sanctimonies and “struggle sessions” American-style. They will admire our awakening to prejudice but lament the sometimes prejudicial ways in which we acted on our progressive realizations. In this respect America should become more Christian. (There, I said it.) For all our elaborate culture of self-knowledge, for all the hectoring articulateness of our identity vocabularies, we are still, each of us, our own blind spots. We should welcome every person we meet as a small blow against blindness.

    The partiality of perspective: this is the great teaching of the contemporary tumult. The problem is that we have not only begun to acknowledge our partiality, and the partiality of others, we have also begun to revere it, and this is a mistake. We are gagging on all our roots. If pain does not provide access to truth, neither does particularity. The worship of particularism is one of the great impediments to social justice, and in its exhilarating way it coarsens us all. In our moral and social thinking, our obsession with otherness has concealed that the foundation of moral and social action is sameness. The “other” is exotic, but there is nothing exotic about the homeless man on the street: he is the same as me, a human being, except that he is hungry and I am not. The difference in our circumstances is not a difference in our definition. When I hand him a few dollars I am not extending myself toward an alien being; I am practicing species solidarity. I am not discovering his humanity; I am responding to it. I am acting, in other words, universally, and none of the social problems that afflict us will be solved unless we recover the universalist standpoint that sees beyond the visible divisions, and is not trapped in, or enraptured by, the specificities of our tribes. Pluralism secures the right to turn inward, but it also broaches the duty to turn outward. By surrounding us with other partialities it legitimates our own partiality, but it also reveals that there is more to the world than what is merely ours.

    A great deal has been written in recent years about the discovery of our commonplace biases and the techniques for overcoming them. Much of this literature is psychological, but some of it is political, and its aim is to confine us proudly within our limits and call them wonderful. In the name of authenticity, we are instructed that the partiality of our perspective is all we will ever have, and that the aspiration to impartiality is an aspiration to power, or a justification of power. Every view is a view from somewhere. Nobody escapes his or her position. We are all marooned in our respective glories. Objectivity, according to this advanced opinion, is an epistemological plot of the elites.

    This inculcates a kind of localist arrogance that is fully the match of the globalist kind. Such “perspectivism” was one of Nietzsche’s lasting provocations, and in American philosophy it was ringingly championed by Richard Rorty, who was the only man I have ever known to use the word “ethnocentrism” positively. He denounced objectivity in favor of solidarity, and his children are everywhere, in all the movements; and a similar war on truth flourishes, for less sophisticated reasons, also in the offices of prime ministers and presidents. The outlook for intelligence, as Paul Valéry used to say in an earlier era of confusion and peril, is not heartening. Truth in America is a refugee, an undocumented immigrant. Philosophers and political operatives have joined together to proclaim the fictive nature of fact. About this there is no “polarization.” It is not only policy over which we differ: we differ also over the description of reality. (And even if science is not all we need to know, is there any plainer measure of stupidity than the mockery of science?)

    All these communitarianisms of the mind are absurd. If all one can express with one’s beliefs is solidarity with one’s community, then how is it possible to disagree with one’s community, and what is the origin of dissent? If it is impossible for people of different backgrounds, or classes, or races, or genders, to understand each other, why are they disappointed or angry when they are not understood? If people who are white or male or rich cannot claim to comprehend people who are black or female or poor, how can people who are black or female or poor claim to comprehend people who are white or male or rich? Of course the world does not work this way, according to this Empedoclean epistemology, for which like can only know like. The startling reality – it is one of the tremendous features of human existence – is that, within societies and among societies, across nations and cultures, we manage to be intelligible to one another. If you don’t get it, you can get it. As a strategy for thwarting human communication, Babel was a bust.

    This everyday mental commerce, this regular passage through these permeable frontiers, sometimes needs the assistance of translation, and always needs the assistance of imagination, but it proves that the inherited perspectives may be enlarged and that the despair of a greater commonality is a self-inflicted wound. Perfect objectivity may never be attained, but that is no excuse to act like merry peasants. “Positional objectivity,” as Amartya Sen has described the only plausible mitigation of our parochialism, will get us very far. Moreover, chafing against one’s limits is a condition of ethical sensitivity: if I were to be content with what my own life has taught me, I could not recognize sufferings which I have not lived and against which I have a responsibility to act. All that I need to know I cannot learn in my town, even if I can learn a great deal there. We have moral obligations in unfamiliar situations.

    I am not a woman and so I must imagine rape. I am not a black man and so I must imagine chokeholds. I am not a Syrian and so I must imagine that charnel house. I am not a Uighur and so I must imagine those camps. (But I am a Jew and so I expect others to extend the same imaginative respect to the fate of my people.) If victims were the only ones who understood oppression, who would help them? Often they insist that they must help themselves, which is correct, and evidence of their irreducible dignity, but there are limits to what they can do, and their “auto-emancipation” does not absolve the rest of us from the work of their emancipation. This work involves shaking ourselves loose from the mental dullness that is the product of our distance. As Judith Shklar once observed, “it will always be easier to see misfortune rather than injustice in the afflictions of others.”

    Objectivity, in other words, is the sturdiest ground of justice, and the despisers of objectivity are playing with fire. Feelings are a reedy basis for reform. After all, the other side also has feelings — which is how we wound up with the revolting solipsist in the Oval Office. In a democratic society, reform comes about by means of persuasion, and the feelings of others may not do the trick. I may not feel what you feel. I will not be convinced that you are right by the fervor of your feeling that you are right. I need reasons to agree with you, that is, appeals to principles, to rational accounts of preferences, to terms and values larger than each of us which, unlike feelings, we may share.

    Without objectivity, without the practice of detachment that makes genuine deliberation possible, without tearing ourselves away from ourselves, justice in our society will mean only what the majority, or the crowd, or the media (all of them fickle) want it to mean. We will gag on our roots. We will continue to despise each other, some scorning the weak and others scorning the strong. Our system of disagreement will continue to be degraded into a system of umbrage, in which a dissenting opinion may be dismissed as “tone-deaf.” Empathy, where it exists, will be remorselessly selective and most often reserved for one’s own kind. (Down with himpathy! Up with herpathy!) We will remain stalled in our excitability. But none of the questions that we are asking as a society can be answered with a scream or a scowl.

    Some of what I have written here will please progressives. Some of it will please conservatives. I call it liberalism.

    “When the facts change, I change my mind. What do you do, sir?” Legend attributes that swaggering pronouncement to Keynes, and it has become the canonical formulation of the anti-dogmatic mentality, the credo of the open and empirical mind. It has always irritated me, and not because I have a complaint about the admiration for factuality. These days the facts are the front lines in the battle for reason in America. The power of the state has been pitted against them.

    Keynes was an economist, and I have no doubt that the relation that he posits between facts and opinions is entirely appropriate for purposes of administration – say, setting an interest rate. As conditions change, policies must be adjusted. Only a fool would think otherwise. If you are not fascinated by the question of what works, stay away from government. (Or join up, because these days nothing gets done.) Practicality is always reactive; its timeline is short. Pragmatism waits on the news. There is even a current in modern American thought for which democracy is itself an exercise in unceasing pragmatism, in trial and error unto the generations. Its definitive statement can be found in the conclusion to Holmes’ renowned dissent in Abrams in 1919. Immediately following his famous observation that “the best test of truth is the power of the thought to get itself accepted in the competition of the market,” which was an important moment in the infiltration of the non-economic spheres of American life by the vocabulary of economics, Holmes went on to declare about the Constitution that “it is an experiment, as life is an experiment.” Whatever the merits of such a philosophy of existence, the sense of the provisional championed by Holmes is admirable for the mental patience that it imparts, and for its revulsion from absolutism.

    Yet Keynes’ statement seems to be reaching for more than a merely managerial responsiveness. It appears to be making a more general claim about the dependence of beliefs on facts. There are many kinds of belief, of course. But there are some kinds of belief that do not originate in the facts, that are not hostage to changes in the facts, that exist prior to the facts and provide the framework within which the facts are understood and assessed. I cannot agree that moral opinions and philosophical opinions, if indeed Keynes had such opinions in mind when he made his remark, require such a tight association with fact. Even the belief that beliefs must be based in facts cannot be based on facts. There are views I hold about right and wrong, about the individual and the group, about ethical obligation, about the duties and the limits of power, about the nature of truth, about the nature of beauty, and about spiritual meanings that will not be revised by the morning paper, whatever it brings. Before tomorrow’s bad news, I already know that the world is an unkind place and that there are a variety of ways to interpret its cruelty, and I have, to the best of my abilities, in ways that I can explain, already chosen an interpretation.

    It is possible, over time and by means of careful reflection, taking your experience into account but not only your experience, to arrive at a view of life, a worldview, and to hold it continuously, through thick and thin, regardless of who the president is, without embarrassment at the steadfastness with which you maintain it, so long as you give reasons and present them for critical examination. There is no shame in intellectual constancy. It is nothing like dogmatism, if it is thoughtful. And the caprices of external events, even when they are cataclysmic, need not throw one into philosophical crisis. Especially in times of cataclysm, one should aspire to what Rebecca West called “an unsurpriseable mind.”

    I remember a conference, not long after the earthquake of 2016, where I was holding forth on the characteristics of populism. When it came time for questions, an acquaintance of mine, a fiendishly intelligent woman with a saturnine look on her face, a distinguished international civil servant, raised her hand. “After what just happened,” she asked, “how should we revise our views?” It was not the first time that I heard this question in the aftermath of the Trump ascendancy. I disliked the question. It represented a fundamental misunderstanding about the formation of belief. We should not revise our views, I replied. The election did not prove that our views are wrong. It proved only that our views are unpopular. (And the well-named popular vote did not prove even that.) All that a poll can establish is the popularity of a belief, its distribution across a population. It has no bearing whatever upon its substance. What we believe may be wrong, but not because many people disagree with us. This is precisely the problem with Holmes’ idea of verification, with his contention that truth will be established in the competition in the market: success in the market has nothing to do with truth. The interminable history of human illusion shows that the “marketplace of ideas” is like every other marketplace. It reflects only appetites and interests; it is easily manipulated; it is quantitative.

    I may have been a little sharp in my reply to the questioner. My disrespect for her notion of intellectual flexibility must have showed. Politicians, of course, must evaluate ideas politically, but this was not an exchange about politicians. A losing side may need to revise its tactics, but beliefs are not tactics. There is nothing illegitimate or disqualifying about a minority position. A democracy, indeed, should be judged by how it treats its minorities, not least its intellectual minorities. There is honor in minority life. There is honor also in defeat, if one stands for something more than victory. If you stand for principle and you lose, you are equipped to fight again. Sometimes there is good company in the wilderness. In wondering whether defeat should inspire second thoughts about first things, my rattled interlocutor was skirting the problem known as the tyranny of the majority, which was long ago identified as one of the supreme abuses of democracy. When I assured her that the results of the election did not constitute a refutation of her views, I did not mean to lull her into a feeling of righteousness about what she – and I – believed. I wished only to draw a line between disappointment and crippling doubt.

    Here is what I do, sir. When the facts change, I interpret the facts according to the methods and the assumptions in which I have the most intellectual confidence. If I can vouch for the integrity of those methods and assumptions, which in my case are liberal methods and assumptions, I will be reluctant to give them up – especially in a dizzying world, where the people with moorings will be better able to explain and to lead. I recognize that moorings come in many forms – evil, too, comes with intellectual frameworks; but those frameworks will be most effectively challenged and repudiated by those who have a different one of their own. As for the facts, I am all for them; but I am not sure they can do all the work that needs to be done. Will bigotry be vanquished by data? A hatred cannot be dispelled for being non-factual. Sooner or later we have to engage at the level of moral and philosophical principle. We must make ourselves competent in kinds of discourse that are not only empirical. We must not forget how to believe.

    This journal begins its life in a time of breakdown and bewilderment, of arousal and expectancy. It is called Liberties because of all the splendid echoes of the word – liberty, liberal, liberate, liberality, even libertarian, even libertine. (The question of the place of pleasure in human life is one of the fundamental questions.) It is both a grave word and a joyous word. The plural is a tribute to the plurality of freedoms that we enjoy as a matter of right, and also to the plurality of freedoms that the citizens of a growing number of countries are being ruthlessly denied. Above all, it is meant to announce that, in this universe of fascists and commissars, the objective of these pages will be, by argument and by example, in politics and in culture, the rehabilitation of liberalism.

    The slander of liberalism is one of the spectacular idiocies of our age. The errors and the failures of the liberal order, at home and abroad, need to be acknowledged, but they do not need to be exaggerated. The pride of liberals deserves to be much greater than their guilt. A glance at history abundantly demonstrates this, as the issues of this journal will explain. But the historical events that provoked the social, economic, and moral achievements of the liberal order have receded in time, and the experience of time itself has been accelerated, so that historical memory can no longer be relied upon for the work of explanation and nothing is obvious anymore. The work of explanation, guided by reason and humaneness and the study of the past, needs to start again. There is nothing nostalgic about such a project. The restoration of liberal ideas and practices – a social equality based not on venerations of identity but on universal principles; an economic equality based not on a delusion of dirigisme but upon a rigorous regulation of capitalism; a faith in government as one of the great creations of human civilization and the protector of the weak against the strong; an affirmation of American power in the world because of the good that American power can do in the world – is entirely forward-looking. To curse liberalism is to curse the future.

    It is no longer trite or tautological to say that a democracy is a place that behaves democratically. Within our democracy, and within other democracies, there are many leaders and movements who behave undemocratically or anti-democratically – who view democracy expediently, as an instrument for the acquisition of power and nothing more. For this reason, the philosophical grounds and political benefits of democracy also need to be re-clarified. In 1938, on a lecture tour of the United States, Thomas Mann observed to his American audiences that democracy “should put aside the habit of taking itself for granted, of self-forgetfulness. It should use this wholly unexpected situation – the fact, namely, that it has again become problematical – to renew and rejuvenate itself by again becoming aware of itself.” He was speaking, of course, with the ruefulness of his German experience. Our situation is not as bleak and bitter, but an authoritarian temper is flourishing in our midst too, in the West Wing and the streets and the media and the platforms. We, too, have become self-forgetful. “No,” Mann told the crowds from coast to coast, “America needs no instruction in the things that concern democracy…Europe has had much to learn from America as to the nature of democracy. It was your American statesmen and poets such as Lincoln and Whitman who proclaimed to the world democratic thought and feeling, and the democratic way of life, in imperishable words.” It is bruising to read those sentences. We no longer offer such instruction to the world, or even care about the condition of freedom beyond our own borders.

    The question of how to live is more than the question of how to vote. The liberal idea was never just a political idea. It is, more generally, a grand belief in human capacity, and in the obligation – exclusive to no group and no tradition – to cultivate it. When Henry James wrote about “the liberal heart”, he meant a large heart, a generous heart, a receptive heart, an expansive heart, an unconforming heart, a heart animated by a wide variety of human expressions. Such an ideal of heartfulness pertains not only to politics but also to culture. The war against callousness cannot be won without the resources of culture. There is no more lasting education in human sympathy than an exposure to literature and the arts.

    The dwindling position of the humanities in American society is one of its most catastrophic developments. This journal, an independent journal, will take a side in this struggle. It will champion sensibility as well as controversy, and attend to culture with the same ardor with which it attends to politics. But it will refrain from aligning cultural criticism with political criticism, in grateful awareness of the multiplicity of the realms in which we lead our lives, and in awareness also of the insidious history of the synchronization of culture with politics. Pardon the counter-revolutionary thinking, but culture must never become politics by other means. Of course this is precisely what culture is becoming, thanks not least to the zealous synchronizers at the New York Times. (And at The New Yorker, which is what PM would have been if it had the money.) The autonomy of art threatens nobody and enriches everybody. The social and political origins of artists vitiate the freedom of art about as much as the social and political origins of thinkers vitiate the freedom of thought. When art is weaponized, it is compromised. Racial justice does not require the racialization of all things. And culture harbors no dream of consensus. An aversion to controversy is an aversion to culture, just as it is an aversion to democracy.

    Not least because it will appear only four times a year, this journal will not be in the business of rapid response to the emergencies and the imbecilities with which we are currently inundated. We will crusade, but slowly. There is a deeper reason for this counter-cultural pace. It is that the investigation into bigger ideas and larger causes takes time. If the sorting out of our intellectual pandemonium should not be conceived under the aspect of eternity, neither should it be conceived under the aspect of the news cycle. American journalists have brilliantly responded to an assault on their integrity and their legitimacy with a golden age of investigative journalism, but they cannot be expected to do more: the exposure of lies in a regime of untruth is as exhausting as it is essential. (How many synonyms are there for “madman”?) So in these pages we will be indifferent to the chyrons. There will be no quick takes and immediate reactions and emotional outbursts, nothing driven by velocity or by brevity. At this journal we are betting on what used to be called the common reader, who would rather reflect than belong and asks of our intellectual life more than a choice between orthodoxies. We are not persuaded that it is a losing bet. With a melancholy sense of the fragility of what we cherish, and with a bestirring sense of how much injustice there is in the country and the world, we wish to bring an old intellectual calling into a new era and see what together we can learn. Nothing quickens the mind like hope.

    Plagues

    Consider the plague. I mean the actual, literal, bubonic plague, the disease caused by the bacterium Yersinia pestis. In this pestilential season the subject has been impossible to avoid, because so many people are calling coronavirus “plague” — even though, as pandemics go, they have almost nothing in common. Plague has an astonishingly high fatality rate — between 50% and 80% of its victims die — but is rarely transmitted directly from person to person, traveling instead through the bites of infected fleas. Covid-19, by contrast, is much more contagious but significantly less fatal. And there are other distinctions. While the plague comes with painful, swollen tumors, running sores, and putrid secretions, coronavirus leaves no visible marks on the body. Most victims will survive it. Some might never even know they had it.

    There has also been plenty of talk about Ebola and AIDS and influenza and what all of them have to tell us about the present crisis. (I have no intention of interpreting the present crisis.) But plague has retained a special hold on the imagination. To Thomas Dekker, the Elizabethan hack pamphleteer, it was simply “the sicknesse,” a disease with “a Preheminence above all others…none being able to match it for Violence, Strength, Incertainty, Suttlety, Catching, Universality, and Desolation.” The Black Death is still the most deadly pandemic in recorded history. At its height, between 1348 and 1351, the disease may have killed half the population of Eurasia. It has only two close rivals for sheer mortality: the Spanish influenza of 1918-1919 and the smallpox pandemic brought to the Americas by Europeans after 1492. Both events caused untold human suffering, but neither left behind the same long history of written records. That was because the plague kept coming back. Its periodic recurrences swept through Europe with devastating regularity until the 1770s, and continued to ravage the Ottoman Empire into the 1850s. For almost five centuries, it was not unusual for cities to lose a quarter of their population in a year.

    So when Asiatic cholera spread to Europe in the 1830s, a century after the last plague outbreak, it was swiftly termed “the new plague.” Newspapers from 1918 proclaimed that influenza was “just like a plague of olden times.” Yellow Fever was called “the American plague” when it struck Philadelphia in 1793, and early coverage of AIDS in the 1980s demonized its victims by calling it “the gay plague.” Like coronavirus, none of these diseases are particularly similar to bubonic plague. They have different symptoms, causes, biological agents, and epidemiologies. What they share is a particular social profile: all are epidemic diseases of unusual suddenness and severity. They take populations by surprise. Cholera was the most feared disease of the nineteenth century, not the more deadly and more familiar tuberculosis. Endemic childhood illnesses killed more people than the plague before the invention of vaccination, but they did not inspire nearly the same terror. Fear of plague is not just about death or pain: more fundamentally, it is the fear of not knowing what comes next.

    Unsurprisingly, plague literature is currently having a moment. Publishers have announced a flood of upcoming books about the coronavirus experience. Recent months have seen rising sales of everything from Boccaccio’s Decameron to Dean Koontz’s The Eyes of Darkness (a novel about a fictional bioweapon called the Wuhan-400 virus). Camus’ The Plague is a best-seller in Italy and Korea; Penguin is currently issuing a reprint. For a couple of days in March, Defoe’s A Journal of the Plague Year was actually sold out on Amazon.

    Defoe might not be the best-selling plague author of the moment (though it’s close), but he has almost certainly been the most reviewed. After all, the Journal is the original plague novel, and arguably the only genuine historical narrative of the lot. By reading Defoe, we can tell ourselves a story about what really happened in 1665, when the Great Plague swept through London — and by extension, what has really happened to us now. In just a few days I read that it “speaks clearly to our time,” offers “some useful perspective on our current crisis,” and gives an “eerie play-by-play” of recent events. And at times, reading the Journal really did give me an uncomfortable sense of familiarity. Vague rumors of the plague reach London. The threat is discussed, then dismissed. The government waffles. Deaths start to mount through the winter of 1664 and the spring of 1665. By the time quarantines are established, schools closed, and public events banned, it’s too late to prevent the worst. There is flight, uncertainty, panic, and lots of hoarding. Grocery shopping is perilous — careful vendors make sure never to touch their customers and keep jars of vinegar on hand to sanitize coins. Quack doctors peddle toxic “cures” and citizens obsess over mortality statistics. Everyone is constantly terrified and also somehow really bored.

    And then there is the famous ending:
    A dreadful plague in London was
    In the year sixty-five,
    Which swept an hundred thousand souls
    Away; yet I alive!

    I suspect that this is the true appeal of plague literature: the narrator always survives to tell the story. The glimpses of the present that we find in Defoe or Camus or Manzoni have a kind of talismanic effect, somewhere between a mirror and a security blanket. The more similarities we find — and judging by the current spate of writing about plague literature, there are always a great many “striking parallels” — the easier it is to tell ourselves that things will play out the same way. This, too, shall pass. My copy of the Journal is only 192 pages long and at the end of it the outbreak is over.

    There is nothing wrong with seeking this kind of comfort, but it does make me wonder: what is hiding behind the reassuring promise of human universals? If you read a lot of plague novels, you will notice that they tend to hit similar beats. The threat is dismissed, things get worse, quarantines are imposed, city-dwellers flee, the rule of law breaks down, we learn a very valuable lesson about man’s inhumanity to man and emerge on the other side not unscathed but wiser. Another advantage of fiction over reality is that everything occurs for a reason. Epidemics create a natural backdrop for extreme heroism or extreme selfishness. The disease itself, an inhuman killer that turns fellow-survivors into existential threats, naturally lends itself to allegorical interpretation. Plague is a divine punishment (Defoe) or a parable for totalitarianism (Camus). If we expand the genre a little, it is the inevitability of mortality (Edgar Allan Poe), a device to pare civilization down to stark moral binaries (Stephen King), or whatever it is Thomas Mann is doing in The Magic Mountain — it is anything at all, that is, except a real disease. By treating fiction as a window into the past, we substitute a particular author’s attempt to make meaning out of meaninglessness for the full, complicated, messy range of responses which every outbreak has inspired.

    A Journal of the Plague Year is a particularly strong object lesson in the creative and purposeful appropriation of history. Defoe was five years old in 1665, too young to remember the epidemic in much detail. He wrote the book almost sixty years later, in response to an outbreak of the plague in Marseilles. Then as now, it was a good time for plague writing: 50,000 of the city’s 90,000 inhabitants had perished, and fears were high that the disease would cross the Channel. Parliament issued new quarantine laws. Public fasts were proclaimed. The book was an instant success. Defoe paints a truly apocalyptic picture of London in the grip of the worst outbreak in its history: mass hysteria, corpses rotting in the streets, infants smuggled out of infected houses by desperate parents, the agonized screams of the dying in an unnaturally quiet city. Above it all, there is the omnipresent fear that an incidental touch or stray breath from a seemingly healthy person could spread the contagion.

    Critics have spent the better part of the past three hundred years debating just how accurate this portrait really is. Defoe liked to mix fact and fiction. Just four years earlier, he had published Robinson Crusoe as an authentic travelogue (it sold thousands of copies). The Journal also purports to be a factual account, “written by a Citizen who Continued All the While in London.” When the book was published in 1722, the great plague was still within living memory, and Defoe’s account rang true enough that his contemporaries largely accepted it as fact. His pseudonymous narrator, H.F., freely cites real mortality statistics, veiled or overt references to historical figures, and anecdotes found in genuine accounts of the plague year. Few scholars would go as far as his most peevish defender, Watson Nicholson, who asserted in 1919 that “there is not one single statement in the Journal, pertinent to the history of the Great Plague in London, that has not been verified” — but there is no denying that Defoe did his research.

    At the same time, Defoe’s concerns in the novel have at least as much to do with the present as the past. In the first place, horror sells. Defoe, who ghost-wrote the memoirs of a notorious thief to sell at his execution, was well aware of the commercial value of ghoulishness. He also had definite opinions about public health legislation. Defoe was a vocal advocate of the government’s new and highly unpopular maritime quarantine laws, which included an embargo on trade with plague-stricken countries. In the Journal, he portrays the similar restrictions put in place in 1665 as necessary life-saving measures. True, he acknowledges, they are costly and inconvenient — but that hardly seems relevant in the face of his catastrophic account of the alternative.

    While in favor of maritime quarantine, Defoe was one of a growing number of critics in the seventeenth and early eighteenth centuries who opposed the practice of imprisoning whole families in their homes at the first sign of infection. Some of the book’s most bone-chilling anecdotes are devoted to this “cruel and Unchristian” practice, which increased death tolls, he argued, by shutting up the healthy with the sick, and was in any case ineffective, since the plague was most contagious before its symptoms were evident. (Notably, household quarantine was not one of the provisions adopted in the controversial Quarantine Act of 1721. Here, too, H.F.’s recommendations for containing the disease support the tottering Whig government.)

    Defoe’s account of London in 1665 reflects the particular political conditions of London in 1722, but it also draws on a much older tradition of English Protestant plague writing.

    By 1665, plague was a very familiar occurrence. “It was a Received Notion amongst the Common People that the Plague visited England once in Twenty Years, as if after a certain Interval, by some inevitable Necessity it must return again,” wrote Nathaniel Hodges, one of the few physicians to remain in London during the Great Plague. In fact, its recurrences were even more frequent: an elderly Londoner in 1665 would have witnessed seven plague outbreaks in his or her lifetime, and only one interval of more than two decades without a visitation.

    The plague inspired unequaled terror, accompanied by intense religious fervor. Since it was universally accepted that the disease was a manifestation of divine vengeance, plagues made for powerful rhetorical tools in sectarian disputes. Under Queen Mary, plague was the consequence of Protestantism; when Queen Elizabeth restored the Anglican church, it was blamed on Catholics. Nonconformists were especially well-placed to take advantage of the revivals which nearly always accompanied outbreaks. Thomas Vincent, a Puritan minister who continued to preach in London through the worst months of 1665, noted that his sermons had never been so well-attended: “If you ever saw a drowning man catch at a rope, you may guess how eagerly many people did catch at the Word, when they were ready to be overwhelmed.” It didn’t hurt that Puritanism stressed emotional piety with an emphasis on sin, punishment, and predestination — all popular themes during outbreaks of a horrific disease that seemed to strike at the virtuous and the wicked indiscriminately.

    For Anglican and Nonconformist ministers alike, the plague was an opportunity to frighten a very receptive audience back into God’s good graces. Their grotesque eyewitness accounts and graphic descriptions of the suffering of plague victims warned readers of the consequences if they failed to repent. Defoe was raised a Calvinist and once intended to pursue a career as a minister. His stock of metaphors, anecdotes, and moral tales recalls the preachers and pamphleteers of earlier outbreaks. Like them, the Journal features lengthy excurses on the plight of the poor, the corruption and hypocrisy of the court, the benefits of piety and charity, and the grisly details of what buboes really look like up close. Defoe waxes especially poetic on the stench they emit while being lanced.

    The authors of these materials were quite willing to exaggerate certain details in the interest of leading their readers to religion. In reality, the Great Plague subsided gradually, with deaths returning to pre-plague levels by February 1666. Defoe, in one of his few outright falsehoods, has the plague end abruptly: “In the middle of their distress, when the condition of the city of London was so truly calamitous, just then it pleased God … to disarm this enemy.” This sudden reprieve cannot be attributed to medicine, public health, or anything but “the secret invisible hand of Him that had at first sent this disease as a judgement upon us.” This is where Defoe drops the pretense that he is writing a history book. His words are a warning to the reader: beware. Quarantine laws are all to the good, but if you do not repent, nothing on earth can save you.

    The Great Plague provoked just as much apocalyptic preaching as any other outbreak, but intense religiosity was not the only or even the dominant response. Indeed, the biggest difference between Defoe’s Journal and the diaries of actual plague survivors is how much less the plague features in them. When we consider the scope of the disaster — 100,000 dead, large-scale quarantines, the total cessation of public life — it is hard to imagine how anyone who lived through it could think about anything else. Remarkably, they could and they did. “It is true we have gone through great melancholy because of the great plague,” wrote Samuel Pepys, the least inhibited diarist in seventeenth-century England, but “I have never lived so merrily (besides that I never got so much) as I have done this plague time.”

    The Journal picks up in September 1664, with the first rumors of an attack of the plague in Holland. Pepys doesn’t mention the plague at all until the end of April 1665, and then drops the subject entirely for another month. By summer, the traditional peak of the plague season, the epidemic had grown impossible to ignore. John Evelyn, another diarist, first brings up the plague in his entry for July 16: “There died of the plague in London this week 1,100; and in the week following, above 2,000. Two houses were shut up in our parish.” Both men shared Defoe’s interest in mortality statistics. The numbers punctuate Evelyn’s diary for the next few months: “Died this week in London, 4,000.” “There perished this week 5,000.” “Came home, there perishing near 10,000 poor creatures weekly.” But between them, life goes on. Evelyn goes about his business as a commissioner for the care of sick and wounded sailors and prisoners of war. (Unsurprisingly, he is very busy.) He pays social calls. His wife gives birth to a daughter. The plague clearly weighed on his mind, but Evelyn treats it matter-of-factly. The disease is frightening, inconvenient, and a nuisance at work, but it is not the end of the world.

    Throughout the months of August and September, Pepys manages to fit a regular diet of plague-related anxiety in and around more important topics such as food, sex, and earning large quantities of money. He worked as a naval administrator, and the Anglo-Dutch war provided good opportunities for business. In his diary, Pepys is equally assiduous in recording plague mortality, monetary gains, and the “very many fine journys, entertainments and great company” which he consistently manages to provide for himself. The frequent, intense, and jarring juxtaposition of life and death makes for a bizarre reading experience. In a typical entry, Pepys enjoys a venison pasty with some business associates, complains of a mild cold, spends a pleasant evening with his family, and remarks that fatalities have jumped by almost 2,000, bringing this week’s total to 6,000 — though the true number is probably higher.

    It’s not that Pepys is insensitive to the suffering around him — in fact, he seems keenly aware of it. He records his grief at the deaths of friends and servants, his own fears, the dismal mood in the city. At the same time, he seems to possess a preternatural ability to experience everything fully, from existential dread to a particularly good breakfast. For him, the greatest disaster in living memory is just another part of life. In his entry for September 3, which I can’t help but quote at length, Pepys describes his morning toilette: “Up; and put on my coloured silk suit very fine, and my new periwigg, bought a good while since, but durst not wear, because the plague was in Westminster when I bought it; and it is a wonder what will be the fashion after the plague is done, as to periwiggs, for nobody will dare to buy any haire, for fear of the infection, that it had been cut off of the heads of people dead of the plague.” What indeed will the plague do to periwiggs? The question is so delightfully specific. Nobody but a fashion-conscious seventeenth-century Londoner could possibly think to ask it. In its concreteness it sticks in my mind more than any given passage in Defoe, or any observation about the universal effects of epidemics. Here the disease is human-scale, an event in a particular place and a particular time, a cause of small vanities as well as mass tragedies.

    The specific has more sticking power than the general — which is another reason we look to Defoe. The Great Plague of London seems so familiar to modern readers not because there is some fundamental human response to outbreaks of infectious diseases, but because the reactions it inspired were so different from the medieval outbreaks that came before it. Everything from enforced isolation to widespread fear of infection to attempts to understand the plague’s progress was relatively new. The practice of quarantine emerged in northern Italian city-states in the aftermath of the Black Death, along with systematic methods of state surveillance, recorded death tallies, and dedicated plague hospitals. This apparatus of plague regulation diffused gradually throughout Europe. By the turn of the seventeenth century, England had official mortality statistics and punitive sanctions to enforce home quarantine.

    The outbreak of 1665 marked another transition. Rather than an unpredictable act of providence, the plague became a predictable act of providence: while still a manifestation of divine punishment, it was carried out through natural means and could be discussed in detached and objective terms. (This development also began in Italy, but there is no great English-language novel of the plague in seventeenth-century Milan.) The Great Plague was the first outbreak in which the discourse of naturalism prevailed, and medical treatises on plague outnumbered religious ones. This medical literature included recipe books of cures and prophylactics, lengthy volumes on the nature of the disease, and theoretical debates carried out in pamphlets and broadsides. While medical writers all acknowledged God as the “first cause” of the epidemic, they established a clear separation between religious and naturalistic inquiry.

    It is tempting for the modern reader, looking back on the past with the benefit of hindsight and germ theory, to treat religious etiologies of plague as a response to a lack of available medical explanations. In fact, early modern Londoners had no shortage of naturalistic causes to choose from. A list by Gideon Harvey, a Dutch-born and Cambridge-educated member of the Royal College of Physicians, includes “great Inundations, Stinks of Rivers, unburied Carcases, Mortality of Cattel, Withering of Trees, Extinction of Plants, an extraordinary multiplication of Froggs, Toads, Mice, Flies, or other Insects and Reptils, a moist and moderate Winter, a warm and moist Spring and Summer, fiery Meteors, as falling Stars, Comets, fiery Pillars, Lightnings, &c. A ready putrefaction of Meats, speedy Moulding of Bread, briefness of the Small Pox and Measles, &c.” Other proposed sources of the plague included rotten mutton, imported carpets, and a particular dog in Amsterdam. 

    William Boghurst, an apothecary who remained in London during the plague, took a cynical view of these lengthy traditional lists: “because they would bee sure to hitt the nayle, they have named all the likely occasions they could think of.” Noticing that most of the commonly listed causes related to dirt or rot, he traced the origin of the plague to corrupt particles lurking in the earth. Like many others, his theory combined the two dominant explanatory frameworks for disease in Early Modern Europe. The classical explanation, derived from the Greek physician Galen, connected plagues and other infectious diseases to miasma, or poisonous effusions from rotting organic matter. The more modern contagionist view held that the plague could be transferred invisibly from person to person. Boghurst believed that outbreaks began when miasmas rose from disturbed earth, and quickly spread through contagion. In a similar vein, Harvey wrote that “the Plague is a most Malignant and Contagious Feaver, caused through Pestilential Miasms.”

    The fear of contagion drove Londoners to measures that even Boghurst considered excessive. He complained of the extreme lengths to which his patients would go to avoid even incidental contact: “for example, what care was taken about letters. Some would sift them in a sieve, some wash them first in water and then dry them at the fire, some air them at the top of a house, or an hedge, or pole, two or three days before they opened them … some would not receive them but on a long pole.” He was right — though he had no way of knowing it — that the plague bacterium does not live for very long on paper. But frightened citizens were eager to implement the mass of medical knowledge suddenly made available to them.

    As we have seen, this enthusiasm for information had a statistical bent. The city of London started to publish weekly bills of mortality during the outbreak of 1592. During times of plague, Londoners enthusiastically read, reprinted, and circulated the bills, which they used to track the progress of the disease from parish to parish. In 1662, John Graunt published the first statistical analysis of the data in his Natural and Political Observations Made upon the Bills of Mortality. Graunt argued that the number of deaths which the bills attributed to the plague during past outbreaks was inaccurate, and speculated that reporting was less than reliable. When the plague struck again in 1665, many Londoners adopted a similarly critical attitude to the reported death rates, suggesting that some groups (Quakers, the poor) might be undercounted, or that fatalities from other diseases were being reported as plague deaths.

    The weekly bills gave rise to one of the weirder genres of English plague publishing: the “Lord Have Mercy” broadside, named for the title which nearly all of them shared. These documents, which were reprinted almost identically in each outbreak, usually included a prayer, a woodcut, some remedies, maybe a poem, and mortality statistics from six or seven previous visitations. Examples from 1665 typically featured data from 1592, 1603, 1625, 1630, 1636, 1637, and the current week. They also included pre-printed headings for the next few weeks or months for the reader to fill in as the epidemic wore on.

    For anyone who has checked the numbers, again, just to see if they have changed, it is not hard to imagine what people got out of this practice. But the historical data is harder to interpret. Knowing how many people died in 1636 is not particularly useful in 1665. Why did Londoners want this? And why did they want it again and again in exactly the same form? Of course, this is the central question of plague literature in general. When historians discuss it, they tend to use phrases like “conventional and derivative” or “a vast and repetitive outpouring.” It is, famously, boring.

    In outbreak after outbreak, plague tracts featured the same assortment of prayers, cures, and exhortations to repent. They also shared the same stories. Some served as cautionary tales: a wealthy man refuses to assist a plague victim and immediately falls ill. Another is struck down after boasting about his own safety. Premature interment is a common theme. One of Defoe’s anecdotes concerns a drunk piper who passes out in the streets and is loaded onto a dead-cart, only to wake up just as he is about to be buried. In another variant of the story, he is tossed into a plague pit and terrifies the sexton the next morning by calling out from the grave. In yet another, he is thrown out of a tavern for fear that his dead-sleep will be mistaken for actual death and the whole establishment will be declared infected.

    The same tale appears in the memoirs of Sir John Reresby, a bona fide survivor of the plague of 1665, who certainly believed it to be both true and current. “It was usual for People to drop down in the Streets as they went about their Business,” he reports, and it may well have been — but the tale of the drunk piper also appears in plague tracts from 1636 and 1603. Repeated over decades or even centuries, these stories imposed a kind of narrative order on outbreaks. The residents of an infected city knew what to expect when the plague came. They were so familiar with the cultural scripts that they began to see them everywhere.

    The extent to which first-person plague narratives draw on earlier accounts makes it difficult to tease out the subjective experience of individual survivors. “To a degree, interpretations and responses to plague were copied and taught, not reinvented and coined afresh whenever plague occurred,” the historian Paul Slack has observed. When Samuel Pepys and John Evelyn talk about grass growing in the streets of London in 1665, they are quoting Paul the Deacon a thousand years earlier (whether they know it or not), and nearly everybody is citing Thucydides nearly all of the time. His account of the plague in Athens in 430 B.C. is the source of innumerable plague tropes, from the image of bodies lying unburied in the streets to the moral lesson that the disease brings about the collapse of social order. As with Sir John Reresby, there is no reason to believe that later chroniclers used these commonplaces intentionally to mislead. Expectations have a powerful ability to shape perception. Through them, the disease is tamed, familiarized, and given meaning.

    We are among the first human beings for whom the experience of a disease outbreak so severe and wide-ranging is outside of living memory. Our generation has inherited no familiar stock of coronavirus parables; no script that tells us exactly why we are suffering; no sheets of mortality statistics with an empty space left over for next time. Our fascination with plague literature is a sign that some things never change: this desire to tell and retell stories puts us in the company of every other set of survivors in recorded history. The instinct to impart structure and purpose to a fundamentally purposeless crisis might be the only truly universal response to life in a pandemic. That we should feel it so strongly is all the more remarkable in a society as blissfully and unprecedentedly pandemic-free as the developed world was at the beginning of the twenty-first century. But no more: now we have narrative resources of our own, stories of contagion and endurance and recovery, to bequeath to the vulnerable who come after us. When faced with the unimaginable, we did what we have always done: look back.

    The Doctrine of Hate

    Julius Margolin was born in 1900 in Pinsk. After studying philosophy in Germany in the 1920s, he moved to Poland with his family, where he became active in Revisionist Zionism and published a Yiddish book on poetry. From there he and his family moved to Palestine. For economic reasons, Margolin returned to Poland in 1936, where he was trapped by the Nazi invasion, and was eventually imprisoned in Soviet labor camps. In July 1945 he was released and made his way back to Tel Aviv, where he wrote a pioneering memoir of the Gulag and died in 1971. The full text of Journey into the Land of the Zeks and Back was not published in his lifetime.

    After my release from Maxik’s hospital, having had an opportunity to rest, and armed with certification as an invalid, I returned to the camp regime. In Kruglitsa, a certified invalid with a higher education has a wealth of possibilities. You can choose: assist the work supervisor in compiling the lists of personnel in the brigades; work in the Cultural-Educational Sector (KVCh); or be an orderly in the barrack. Until a prisoner is taken off the official work register, he will not be sent to such unproductive work. The place for a healthy, able person is in the forest or field, where hands and shoulders are needed. The work boss will not allow an able-bodied worker to have an office or service job. An invalid is another matter. Whatever he is able and willing to do without being obliged to do so is a pure gain for the state.

    At first, I was amused at the accessibility of work from which I had been barred as a third-category worker. When they found out that Margolin had been deactivated, people immediately invited me to work in various places, and I succumbed to temptation. An invalid is allotted the first level food ration and 400 grams of bread. By working, I received the second level and 500 grams.

    For an entire month, I tried various places. After a ten-week stay in the hospital, it was pleasant to be occupied and to be listed in a job. After a month, however, I came to feel that I had been deactivated for a reason. I lacked strength. The job with the work supervisor dragged on until late at night. Work at the KVCh entailed being in motion all day, making the rounds of the barracks, rising before reveille. As a worker in the Cultural-Educational Sector, I had to get up an hour before everyone else: by the time the brigades went out to work, I had to list on the huge board at the gate the percentage of the norm that each brigade had fulfilled the previous day.

    A worker calculated these norms in the headquarters at night and, before going to sleep, he left the list for me in a desk drawer in the office. The camp was still sleeping, the dawn reddened behind the barracks, and the guards were dozing on the corner watchtowers, when I would climb with difficulty onto a stool that I had placed in front of the giant chart and begin writing in chalk on the blackened board the figures for the twenty brigades.

    This work bored me. The thought that as an invalid I was not obliged to endure this misery gave me no rest. I had been an invalid for an entire month and had not yet utilized the blessed right to do nothing; I had not taken advantage of my marvelous, unbelievable freedom. In the middle of the summer in 1943, I declared a grand vacation. At the same time, it represented a great fast: 400 grams of bread and a watery soup. It was June. Blue and yellow flowers bloomed in the flowerbeds in front of the headquarters; under the windows of the infirmary, the medical workers had planted potatoes and tobacco. In the morning, the patients crawled out to the sun and lay on the grass in their underwear or sunned themselves in the area around the barracks. When I went by, barefoot, in my mousy gray jacket without a belt, fastened by one wooden button near the collar, they shouted to me: “Margolin, you’re still alive? We thought you were gone already!”

    Without stopping, I went on to the farthest corner of the camp territory. I had a blanket, a little pencil, and paper. There was lots of paper: in the past month, I had hoarded a respectable amount. I even had a little bottle of ink from my work in the KVCh. I would take a rest from people, the camp, work, and eternal fear. I lay on my back, watching the clouds float above Kruglitsa. A year earlier, I had worked in the bathhouse and ran into the forest for raspberries. Amazingly, then I was able to carry three hundred buckets of water a day. That year depleted me. Now there were no raspberries, but neither did I have to drag water buckets. I was satisfied; it was a profound rest.

    In the summer of 1943, a storm raged over Kursk, and Soviet communiqués spoke of gigantic battles, as if all the blood receded from this great country and flowed to the single effort in that one spot. One hardly saw healthy males in Kruglitsa. Women guarded the prisoners and conducted the brigades to work. Gavrilyuk, who the past summer had been a Stakhanovite wagoner, now, like me, had been retired from work, and women prisoners worked as wagon drivers in camp. Women, like reservists, went to the first line of work. We knew from the newspapers that, throughout the country, women were working as tractor drivers, in factories, and in the fields. The free men held the battle front while the male prisoners in the camp melted like snow in the spring sun and descended under the ground. I knew that in another year I would be weaker than I was at present. If the war dragged on, I would die and not even know how it ends. Out of pure curiosity, I wanted to make it to the end of the war.

    That summer, my first grand interlude as an invalid, I wrote “The Doctrine of Hate.” That summer I was preoccupied with thoughts about hate. Lying in the grass behind the last infirmary, I returned to the topic from day to day and turned out chapter after chapter. I experienced a profound and pure enjoyment from the very process of thought and from the awareness that this thinking was outside-the-camp, normal, free thought, despite my current conditions and despite the barbed wire fence and guards. This was “pure art.” There was no one to whom I could show it or who could read what I was writing, and I felt pleasure from the very activity of formulating my thoughts, and as the work advanced I also felt proud that to a certain degree I was prevailing over hatred, was able to grasp it, and to subject it to the court of Reason.

    This subject was dictated by my life. What I had endured and seen around me was a true revelation of hate. In my previous life, I only heard or read about it, but I never encountered it personally. Neither racial nor party hatred had crossed the threshold of my peaceful home. In camp, for the first time, I heard the word “kike” directed at me, felt that someone wanted me to perish, saw victims of hate around me, and witnessed its organized apparatus. In camp, I, too, for the first time learned to hate.

    Now it was time for me to elaborate all this material theoretically. How simple it would be to go away from the haters to that bright kingdom of warmth and humanity in which I, unawares, lived before the Holocaust. It is natural for a person to live among those who love and are loved by him, not among enemies and haters. But this was not my fate. Nor was I able to resist hatred actively. The only thing that remained free in me was thought; only by thought could I respond. There was nothing else I could do but try to understand the force that wanted to destroy me.

    I was less interested in the psychology of individual hatred than in its social function, its spiritual and historical meaning. I saw hatred as a weapon or as a fact of contemporary culture.

    The most important thing, with which I began, was the dialectic of hate. Hatred is what unites people while dividing them. The link via hate is one of the strongest in history. Souls come together in hate like the bodies of wrestlers — they seek each other like wrestlers in a fight. You cannot understand hate as pure negation, because if we merely do not love or do not want something, we simply walk away from it and try to eliminate the unnecessary and unpleasant from our life. There was something in my hatred of the camp system that forced me to think about it, and I knew that my hatred would not let me forget it even when I got out of here. Hate arises in conditions when we cannot escape. Hate is a matter of proximity. Personal, class, or national hatred — it is always between cohabitants, between neighbors, between Montague and Capulet, over borderline and frontier.

    The paradox of hate is that it leaves us in spiritual proximity to that which we hate until, ultimately, there arises rapprochement and similarity. Sometimes, the hate itself turns out to be merely a concealed fear of what attracts us, as in Catullus’ poem Odi et amo, as in Hamsun’s “duel of the sexes,” as in a lackey’s hatred for the lord, and finally, in antisemitism of the maniacal type, when people cannot do without the Jews. Here is an acute example. Adolf Nowaczyński, a talented Polish writer, was a malicious hater of everything Jewish. When he approached old age, he took off for Palestine to see things with his own eyes, and it turned out that he felt quite good in Tel Aviv. This man’s life would have been empty without Jews. If they had not existed, he would have had to invent them, and ultimately that is what he did all his life. There is hatred toward fascism and even hatred of communism that derives from a certain moral closeness and, in any case, leads toward it over time. We cannot hate what is absolutely incomprehensible and alien. The incomprehensible arouses fear. Hatred, however, needs an intimate knowledge and multiplies it, and it endlessly forces us to take an interest in what we detest.

    This was the paradox of hatred that I examined from all sides while lying in the sun in the corner of the camp yard. Hatred was not only before me — it was also inside me. In me, however, it was different from the hatred against which my entire being rebelled. It thus was necessary to differentiate the various forms of hatred, in order to distinguish between the hatred that was inside me and the hatred that to me was odious and evil.

    I began by identifying some bogus and altered forms, the pseudo-hatred that only obscures the essence of the matter. I saw that inapt things, or things with a merely external resemblance, paraded under the label of hatred. Away with counterfeits!

    First: juvenile hatred, odium infantile. Children are capable of the most fierce, frantic hatred, but that is only “ersatz,” not serious. Juvenile hatred is a momentary reaction, an acting out. It boils up in an instant and passes without leaving a trace; it rises and bursts like a soap bubble. In essence, it is an outburst, a fit of emotional distress. This is precisely the reason why, in its mass manifestation, by virtue of its qualities of easy arousal, easy manageability, and evanescence, it is particularly suitable for the purposes of cold-blooded producers of this hatred and inciters, who always mobilize it in the masses when it is necessary to stimulate them to an extraordinary effort, to struggle in the name of changing goals. Hatred goes to the masses, flows along the channels of calculated propaganda, but it is all on the surface; it has neither depth nor stability. Left to itself, it dies out or unexpectedly changes direction, as in 1917, when the masses, filled by Tsarist governments with pogromist and front-line hatred, turned against the government itself. The savage hatred of the incited mass, like fuel in a car, turns the wheels of the military machine, but the ones at the steering wheel are calm and cool.

    Ripe, mature hatred does not have the nature of a momentary reaction; it is a person’s automatic, internally determined and stable position. It does not exhaust itself in one ferocious outburst but gnaws at a person’s entire life and lurks behind all his manifestations and deeds. Psychologically it is manifested in a thousand ways. From open hostility to blind nonrecognition, all shades of dislike, malice, vengefulness, cunning and envy, mockery, lies, and slander form the vestments of hatred, but it is not linked exclusively with any one of them. There is no specific feeling of hatred; in its extreme form, it ceases to need any kind of “expression.”

    A child’s hatred is expressed in screaming, foot stamping, and biting. The hatred of a savage, which is the same as a child’s hatred, elementary, bestial fury, is expressed in a pogrom, in broken skulls and bloodletting. There is, however, mature hatred that is expressed only in a polite smile and courteous bow. Perfect hatred is Ribbentrop in Moscow, kissing the hands of commissars’ wives, or Molotov, smiling at the press conference. We adults have learned to suppress and regulate the manifestations of our hatred, tuning it like a radio receiver, turning it off and on like a light switch. Our hatred is a potential force; therefore, it can be polite and calm, without external manifestations, but woe to the one who shakes an enemy’s extended hand and walks along with him.

    The second form of pseudo-hatred is odium intellectuale: the hatred of scientists, philosophers, and humanists — it is the hatred of those incapable of hating, the academic hatred of intellectuals, which was introduced as an antidote and placed as a lightning rod against barbarism. This vegetarian, literary hatred would have us hate abstract concepts — not an evil person but the evil in man, not the sinner, but sin. This hatred unceasingly exposes vices and fallacies, mistakes and deviations against which we are ordered to fight. This theoretical hatred completely fences itself off from the practical. Unfortunately, the street does not understand these fine distinctions: mass hatred recognizes only that enemy whose head one can break.

    Humanism in its essence cannot oppose hatred. We know of two attempts in the history of culture to eliminate hatred from human relations: “nonresistance to evil” and the view that the end does not justify immoral means. Passive resistance to evil, however, invariably switches to active resistance against the bearers of evil, and the question of “ends and means,” with its artificial division of the indivisible, remains intractable so long as we do not know what specific means are being used for precisely what goals. Historically, butchers and murderers invariably used abstract, theoretical hatred for their own purposes, expertly contriving to turn every intellectual product into a weapon of mass murder and unlimited slaughter.

    Christ drove the money changers out of the Temple. His successors excommunicated the heretics from the church and lit the bonfires of the Inquisition, up to Torquemada and that papal legate who, upon suppressing the Albigensian heresy, said, “Kill all of them; God will recognize his own.” The Encyclopédistes and Rousseau hated vice and believed in the triumph of virtue. The French Revolution introduced the guillotine. Marx started with the liquidation of classes and of exploitation in human relations. His followers turned Marxism into a formula of mass terror, when a “class” is destroyed not as an economic category but as millions of living, innocent people. “Kill them all; history itself will revive what it needs.” The process contains a tragically inevitable progression, and, unavoidably, the warrior-humanist becomes a captive of an alien element, as in the case of Maxim Gorky in the role of a Kremlin dignitary. The teachers either capitulate in the face of the conclusions that the pupils derive from their lessons or perish in prison or on the scaffold.

    Odium intellectuale, the theoretical hatred of scholars, thus either fails to achieve its goal or leads to results that are diametrically opposite to the original intention. Luther throws an inkpot at the devil. The devil turns the philosopher’s ink into blood and a sea of tears.

    The third form of hate that I isolated in my analysis is odium nationale, the well-meaning hatred of those who take up arms in order to halt the force of evil. Evidently, there was never a dark force that did not try to pass itself off as just and worthy. Evidently, we have no other means of distinguishing between good and evil than by Reason and Experience, which teach us to recognize the essence of phenomena from their manifestations and consequences. There is, thus, a hatred that is rational and transparent in all its manifestations. It is clear to us why and when it arises. Its logical basis is at the same time the reason for its conditional nature, as it disappears along with the causes that evoked it. This hatred is so secondary and reactive that we can easily designate it as counter-hatred. We do not need it intrinsically, but when an enemy imposes it upon us, we do not fear to take up the challenge, and we know that there are things in the world that are worth fighting against — the passion and force of survival which do not yield to the enemy’s force and passion but have nothing in common with them in their inner essence.

    Having thus carefully differentiated the historically present forms of pseudo-hatred — mass-juvenile and intellectual-abstract, and the rational counter-hatred of the warrior — I approached the eyeless monster that at the time of my imprisonment had spread over all of Europe.

    Unlike the superficially emotional, infantile hatred of the crowd, the theorizing hatred of the intellectual, and the sober, clear conviction of the defenders of humankind, there is a force of primal and pure hatred, active despite its blindness, blind despite its initiative, and all the more active the less cause provokes it. It fears only the light of day. Reason is its natural enemy.

    Haters of the world are united in their negation of freedom of the intellect. The mark of Cain by which one can recognize genuine hate is scorn of free thought, rejection of the intellect. For Hitlerism, free thought is “a Jewish invention”; for the Inquisition, it is a mortal sin; for the ideologues of communism, it is counterrevolution and bourgeois prejudice. Every basis for such hate is imaginary and pseudo-rational. It is therefore natural that the people who established forced-labor camps in Russia simultaneously eradicated freedom of discussion and the right of independent investigation there. 

    In a pure, undiluted form, hatred is self-affirmation via another’s suffering. People become haters not because their surrounding reality forces them to it. There is no sufficient basis for hatred in the external world. There is nothing in the world that could justify the annihilation of flourishing life and proud freedom undertaken by Hitler, the fires of the Inquisition, or the prisons and pogroms and the camp hell of the Gestapo and the NKVD.

    There is a pyramid of hate, higher than the Palace [of the Soviets] that is being constructed in Moscow at the cost of hundreds of millions while people are dying of starvation in the camps. At the base of this pyramid are people similar to children, wild savages, like the one who hit me with a board on the road to Onufrievka, or the SS man who shot my elderly mother on the day the Pinsk ghetto was liquidated. These people rape, destroy, and murder, but tomorrow they themselves will be the most mild and obedient and will serve the new masters and believe the opposite of what they believed yesterday, and others — just like them — will come to their homes to murder and rape. Above these people stand others who teach them and charge them with doing what they do. Above them are still others, who engage in ideology and theoretical generalizations, and those embellishers, who service the hatred, deck it out, put it to music, and dress it in beautiful words.

    Ultimately, however, at the very top of the pyramid stands a person who needs all this: the incarnation of hatred. This is the organizer, the mastermind, the engineer and the chief mechanic. He has assembled all the threads in his hands, all the subterranean streams and scattered drops of hatred; he gave it direction, a historic impetus and scope. At his signal, armies cross borders, party congresses adopt resolutions, entire peoples are exterminated, and thousands of camps are erected. And he may be kind and sweet: he may have six children as Goebbels did or a “golden heart” like Dzerzhinsky’s, an artistic nature like Nero’s or Hitler’s, and the Gorkys and Barbusses will not stop slobbering over him. He, however, decreed that somewhere people must suffer. He executed them in his mind when no one yet knew about his existence. Even then he needed this.

    This brings up a central question in the doctrine of hate: What is the makeup of a person, a society, an epoch if naked hatred has become such a necessity for them, if the senseless tormenting of their victims becomes a necessary condition of their own existence? It is not at all easy to answer this question if one does not adduce the familiar so-called arguments that the German people “were defending themselves against the Jews,” that the Inquisition was “saving souls,” or that Stalin is re-educating and reforming “backward and criminal elements” with the help of the camps. This is obvious nonsense. Of course, I in no way harmed the Germans or needed a Stalinist re-education, but even if that had been the case, it would not justify the gas chambers or turning millions of people into slaves. Germany did not need the gas chambers; the Russian people did not need the camps. But they are truly necessary for the big and little Hitlers and Himmlers, Lenins and Stalins, of the world. What, indeed, is going on?

    One must clearly recognize that the people holding the keys of power are fully aware of, and even admire, the extent of the avalanche of human and inhuman suffering that seems like an elemental misfortune to us little people. Those people are responsible for its existence every minute and second. They have started it and control it, and it exists not because of their ignorance or impotence but precisely because they know well what they are doing, and they are doing precisely what meets their needs. Only a dull, wooden German lacking imagination, such as Himmler, needed to visit Auschwitz in person in order to look through a little window of the gas chamber to see how hundreds of young Jewish girls choked to death, girls who had been specially dispatched to execution that day for that purpose. The people of the Kremlin do not need to observe personally; they have statistics about the camp death toll.

    There is no answer to why this is necessary other than to analyze the known pathological peculiarities of human nature. There is no rational, “economic,” or other explanation of hatred. The logic of hatred is the logic of madness.

    That man [Stalin] hates: He cannot do without this attitude to people; without it, he suffocates. Hate is the oxygen that he breathes. Taking hatred away from him would leave him destitute.

    That man hates, which means that some kind of inner weakness develops into hate, the result of some organic problem. Some kind of lack, defect, or unhappiness may remain within the bounds of his sense of self, but it may also spread to his social milieu and be transmitted to other people. There are wounded people, vulnerable classes, ready to turn into breeding grounds of collective hate. There are situations when people, groups, or societies are unable or unwilling to look truth in the face.

    In Vienna, young Hitler discovered that the Jews are responsible for depriving him and the German people of their deserved place in the sun. This is preposterous but, indisputably, this man started with some feeling of pain; he was deeply hurt. Had he wanted the truth, he would have found a real cause, but the truth was too much for him to bear. He therefore began to search for external guilty parties. Here the mechanism of hate begins to operate. The real pain turns into an imagined insult. An enemy and offender must be found. 

    The need for an enemy is radically different from the need for a struggle that is characteristic of every strong person. Strong people seek an arena, an outlet for strength. The hater seeks offenders to accuse. On the one hand, the need for a struggle engenders courage and initiative. On the other, the need to deal with a cunning enemy engenders aggressiveness and malice. The offender is always nearby. If he is not visible, that means he is in disguise and must be unmasked.

    All haters are great unmaskers. Instead of a mask, however, they tear off live skin, the true nature, and they replace reality with a creation of their inflamed fantasy. Hatred starts with an imaginary unmasking and ends with real flaying, not in theory but in practice.

    The analysis of our epoch given by Marx and developed by Lenin crossed all bounds of a reasonable interpretation of reality. Pseudo-rational theory turned into a Procrustean bed that did not accommodate real life. It is sufficient to compare the tirades of Mein Kampf with Lenin’s passionate polemics and his thunderous charges against capitalism to sense their psychological affinity. It is the language of hate, not of objective research. We can learn as much about reality from Marxist-Leninist scholastics as we can from the Protocols of the Elders of Zion.

    Every hatred that reworks pain into insult carries out “transference,” in the language of modern psychoanalysis. The source of the pain is internal but we transfer it to the outside. Others are to blame when things go wrong for us, when our plans do not succeed and our hopes are crushed. We thus find an outlet, a relief, but only an illusory one. Hate acquires an address — a false one. Revenge, dictated by hate, misses the mark, like a letter sent to an incorrect address. Hatred engenders a constantly hungry vengefulness. 

    An imagined or real offense becomes a pretext for hateful acts if a person has a need and desire to hate. Sooner or later, this need will be expressed in aggression: a real, objectively murderous force, which derives from a hopeless attempt to build one’s own cursed existence on the misfortune and death of those around.

    In order to find support in the external world, this deadly force needs to falsify it. The world is not suitable as is. It is literally true that Streicher and Goebbels could not hate Jews because they did not know them at all. If they had known this people with true, live knowledge, this hatred could not have developed. Their hatred related to that distorted, deformed notion of the Jewish people that they themselves had created and that was dictated by their need to hate. In the institutions of the National Socialist Party, in the Erfurt Institute, there were enormous piles of material about the Jewish people, but the thousands of pieces served them only to create a monstrous mosaic of slander. 

    In the same way, the people who sent me to this camp did not know me. Their hatred consisted precisely in their not wanting to know me, in their not hesitating to turn my life and face into a screen onto which to project an NKVD film: “A threat to society, a lawbreaker. Henceforth this person will be not what he thought he was but what we want him to be and what we shall make of him.” In order to erase my existence as they did, one had to harbor a great, formidable hatred of humanity.

    Until we uproot this hatred, it will not stop slandering people and their real impulses, will not cease circling around us, seeking out our every weakness, mistake, and sin, which are numerous, not in order to understand us and to help us but in order to blame us for its own thirst for cruelty and blood.

    Pathological hate reflects the primal instinct of the rapacious beast who knows that he can appease his agonizing hunger with the warm blood of another. Millennia of cultural development have infinitely distanced and complicated this instinct with pseudo-rational sophistry and self-deception. Human rapaciousness exceeded that of the beasts, differing from it in that it manifested itself under senseless pretexts in the name of imaginary goals. The struggle against hatred is thus not limited to humankind’s biological nature but encompasses all the specifically inhuman, the perversions and the lies that constitute the anomaly of highly developed culture, an anomaly that cannot be destroyed until its existence becomes common knowledge.

    Free and perspicacious people someday will destroy  hatred and create a world where no one will need to hate or oppose hatred. The human striving for freedom is incompatible with hate. Without going into complex definitions of freedom, one can agree that as it develops, freedom will steadfastly expel lies and hatred not only from the human heart but also from human relationships and the social order. Opposition to lies and hatred is thus already the first manifestation of human freedom.

    Having finished my investigation of hatred with this proud phrase, I turned over onto my back and looked around: I was lying in a meadow, on green grass at the edge of the camp. The forbidden zone started five steps away, and a tall palisade with barbed wire stretched around it. Several prisoners swarmed in the forbidden zone; they were cutting the grass and digging up earth. Under the windows of the hospital kitchen a line of medics had formed, with buckets for soup and kasha.

    The manuscript was written in the most minuscule hand, and I had erased from it all dangerous hints. I read it with the eyes of the security operative: it was an “antifascist” document written by a stranger, but it was not blatantly counterrevolutionary. Understandably, there was not a word about Soviet reality in this manuscript. I had to keep in mind that it could be taken away at any moment in a search.

    But I had pity on my manuscript. There was no chance of hiding a work of that size for a long time in the camp. Suddenly, I had a fantastic idea. I got up and went to the KVCh, where two girls were sitting at two tables. 

    “What do you want?”

    “This is what I want,” I said slowly. “I have a manuscript of about a hundred pages. … I am an academic and wrote something in my specialty. In the barrack, you know, it’s dangerous. They’ll tear it up to use for rolling cigarettes. I want to give it to the KVCh for safekeeping. When I leave here, you’ll return it to me.”

    The girl was taken aback. She and her friend looked at me in dull astonishment, suspiciously, as at someone abnormal. In the end, she went to the phone and asked the guardhouse to connect her to the security supervisor.

    “Comrade supervisor, someone came here, brought a manuscript, and asks that we take it for safekeeping. He says that he is a scientific worker.”

    She repeated this several times over the telephone, then she turned to me:

    “Your name?” 

    I gave it.

    The girl conveyed my name, listened to the answer, and hung up the receiver. “The supervisor said,” she turned to me, hardly keeping back laughter, “let him throw his manuscript into the outhouse.”

    The Wonder of Terrence Malick

    The best American film of 2019, A Hidden Life, was little seen, and nominated for nothing. Why be surprised? Or assume that our pictures deserve awards any more than the clouds and the trees? Try to understand how movies may aspire to a culture that regards Oscars, eager audiences, and fame as relics of our childhood. The ponderous gravity of The Irishman and its reiterated gangster fantasy, the smug evasiveness of Once Upon a Time … in Hollywood, were signs that old movie habits were defunct. Parasite was no better or worse than cute opportunism. It was a wow without an echo. Whereas A Hidden Life was like a desert, known about in theory but ignored or avoided. I use that term advisedly, for Malick is a student who knows deserts are not dull or empty. They are places that can grow the tree of life as well as any forest. Simply in asking what is hidden here, Terrence Malick was leading us to ponder: what should a movie be?

    He had never volunteered for conventional schemes of ranking. His creative personality can seem masked or obscure, but his reticence is portentous too, and it belongs to no one else. Had he really taught philosophy at M.I.T. while doing a draft for Dirty Harry? Please say yes: we so want our auteurs to be outlaws. His self-effacement, his famous “elusiveness,” was often seen as genius. Yet some early admirers felt he had “gone away” in the twenty-first century, or migrated beyond common reach. People regarded his private and idiosyncratic work as egotism, no matter how beautiful it might be. Some were disinclined even to try A Hidden Life after the inert monuments that had preceded it. But it was — I say it again — the best American film of 2019, a masterpiece, and it invited us to try and place Malick, and to ponder if our “map” was part of the problem. To put it mildly, A Hidden Life does not seem American (or even Austrian, though it was set and filmed in Austria). It is occurring in cultural memory as a sign of what we might have been.

    There was never a pressing reason to make up our minds about Malick. He was casual, yet lofty; he might be an artist instead of a regular American moviemaker in an age when it was reckoned that tough pros (like Hawks and Hitchcock) made the best pictures. Thus he began with two unwaveringly idiosyncratic films — Badlands in 1973 and Days of Heaven in 1978. He took in their awed reception and then stopped dead for twenty years, and let his reputation become an enigma. Did he really prefer not to appear with his movies, or give helpful interviews, so that he could be free to pursue ornithology and insect life? Was he unpersuaded by careerist plans, or cleaning up in the manner of Spielberg or Lucas? In never winning an Oscar, he has made that statuette seem a stooge.

    It has always been hard to work out his intentions. Going on the titles, Badlands could be a perilous vacation, while Days of Heaven might promise transcendence. In the first, across the empty spaces of the Dakotas and Montana, Kit Carruthers found his daft halcyon moment of aimlessness while being taken for James Dean; in the second, in its gathering of rueful magic hours, we encountered a broken family where a guy was shot dead, his girl was thinking of being a hooker to survive, and the kid sister was left alone with her mannered poetry (like Emily Dickinson voiced by Bonnie Parker). In its locusts and fire, and a screwdriver thrust in the farmer’s fragile chest, Days of Heaven spoke to the ordeal of frontier people in 1916 going mad, skimming stones at entropy, or posing for the pictures in Wisconsin Death Trip (published by Michael Lesy in the year Badlands opened). The two films together said that America was an inadvertently gorgeous place full of terrors.

    Those early films were filled with love and zest for drab characters buried in the hinterland yet nursing some elemental wonder. But decades later, in 2012, To the Wonder felt like a crushing title for a film that had lost touch with ordinary poetry. Its women were models fleeing from Vogue. Whereas Sissy Spacek as Holly in Badlands (twenty-four yet doing fourteen without strain or condescension) was somehow routine as well as brilliant. Her unwitting insights hovered at the brink of pretension, but any doubt we had was lost in captivation for this orphan who packed vivid party dresses for her violent spree into emptiness. This was after Kit had shot her father dead, not just because dad didn’t approve of a garbage collector outlaw going with his Holly, but because he hadn’t the patience to listen to the rambling solo that was so folksy and menacing — “Oh, I’ve got some things to say,” Kit promised. “Guess I’m lucky that way.”

    And Holly did feel wonder for this vagrant actor. It was there in the flat adoration that Spacek offered him. She slapped his face for killing Dad, but then went along with him, too matter-of-fact to pause over spelling out love, but utterly transported by this signal for young getaway. Badlands was akin to Bonnie and Clyde, but you felt that Kit and Holly were in a marriage they did not know how to express. And they were sustained by Malick’s amused affection. He was close to patronizing his couple, maybe, making them babes in the woods in a surreal frame, but he felt their romance as much as he was moved by sunsets and the childish tree houses that they built. They were savage yet humdrum, and Kit’s killings were as arbitrary or impulsive as his funny chat. Yes, he was psychotic and headed for the electric chair, but the sweet interior desolation of their America understood them and treated them kindly. When Kit was captured at last, the horde of cops, sheriffs, and soldiers recognized that he was a cockeyed hero, the escapee they had dreamed of.

    One can love Bonnie and Clyde, but that love story is self-conscious about its lust for fame; it really was the heartfelt cry of Beatty and Dunaway and a generation yearning to be known. Badlands, by contrast, is so casual or inconsequential, and so appreciative of a wider span of history, the one we call oblivion. It has a notion that vagrancy and lyricism were responses to the heart of it all, the vast stretch of land where badness is as implicit as beauty. Bonnie and Clyde do not notice where they are, but Kit and Holly are specks in an emptiness as infinite as breathing. It’s only now, in retrospect, that the film seems so intoxicated with talk and its futile liberty, when Malick was headed towards a sadder future in which his stunned characters said less and less, and sometimes became so reduced to half-stifled sighs you wished they’d shut up. That early Malick loved loose talk. Compared with the directors of the early 1970s he was like a muttering Bartleby alone in a crew of insistent press-agented Ahabs.

    This leaves you wondering how few authentic outsiders American film has permitted.

    Malick was thirty-four when Days of Heaven opened, the son of a geologist, born in a small town in Illinois, of Assyrian and Lebanese descent. He graduated from Harvard, and then went on to Oxford as a Rhodes scholar, without getting any degree there. The general estimate is that he was brilliant, as witness his published translation of Heidegger’s The Essence of Reasons. But who has read that book, or is in a position to judge the translation? So it’s part of the uncertain myth that includes our wondering over whether Malick has had private money. Or some lack of ordinary need for it. How has he seemed so unprofessional?

    He is credited with the script for Pocket Money (1972), a Stuart Rosenberg film with Paul Newman and Lee Marvin that is more odd than striking. But it led to Badlands, for which he had some money from the young producer Edward Pressman, from the computer pioneer Max Palevsky, and from a few people he knew. All of which meant it wasn’t a regular production like other films of 1973 — The Exorcist, Mean Streets, The Sting, American Graffiti, The Way We Were. Caught between two parts of The Godfather, it didn’t seem to hear them or know them. Badlands may have cost $300,000. Warner Brothers bought the picture and released it: it closed the New York Film Festival in 1973, and if it perplexed audiences, there was a sense that something rare and insolent had passed by. Badlands didn’t care what we felt: suspense and attention were mere horizons in its desert, not luxury hotels. It was an American product, but it had a more hushed European ambition. You could label it a Western if you were ready to agree that Hollywood, born and sited in the West, never knew or cared where it was.

    Some tried to see Badlands as a slice of picaresque life. We knew it was derived from a real case, a minor outrage on the remote prairie. In 1957-1958, in Nebraska mostly, the nineteen-year-old Charles Starkweather had killed eleven people with fourteen-year-old Caril Ann Fugate as his companion. Fugate actually served seventeen years in prison, no matter that in the movie she says she married her lawyer. (That was a prettification or a kind of irony.) And there was more real grounding in the steady assertion that Martin Sheen’s Kit was a lookalike for James Dean and therefore rooted in popular culture. Kit and Holly dance to Mickey and Sylvia singing “Love is Strange,” from 1956.

    Strange was only half of it. In 1973, the feeling that sex was at hand on the screen was still pressing. As Kit took off with Holly, it was natural to think they would soon be fucking. Malick allowed an offhand obligation to that craze — $300,000 carried some box office responsibility — but he was too unimpressed or philosophical to get excited about it. Married three times by now, he doesn’t do much sex on screen. “Did it go the way it’s supposed to?” Holly asks Kit about their unseen coupling. “Is that all there is to it? Well, I’m glad it’s over.” All said without prejudice to their affinity or their being together.

    The absent-minded talk meant more than the way Sissy Spacek secured the top button on her dress “afterwards.” After all, her character was fifteen and he was twenty-four. Yet they were both children in Malick’s art. And then, like kids, they lost interest in their adventure, even in sex, the sacrament in so many pictures of the 1970s. The novelty of Badlands was its instinct that life was boring or insignificant. And that was asserted amid a culture where movies had to be exciting, urgent, and “important.”

    Malick knew that “importance” was bogus. Or he had his eye on a different order of significance. And other truths and differences were inescapable in his film: that no runaway kids had the temerity or the rhythm for talking the way these two did; that stranger than “Love is Strange” was the way Carl Orff and Erik Satie played in their summer as warnings against “realism.” The people in the film were not just movie characters, they were shapes in a mythology. A similar thing happened in Days of Heaven with its triangle of attractions, where Richard Gere, Brooke Adams, and Sam Shepard seemed unduly pretty for the Texas Panhandle. Malick had narrative problems on that picture which he solved or settled by summoning the voice of Linda Manz’s kid sister — a laconic, unsentimental, yet dreamy observer of all the melodrama. (The voice was sister to Holly, too.) She was part of the family, but her voiceover let us feel the narrative was already buried in the past, and nothing to fret over. Life itself was being placed as an old movie.

    Days of Heaven was extreme in its visualization: it included a plague of locusts, which was an epic of cinematography and weird special effects, involving showers of peanut shells and characters walking backwards. But the quandary of the Brooke Adams character, in love with two men, both unlikely in the long term, was the closest Malick had come to novelistic drama. I still feel for Shepard’s farmer, a rich man at a loss with feelings, though Malick had the sense to save the reticent Shepard from “acting.” Instead he was simply photographed, as gaunt and theoretical as his great house planted on the endless prairie. Just as he was shy of sex, so Malick the director was hesitant over what the world called story.

    No great American director has carried himself with such indifference as to whether he was being seen, let alone understood.  To see Malick’s work has always been a way of recognizing that the obvious means of doing cinema — appealing stories with likeable actors that move us and make money — was not settled in his mind. I think that is one reason why he stopped for twenty years — just to remain his own man, and not to yield to the habit of eccentric beauty in case it became studied, precious, or crushingly important.

    Thus, in 1998, The Thin Red Line seemed to believe in a new kind of authenticity and directness. Wasn’t it “a war movie”? Didn’t it make more money than Malick has ever known? Wasn’t it about killing the enemy, that blessed certainty that films provide for fearful guys? It offered Guadalcanal in 1942, and it came from a novel by James Jones, the writer of From Here to Eternity, which for someone of Malick’s age really was the Pacific War, despite being short on combat and going no farther west than Hawaii. The Thin Red Line is the infantry, landing on an island, and reckoning to take its peaks and destroy the enemy. It is a man’s world that male audiences might relax with. There are only fragmentary glimpses of women left at home — a rapturous shot of an occupied dress on a summer swing, something that would become an emblem of happiness in Malick’s work.

    But nothing competes with the ferocity of Colonel Tall, played by Nick Nolte in the most intense performance in a Malick picture, as a commander whose orders were abandoned and denied. That is not how war films are supposed to work: no one ever challenged John Wayne at Iwo Jima or Lee Marvin in The Big Red One. But Malick’s thin red line is less conventional or reliable. It finds its example in the wistful instinct to desert on the part of a common soldier, Private Witt (Jim Caviezel). For Jones, Witt was an extension of the brave victim Prewitt whom Montgomery Clift played in From Here to Eternity, but for Malick the lonely private is another version of Bartleby, who gives himself up finally not just in heroism but in almost yielding to hesitation.

    Maybe this was once a regular combat picture, to be set beside the work of Sam Fuller or Anthony Mann. But not for long: inscape pushes battle aside for a contemplation of tropical grasses tossing in the wind, insect life, parrots and snakes, intruded on for a moment by war but not really altered by it. Malick has an Emersonian gift for regarding human affairs from the standpoint of nature. It is in the perpetuity of nature that Malick perceives the strangeness, and the humbling, in Earth’s helpless duration. This war prepares us for the bizarre little dinosaurs in The Tree of Life, and the unnerving perspective in which we observe or suffer the earnestness of Sean Penn in that film.

    That touches on a fascinating atmosphere attached to Malick and his blithe treatment of stars. In his long absence from the screen, the glowing characters in those first two films seemed to attract actors, as if to say it might be them, too. He seemed as desirable for them as Woody Allen — and sometimes with a similar diminution of felt human reality. He must have been flattered that so many stars wanted to work for him; he may have forgotten how far he had excelled with newcomers or unknowns. Still, I found it disconcerting when John Travolta or George Clooney suddenly turned up in spiffy, tailored glory in The Thin Red Line, and one had the feeling with The Tree of Life that Sean Penn was vexed, as if to say, “Doesn’t Terry know I’m Sean Penn, so that I deserve motivation, pay-off, and some scenes, instead of just wandering around?” Led to believe he was central to The Thin Red Line, Adrien Brody was dismayed to find he had only a few minutes in the finished film.

    Was this just an experimenter discovering that his film could remain in eternal post-production? Or was it also a creeping indifference to ordinary human story? Was it an approach that really required novices or new faces? How could big American entertainments function in this way? How was Malick able to command other people’s money on projects that sometimes seemed accidental or random, on productions that had several different cuts and running times? He seemed increasingly indecisive and fond of that uncertainty, as if it were a proof of integrity. Was he making particular films, or had the process of filming and its inquiry become his preoccupation? How improvisational a moviemaker is he? And what were we to make of its end products — or was “the end” a sentimental destination mocked by the unshakable calm of duration? How could anyone get away with The Thin Red Line costing $90 million and earning back only a touch more? I could make a case for The Thin Red Line as Malick’s best film and the most intellectually probing of them all. But “best” misses so many points. To shoot it, Malick had gone to the jungles of northern Queensland and even the Solomon Islands. The weapons and the uniforms seemed correct, but the hallowed genre of war movie was perched on the lip of aestheticism and absurdity and surrealism.

    As a world traveler and a naturalist — his nature films are certainly among his most marvelous achievements — Malick was especially sensitive to terrain. For The New World, in 2005, he went to the real sites, the swampy locations, of early settlement in Virginia. He researched or concocted a language such as the natives might have spoken. His tale of John Smith, Pocahontas, and John Rolfe has many enthusiasts for its attempt to recreate a time so new then and so ancient now. This was also a historical prelude to the wildernesses in Badlands and Days of Heaven. It might even begin to amount to a history of America.

    I had worries about the film, and I have never lost them. Its Pocahontas was exquisite and iconic, even if the picture tried to revive her Powhatan language. But the actress, Q’orianka Kilcher, was also part German, part Peruvian, raised in Hawaii, a singer, a dancer, a stunt performer, a princess of modernity, with evident benefit of cosmetics and a gymnasium. Whereas Sissy Spacek in Badlands had a dusty, closed face credible for the look of a kid from Fort Dupree in South Dakota in the 1950s, uneducated, indefatigably unradiant, born bored, more ready for junk food than primetime fiction. That background was what made Holly so absorbing, and it was Kilcher’s emphatic beauty that shifted The New World away from urgency or naturalism. It was as if Angelina Jolie or Joan Crawford were pretending to be the Indian maiden.

    In a way, Pocahontas was the first adult female in Malick’s work, but was that a warning sign that maybe he didn’t fathom grown-up women once they had got past the wry baby talk that makes the first two films so endearing? The New World did not really have much care for Native Americans, for women, or for the challenge of Europeans determined to take charge of any viable Virginia. It was a film that opted for the picturesque over history, whereas Badlands and Days of Heaven lived on a wish to inhabit and understand America in the unruly first half of the twentieth century as a wilderness succumbing to sentimentality. But the picturesque has always been a drug in cinema, and it had been lurking there in the magic hours in Days of Heaven.

    There was a gap of six years before the pivotal The Tree of Life, perhaps Malick’s most controversial film. Here was a genuinely astonishing picture, ambitious enough to range from intimacy to infinity. In so many ways, it was an eclipsing of most current ideas of what a movie might be. At one level, it was entirely mundane, the portrait of two parents and their three sons in a small town in Texas in the 1950s. For Brad Pitt (a co-producer on the project), the father was a landmark role in which he allowed his iconic status to open up as a blunt, stubborn, unenlightened man of the 50s. Jessica Chastain was the mother, and she was placid but eternal — she was doing her pale-faced best, but surely her part deserved more substance to match not just Pitt but the wondrous vitality of the boys (Hunter McCracken, Finnegan Williams, Michael Koeth, and Tye Sheridan).

    All his working life, Malick has excelled with the topic of children at play, and as emerging forces who jostle family order. Don’t forget how in his first two pictures adult actors were asked to play child-like characters. The family scenes in The Tree of Life are captivating and affirming with a power that is all the more remarkable because the subject of the film is the family’s grief at the death of one of these children. The Tree of Life insists that the death of a child is a cosmic event. Not long after the young man’s death is announced, and before the story of the family is told in flashback, there is an unforgettable yet pretentious passage shot with almost terrifying vividness from nature — the bottom of the sea, the fires of a volcano, the reaches of space — accompanied by religious music. With an epigraph from Job, the real subject may be sublimity itself.

    No one had ever seen a film quite like it. Reactions were very mixed. The picture won the Palme d’Or at Cannes; it had many rave reviews; it did reasonable business. There were those who felt its perilous edging into pretension and a sweeping universality in which the movie vitality of the family succumbed to the melancholy of grazing dinosaurs who had never been moviegoers. But there were more viewers who recognized an exciting challenge to their assumptions. The Tree of Life prompted a lot of people in the arts and letters to revise their ideas about what a movie might be. Pass over its narrative situation: this was a film to be measured with Mahler’s ruminations on the universe or with the transcendent effects of a room full of Rothkos.

    And then Malick seemed to get lost again. He veered away from the moving austerity of Days of Heaven to a toniness more suited to fashion magazines. There was widespread disquiet about his direction, owing to the modish affectation in To the Wonder (2012), Knight of Cups (2015) and Song to Song (2017). From a great director, these seemed confoundingly hollow films that almost left one nostalgic for the time when Malick worked less.

    Ironically, To the Wonder is the one film for which he has owned up to an autobiographical impulse. It grew out of hesitation over his third and fourth wives, presented in the movie as Olga Kurylenko and Rachel McAdams, two unquestioned beauties. McAdams delivers as good a performance as Brooke Adams in Days of Heaven, but there are moments where her character’s frustrations could be interpreted as the actress’ distress over poorly written material. Malick was now running scared of his ear for artful, quirky talk. But the women in To the Wonder are betrayed by the worst example of Malick’s uninterested stars. Ben Affleck is the guy here, allegedly an “environmental inspector.” That gestural job allows some moody depictions of wasteland and some enervated ecstasy over the tides around Mont-Saint-Michel in France. Yet the situation feels the more posed and hollow because of Affleck’s urge to do as little as possible. His hero is without emotional energy; he deserves his two women as little as male models earn their expensive threads in fashion spreads. The film’s clothes are credited to the excellent Jacqueline West, but they adorn a fatuous adoration of affluence.

    West was part of Malick’s modern team: the film’s producer was Sarah Green; the engraved photography was by the extraordinary Emmanuel Lubezki; the production design was from Jack Fisk still, who had held that role since Badlands, where he met and then married Sissy Spacek; the aching music was by Hanan Townshend in a glib pastiche of symphonic movie music — it was so much less playful or spirited than the score for Badlands. The only notable crew absentee was Billy Weber, who has been the editor on many Malick pictures. To the Wonder is said to have earned $2.8 million at the box office, and it’s hard to believe it cost less than $20 million. If that sounds like a misbegotten venture, wait till you struggle through it and then wonder what let Malick make another film in the same clouded spirit, Knight of Cups. And then another: Song to Song, the ultimate gallery of beautiful stars, supposedly about the music world of Austin, which came off semi-abstract no matter that Malick had lived there for years.

    Any sense of experience and vitality seemed to be ebbing away. Was he experimenting, or improvising, or what? The several loyalists involved, as well as those players who were filmed but then abandoned, might say it was a privilege to be associated with Terry. I long to hear some deflating rejoinders to that from Kit Carruthers. There was a wit once in Malick that had now gone missing. I say this because a great director deserves to be tested by his own standards, which in Malick’s case are uncommonly high. Even with the more adventurous Christian Bale as its forlorn male lead — a jaded movie screenwriter — Knight of Cups is yet more stultifyingly beautiful and Tarot-esque, with a placid harem of women (from Cate Blanchett to Isabel Lucas, from Imogen Poots to Natalie Portman), all so immediately desirable that they do not bother to be awake. Richard Brody said it was “an instant classic,” which only showed how far “instant” and “classic” had become invalid concepts. The film earned a touch over $1 million, and it had disdain for any audience. It was a monument to a preposterous cinephilia and to a talent that seemed in danger of losing itself.

    Those are harsh words, but I choose them carefully, after repeated viewings, and in the confidence that Badlands, Days of Heaven and The Thin Red Line are true wonders. The Terrence Malick of early 2019, passing seventy-five, was not a sure thing. And then he retired all doubt about his direction and released his fourth great film; and surely four is enough for any pantheon.

    Malick had been contemplating A Hidden Life and the historical incident upon which it is based for a few years. In 1943, Franz Jägerstätter was executed in Berlin for refusing to take an oath of loyalty to Adolf Hitler. He was a humble farmer high in the mountains of northern Austria, where he lived with his wife, his three daughters, his sister-in-law, and his mother. They were valued members of a small community and worked endlessly hard to sustain their meager living. They were devout Catholics, and Franz had done his military service without thinking too much about it. His farm and his village are surrounded by breathtaking natural beauty, and Malick lingers long over the fields and the peaks and the clouds in a way that teaches us that even Nazism is ephemeral.

    The film has few long speeches in which Jägerstätter spells out his reluctance to honor the Nazi code. He is more instinctive than articulate. He knows the fate he is tempting; he understands the burden that this will put upon his wife and children; he appreciates that he could take the oath quietly and then do non-combatant service. It is not that he understands the war fully or the extent of Nazi crimes. He is not a deliberate or reasoned objector. But just as he feels the practical truths in his steep fields and in the lives of his animals, and just as he is utterly loyal to his wife, so he believes that the oath of allegiance will go against his grain. He does not show a moral philosophy so much as a moral sense. He cannot make the compromise with an evasive form of words.

    There is no heavy hint in A Hidden Life of addressing how Americans in our era might withhold their own allegiance to a leader. But the film rests on a feeling that such cues are not needed for an alert audience living in the large world. We are living in a time that will have its own Jägerstätters. That is part of the narrative confidence that has not existed in Malick since Days of Heaven. It amounts to an unsettling detachment: he shares the righteousness of Jägerstätter, but he does not make a fuss about his heroism. In the long term of those steep Alps and their infinite grasslands, how much does it matter? Do the cattle on the farm know less, or are they as close to natural destiny as the farmer’s children?

    That may sound heretical for so high-minded a picture. And there is no escaping — the final passages are shattering — how Jägerstätter is brutalized and then put to death by the Nazi torturers and executioners. The Catholic Church would beatify him one day, and Malick has taken three hours to tell what happened, but the film has no inkling of saintliness or a cause that could protect it. The farmer’s wife, rendered by Valerie Pachner as sharp and uningratiating, does not need to agree with her man, or even to understand him. People are alike but not the same, even under intense pressure. No one could doubt Malick’s respect for Jägerstätter, and August Diehl is Teutonically tall, blond, and good-looking in the part. But he is not especially thoughtful; his doubts over the oath are more like a limp than a self-consciously upright attitude. Certainly the old Hollywood scheme of a right thing waiting and needing to be done leaves Malick unmoved; he would prefer to be a patient onlooker, a diligent chronicler, attentive and touched, but more rapt than ardent, and still consumed by wonder.

    Malick has admitted that he had got into the habit of working without a script (or a pressing situation), so that he often filmed whatever came into his head. But he seems to have learned how far that liberty had led him astray. So A Hidden Life has as cogent a situation as those in Badlands and Days of Heaven. That does not mean those three films are tidy or complacent about their pieces clicking together. They are all as open to spontaneity and chance as The Thin Red Line. But just as it is trite and misleading to say that The Thin Red Line was a film about war, so A Hidden Life feels what its title claims: the existence of an inwardness that need not be vulgarized by captions or “big scenes.” The film concludes with the famous last paragraph of Middlemarch, about the profound significance of “hidden lives” and “unvisited tombs.” Yes, this is what a movie, a heartbreaking work, might be for today. As for its relative neglect, just recall the wistful look on the dinosaur faces in The Tree of Life.

    We can do our best, we can make beauty and find wisdom, without any prospect of being saved from oblivion.

    Owed To The Tardigrade

    Some of these microscopic invertebrates shrug off temperatures
    of minus 272 Celsius, one degree warmer than absolute zero.
    Other species can endure powerful radiation and the vacuum of space.
    In 2007, the European Space Agency sent 3,000 animals
    into low Earth orbit, where the tardigrades survived
    for 12 days on the outside of the capsule.

    The Washington Post, “These Animals can survive until the end
    of the Earth, astrophysicists say”

    O, littlest un-killable one. Expert
    death-delayer, master abstracter

    of imperceptible flesh. We praise
    your commitment to breath.

    Your well-known penchant
    for flexing on microbiologists,

    confounding those who seek
    to test your limits using ever more

    abominable methods: ejection
    into the vacuum of space, casting

    your smooth, half-millimeter frame
    into an active volcano, desiccation

    on a Sunday afternoon, when the game
    is on, & so many of us are likewise made

    sluggish in our gait, bound to the couch
    by simpler joys. Slow-stepper, you were

    called, by men who caught first
    glimpse of your eight paws walking

    through baubles of rain. Water bear.
    Moss piglet. All more or less worthy

    mantles, but I watch you slink
    through the boundless clarity

    of a single droplet & think
    your mettle ineffable, cannot

    shake my adoration
    for the way you hold fast

    to that which is so swiftly
    torn from all else living,

    what you abide in order
    to stay here among the flailing

    & misery-stricken, the glimpse
    you grant into limitless

    persistence, tenacity
    under unthinkable odds,

    endlessness enfleshed
    & given indissoluble form.

    A Democratic Jewish State, How and Why

    The question of whether Israel can be a democratic Jewish state, a liberal Jewish state, is the most important question with which the country must wrestle, and it can have no answer until we arrive at an understanding of what a Jewish state is. A great deal of pessimism is in the air. Many people attach to the adjective “Jewish” ultra-nationalistic and theocratic meanings, and then make the argument that a Jewish democratic state is a contradiction in terms, an impossibility. On the left and on the right, among the elites and the masses, people are giving up on the idea that both elements, the particular and the universal, may co-exist equally and prominently in the identity of the state. This way of thinking is partly responsible for the recent convulsions in Israeli politics, for the zealotry and the despair that run through it. Yet it is an erroneous and unfruitful way of thinking. It rigs the outcome of this life-and-death discussion with a tendentious and dogmatic conception of Judaism and Jewishness.  

    There is another way, a better way, to arrive at an answer to this urgent and wrenching question. Let us begin by asking a different one, a hypothetical one. Let us imagine the problem in a place that is not Israel or Palestine. Could a Catalan state, if it were to secede from Spain, be a democratic Catalan state, a liberal Catalan state? Catalan nationalism is a powerful force, and many Catalans wish to establish an independent state of their own with Barcelona as its capital, based on their claim that they constitute a distinct ethnocultural group that deserves the right to self-determination. Though recent developments in Spain have shown that the establishment of an independent Catalan state is far from becoming a reality in the near future, let us nonetheless consider what it might look like. In this future state — as in other European nation-states, such as Denmark, Finland, Norway, Germany, the Czech Republic, and others that have a language and state symbols that express an affinity to the dominant national culture — the Catalan language would be the official language, the state symbols would be linked to the Catalan majority, the official calendar would be shaped in relation to Christianity and to events in Catalan history, and the public education of Catalans would ensure the vitality and the continuity of Catalan culture, transmitting it to the next generation. Revenues from taxation would be distributed solely among the citizens of Catalunya and not across Spain, and the foreign policy of the Catalan state would reflect the interests of the ethnocultural majority of the state. It is very probable that Catalunya’s immigration policy, like that of all contemporary European states, would attempt to safeguard the Catalan majority in its sovereign territory.

    It is important to note that these aspects of a Catalan state would not reflect anything unusual in the modern political history of the West. The Norwegians, for example, demanded all these characteristics of statehood in 1905, when they seceded from Sweden (under threat of war), since they saw themselves as a separate national group. In the matter of identity, Catalunya, like Norway, would not be a neutral state in any meaningful fashion, and there is no reason that it should be a neutral state. Members of the Catalan group deserve a right to self-determination, which includes a sovereign territory inhabited by a Catalan majority in which a Catalan cultural public space is created and the culture of the majority is expressed.

    But this is not all we would need to know about a Catalan nation-state that purports to be a democracy. The test of whether Catalunya, or any other state, is democratic does not depend upon whether it is neutral with respect to identity. Its moral and political quality, its decency, its liberalness, will be judged instead by two other criteria. The first is whether its character as a nation-state results in discriminatory policies towards the political, economic, and cultural rights of the non-Catalan minorities that reside within it. The second is whether Catalunya would support granting the same right of self-determination to other national communities, such as the Basques. Adhering to these two principles is what distinguishes democratic nation-states from fascist ones.

    Ultra-nationalist states are sovereign entities in which the national character serves as a justification for depriving minorities of political, economic, and cultural rights. In the shift to ultra-nationalism that we are witnessing around the world today, such states also attack and undermine the institutions that aim at protecting minorities — the independent judiciary, the free press, and NGOs dedicated to human and minority rights. In addition, ultra-nationalist states do not support granting rights of self-determination to nations that reside within them or next to them. They generally claim that no such nations exist, or that the ethnic groups that call themselves a nation do not deserve the right to self-determination.

    The legitimacy of Israel as a nation-state should be judged just as we would judge any other nation-state, according to these two principles. If, in the name of the Jewish character of the state, the Arab minority in Israel is deprived of its rights, the very legitimacy of the State of Israel as a Jewish nation-state will be damaged. Discrimination in the distribution of state resources in infrastructure, education, and land, and the refusal to recognize new Arab cities and villages in the State of Israel, threaten to transform it from a democratic nation-state into an ultra-nationalist state. Such a threat to the democratic character of the state is posed also by recent legislative attempts (which fortunately have failed) to demand a loyalty oath solely from Israel’s Arab citizens. The threat is heightened by a political plan put forth by elements of the Israeli radical right, which, in a future agreement with the Palestinians, would deny Israeli citizenship to Israeli Arabs, by virtue of a territorial exchange that would include their villages in the territory of a future Palestinian state. This is to act as if the Israeli citizenship of the Arabs of Israel is not a basic right, but a conditional gift granted to them by the Jewish nation-state — a gift that can be rescinded to suit the interests of Jewish nationalism. The Nation-State law that was passed by the Israeli parliament in 2018, which formulates the national identity of the country in exclusively Jewish terms, is an occasion for profound alarm, in particular in its glaring omission of an explicit commitment to the equality of all Israeli citizens, Jews and Arabs alike. Such a commitment to the equality of all citizens was enshrined in Israel’s Declaration of Independence, the founding document that to this day contains the noblest expression of the vision of Israel as Jewish and democratic. The commitment to the equality of all citizens might be legally and judicially ensured in relation to other basic laws in Israel’s legal system, yet its striking absence from this latest official articulation of the character of the state is yet another marker of the drift to ultra-nationalism.

    The structural discrimination manifested in these examples constitutes an unjustified bias against the Arab citizens of Israel. It also serves to undermine the very legitimacy of the Jewish state. A Jewish nation-state can and must grant full equality to its Arab citizens in all the realms in which it has failed to do so until now. It must recognize them as a national cultural minority, with Arabic as a second official language of the state and the Islamic calendar as an officially recognized calendar. The public educational system must be devoted, among other goals, to the continuity of the Arab cultural traditions of Israel’s citizens. 

    In the recent elections held in Israel, three within a single year, voter turnout among the Arab citizens of Israel increased by 50 percent, coming very close to the turnout among Jewish citizens. This is a wonderful and encouraging sign of the greater integration of the Arab population in larger Israeli politics. As a result, the Joint List, the Israeli Arab party, which encompasses different ideological and political streams in the Arab community of Israel, increased its seats in Israel’s Knesset from ten to fifteen — an extraordinary achievement. But its positive impact was undone by the disgraceful failure of the left and center to form a government with the Joint List, on the grounds that a government that rests on the Arab vote is unacceptable. Thus was lost an historic opportunity to integrate the Arab minority as an equal partner in sharing governmental power.

    As is true of all other legitimate democratic nation-states, the second condition that Israel must maintain is the recognition of the right of the Palestinian nation to self-determination in Gaza and the West Bank — the same right that Jews have rightly demanded for themselves. The denial of such a right, and the settlement policy that aims at creating conditions in which the realization of such a right becomes impossible, similarly damage the legitimacy of Israel as a Jewish nation-state. The Trump plan for peace includes, among its other problematic aspects, the annexation of the Jordan Valley to the state of Israel, which would constitute yet another significant impediment to the possibility of a two-state solution. If any Israeli government includes such an annexation in its plans, it will also create de facto conditions that will undermine the possibility of a Jewish democratic state in the future. 

    It is important to stress that the fulfillment of the first condition — equal rights to minorities — is completely within Israel’s power. Discrimination against citizens of your own country is always a self-inflicted wound. The second condition, by contrast, the recognition of the Palestinian right to self-determination, is not exclusively in the hands of Israel. The conditions of its realization are much more complicated. It depends to a significant degree upon Palestinians’ willingness to live side by side with the State of Israel in peace and security. The situation with regard to the possibility of such co-existence is difficult and murky and discouraging on the Palestinian side — and yet Israel must nevertheless make clear its recognition of the Palestinian right to self-determination, not least for the simple reason that achieving it will lend legitimacy to Israel’s own claim to the same right.

    If democracy and decency do not require cultural neutrality from a nation-state, then how should the identity of the majority be recognized in such a state without vitiating its liberal principles? There are four ways, I believe, that the Jewish nature of the State of Israel should be expressed. The first is to recognize the State of Israel as the realization of the Jewish national right to self-determination. In this era, when the meaning of Zionism is mangled and distorted in so many quarters, it is important to recognize what Zionism incontrovertibly is: a national liberation movement aimed at extracting a people from the historic humiliation of dependence on others in defining their fate. That remains its central meaning. Zionism gave one of the world’s oldest peoples, the Jewish people, the political, military, and economic ability to define themselves and defend themselves.  

    The most fundamental feature of Israel as a Jewish state resides, therefore, in its responsibility for the fate of the Jewish people as a whole. If the responsibility of the State of Israel were confined only to its citizens, it would be only an Israeli state. In light of this responsibility to a people, it has the right and the duty to use the state’s powers to defend Jews who are victimized because they are Jews.

    The second feature that defines Israel as a Jewish state is the Law of Return. This law, which was established in 1950 and is intimately connected to the first feature of national self-determination, proclaims that all Jews, wherever they are, have a right to citizenship in the State of Israel, and can make the State of Israel their home if they so desire. The State of Israel was created to prevent situations — plentiful in Jewish history — in which Jews seeking refuge knock on the doors of countries that have no interest in receiving them. For the same reason, Palestinian refugees in the Arab states ought to have immediate access to citizenship in the state of Palestine when it is established.

    Yet the justification of the Law of Return does not rest exclusively on conditions of duress. If national groups have a right to self-determination — the right to establish a sovereign realm where they constitute the majority of the population, and where their culture develops and thrives — it would be odd not to allow Jews or Palestinians a right of citizenship in their national territory. It is also important to emphasize that the Law of Return is legitimate only if accompanied by other tracks of naturalization. If the Law of Return were the only way of acquiring Israeli citizenship, its exclusively national character would harm the rights of minorities and immigrants who are not members of the ethnic majority. Safeguarding the ethnocultural majority in any state is always severely constrained by the rights of minorities. Thus the transfer of populations, or the stripping of citizenship by the transfer of territory to another state, are illegitimate means of preserving a majority. It is crucial, therefore, that other forms of naturalization exist as a matter of state policy, including granting citizenship to foreign workers whose children were born and grew up in Israel, and to men and women who married Israeli citizens.

    The third expression of the Jewishness of the State of Israel relates to various aspects of its public sphere, such as its state symbols, its official language, and its calendar. These symbolic institutions are derived from Jewish cultural and historical symbols, including the menorah and the Star of David; Hebrew is the official language; Israel’s public calendar is shaped according to the Jewish calendar; and the Sabbath and Jewish holidays are official days of rest. Yet a democratic state demands more. The public expression of the majority culture must go along with granting official status to the minority cultures of the state, including Arabic as the second official language of the state of Israel, and recognizing the Islamic calendar for the Arab minority. Again, official symbols and practices that have an affinity to the majority culture exist in many Western states: in Sweden, Finland, Norway, Britain, Switzerland, and Greece, the cross is displayed on the national flag. In all those cases, the presence of state symbols that are connected to the religion and culture of the majority does not undermine the state’s democratic and liberal nature. In many of those states, however, there are powerful political forces that wish to limit democracy to the dominant ethnicity. The historical challenge in these multiethnic and volatile societies — and Israel also faces this challenge — is to prevent the self-expression of the majority from constraining or destroying the self-expression of the minority.

    The fourth essential feature of a democratic nation-state, and the most important one, relates to public education. In the State of Israel, as a Jewish state, the public system of education is committed to the continuity and reproduction of Jewish cultures. I emphasize Jewish cultures in the plural, since Jews embrace very different conceptions of the nature of Jewish life and the meaning of Jewish education. In its commitment to Jewish cultures, the State of Israel is not different from many modern states whose public education transmits a unique cultural identity. In France, Descartes, Voltaire, and Rousseau are taught, and in Germany they teach Goethe, Schiller, and Heine. The history, the literature, the language, and sometimes the religion of different communities are preserved and reproduced by the system of public education, which includes students of many ethnic origins. Jews who happen to be German, American, or French citizens and wish to transmit their tradition to their children must resort to private means to provide them with a Jewish education. In Israel, as in other modern states (though not in the United States), such an education should be supported by state funds. This commitment does not contradict — rather, it requires — public funding for education that, alongside the public education system, insures the continuity of the other traditions represented in the population of the state, the Islamic and Christian cultures of the Arab minority in Israel. The culture of a minority has as much right to recognition by the state as the culture of the majority. 

    There are voices that maintain that the only way to secure Israel’s democratic nature is to eliminate its Jewish national character and turn it into a state of all its citizens, or a bi-national state. This sounds perfectly democratic, but it would defeat one of the central purposes of both national communities. In this territory there are two groups that possess a strong national consciousness — Jews and Palestinians; and there is no reason not to grant each of them the right of self-determination that they deserve. Moreover, a state of all its citizens in the area between the Jordan River and the Mediterranean Sea would, in fact, be an Arab nation-state with a large Jewish minority. It would become a place of exile for the Jewish minority. Historical experience in this region, where national rights and civil liberties are regularly trampled, suggests that Greater Palestine would be one of the harshest of all Jewish exiles.

    Honoring the status of the Arab citizens of Israel and espousing the establishment of a Palestinian state ought not to focus on — and does not require — the impossible and unjust annulment of the Jewish character of the State of Israel. It should focus instead on the effort to create full and rich equality for the Arab minority in Israel, and on the possibility of establishing a Palestinian nation-state alongside the state of Israel.

    In a Jewish state, the adjective “Jewish” carries within it another crucial challenge to liberal democracy, which is not tied to its national content but to its religious implications. This Jewish character, or the religious meaning of the adjective “Jewish,” might harm the freedom of religion in the state. Indeed, some currents in Israeli Judaism — and some religiously inspired ideological and political trends in the Jewish population of Israel — constitute a powerful and complex challenge to Israeli liberalism. Some voices assert that the Jewish identity of the state justifies granting the weight of civil law to Jewish law, and the use of the coercive machinery of the state for the religious purposes of the dominant community. 

    But a Jewish state conceived in this way could not be democratic in any recognizable manner, for two reasons: it would harm both the religious freedom of its citizens and the religious pluralism of the communities that constitute it. The attempt to “Judaize” the state through religious legislation, above and beyond the four features mentioned above, would undermine Israel’s commitment to liberalism and destroy some of its most fundamental founding principles. It would take back the pluralism that was explicitly and stirringly guaranteed in Israel’s Declaration of Independence. 

    Since the nineteenth century, Jews have been deeply divided about the meaning of Jewish identity and their loyalty to Jewish law. Jews celebrate the Sabbath in a variety of ways. They disagree ferociously about basic religious questions, including the nature of marriage and divorce. Any attempt to use the power of the state to adjudicate these deep divisions would do inestimable damage to freedom of religion and freedom from religion. In this case it would be the freedoms of Jews that would be violated. 

    The role of the state is not to compel a person to keep the Sabbath or to compel her to desecrate it. The state must, instead, guarantee that every person has the right to behave on the Sabbath as she sees fit, as long as she grants the same right to individuals and communities who live alongside her. All attempts at Judaizing the state through religious legislation — such as the law prohibiting the selling of bread in public during Passover, or the law prohibiting the raising of pigs — are deeply in error, since it is the obligation of a liberal democratic state to allow its citizens to decide these matters autonomously, as they see fit.

    The Sabbath, like other Jewish holidays, ought to be part of the official calendar of Israel as a Jewish state. A shared calendar, with Islamic and Christian holidays on it too, is an essential feature of the life of a state, and it enables a kind of division of cultural and spiritual labor, a pluralist form of cooperation among its citizens. If state institutions do not function during the Sabbath, it is not only because we would like religious citizens to be able to take equal part in the running of those institutions, but also because Israel ought to respect the Jewish calendar. The same applies as well to factories and businesses that must be shuttered during days of rest. 

    Such a policy, moreover, should be supported not for religious reasons, but owing to secular concerns about fairness. First, it allows equal opportunity to workers and owners who wish to observe the Sabbath. Historically, in the various Jewish exiles, the observance of the Sabbath sometimes caused Jews a great deal of economic hardship owing to the advantage that it conferred upon competitors who did not observe the same day of rest. In a Jewish state, Jews who observe the Sabbath ought to be free from such an economic sacrifice. The second reason for closing businesses and factories on the Sabbath concerns the rights of workers. The institution of the Sabbath is more widespread than most Jews know, and it is consistent with universal ethical considerations. Constraining the tyranny of the market over individual and family life by guaranteeing a weekly day of rest for workers and owners is common in European states which, in accordance with the Christian calendar, enforce the closing of businesses on Sunday. In a similar spirit, factories, malls, stores, and businesses ought to be closed during the Sabbath in a Jewish state — but art centers, theaters, museums, and restaurants should continue to function, so that Israeli Jews may choose their own way of enjoying the day of lovely respite.

    The abolition of the coercive power of the state in matters of religion should be applied as well to the primary domain of religious legislation in Israel: divorce and marriage. The monopoly granted to rabbinical courts in issues of divorce and marriage must finally be terminated. It is an outrageous violation of the democratic and liberal ethos of the state. Alongside religious marriage, Israel must recognize civil marriage. Such a reform would allow a couple that cannot marry according to Jewish law to exercise their basic right to form a family. It would also recognize the legitimate beliefs of many men and women who do not wish to submit to the rabbinical court, which is often patriarchal in its rulings and financially discriminates against women in divorce agreements.

    The claim of some religious representatives that establishing civil marriage would cause a rift among Jews, since members of the Orthodox Jewish community would not be able to marry Jews who did not divorce according to rabbinical procedure, is not persuasive. Many Jews all over the world marry and divorce outside the Orthodox community, and this is de facto the case in Israel as well, since many Israelis obtain civil marriages outside Israel, or live together without marrying under the jurisdiction of the rabbinate. The establishment of two tracks of marriage and divorce, religious and secular-civil, would not create division, which already exists in any case, but it would remove the legal wrong caused to Israelis who cannot exercise their right to marry within Jewish law, and it would liberate those who aspire to gender equality from the grip of the rabbinical courts.

    I should confess that my analysis of the place of religion in Israel does not rest exclusively upon my liberal commitments. It is grounded also in my concern for the quality of Jewish life in Israel. Religious legislation has had a devastating impact on Jewish culture and creativity in Israel. The great temptation to settle the debate over modern Jewish identity through the coercive mechanism of the state justifiably alienates major segments of the Israeli public from Jewish tradition, which comes to be perceived by many Israelis as threatening their way of life. The deepening of alienation from the tradition, and its slow transformation into hostility, suggests that the more Jewish the laws of Israel become, the less Jewish the citizens of Israel become. 

    The Israeli parliament is not the place to decide the nature of the Sabbath, or which Jewish denomination is the authentic representation of Judaism, or who is a legitimate rabbi. Such controversies have corrupted the legislature, creating cynical political calculations in which religious matters have served as political payoffs to maintain government coalitions. The unavoidable debate on Jewish culture and religion must move from parliament to civil society. The nature of Jewish life in Israel must be determined by individuals and communities who will themselves decide how to lead their lives without interference from the state. For instance, there is no law in Israel prohibiting private transportation during the sacred day of Yom Kippur, yet the sanctity of the day is generally observed without any coercion. Wresting Judaism from the control of the politicians will unleash creative forces for Jewish renewal and allow for new ways of refreshing the tradition and extending its appeal. 

    Among the precious and time-honored institutions of Judaism which have been corrupted by the state is the rabbinate. The methods used for nominating and choosing the chief rabbis, and the rabbis of cities and neighborhoods, demonstrate that the rabbinate has turned into a vulgar patronage system, used by politicians to distribute jobs to their supporters. In many places, there is no affinity between the state-appointed rabbis and the residents. It is urgently in the interest of both Judaism and Israel that the state rabbinate be abolished.

    I do not support the total separation of religion and state as practiced in the United States. It seems to me that the model of some European countries is better suited to Israel. The establishment of synagogues and the nomination of rabbis ought to be at least partially supported by public funds, in the same way that museums, community centers, and other cultural activities are supported by the state. But this funding should be distributed in accordance with the communities’ needs and preferences, without allowing for a monopoly of any particular religious denomination over budgets and positions. Each community should choose its own rabbi according to its own religious orientation, as was the practice of Jewish communities for generations. And these same protections of freedom of religion must be granted to Muslim and Christian communities of Israel.

    Israel can and should be defined as a Jewish state, where the Jewish people exercises its incontrovertible right to self-determination; where every Jew, wherever he or she lives, has a homeland; where the public space, the language, and the calendar have a Jewish character; and where public education allows for the continuity and flourishing of Jewish cultures. These features do not at all undermine the democratic nature of the state, so long as Israel’s cultural and religious minorities are also granted equal and official recognition and protection, including state funding of Muslim and Christian public education systems, and the recognition of Arabic as a second official language of the state and the Muslim and Christian calendars as state calendars. In this sense, there is nothing contradictory or paradoxical about the idea of a Jewish democratic state.

    The pessimism is premature. These essential principles can be reconciled and realized. Yet there are significant limits to such an experiment that must be vigilantly respected. Any attempt to “Judaize” the state of Israel beyond those limits would transform it into an undemocratic nation-state, and compromise its liberal nature, and undo its founders’ magnificent vision, and damage the creative Jewish renewal that may emerge from the great debate about modern Jewish identity. The tough question is not whether a Jewish state can be both democratic and liberal, but rather what kind of Jewish state we wish to have.

    Dark Genies, Dark Horizons: The Riddle of Addiction

    In 2014, Anthony Bourdain’s CNN show, Parts Unknown, traveled to Massachusetts. He visited his old haunts from 1972, when he had spent a high school summer working in a Provincetown restaurant, the now-shuttered Flagship on the tip of Cape Cod. “This is where I started washing dishes … where I started having pretensions of culinary grandeur,” Bourdain said in a wistful voiceover. For the swarthy, rail-thin dishwasher-turned-cook, Provincetown was a “wonderland” bursting with sexual freedom, drugs, music, and “a joy that only came from an absolute certainty that you were invincible.” Forty years later, he was visiting the old Lobster Pot restaurant, cameras in tow, to share Portuguese kale soup with the man who still ran the place.

    Bourdain enjoyed a lot of drugs in the summer of 1972. He had already acquired a “taste for chemicals,” as he put it. The menu included marijuana, Quaaludes, cocaine, LSD, psilocybin mushrooms, Seconal, Tuinal, speed, and codeine. When he moved to the Lower East Side of New York to cook professionally in 1980, the young chef, then twenty-four, bought his first bag of heroin on the corner of Bowery and Rivington. Seven years later, he managed to quit the drug cold turkey, but he spent several more years chasing crack cocaine. “I should have died in my twenties,” Bourdain told a journalist for Biography.

    By the time of his visit to Provincetown in 2014, a wave of painkillers had already washed over parts of Massachusetts and a new tide of heroin was rolling in. Bourdain wanted to see it for himself and traveled northwest to Greenfield, a gutted mill town that was a hub of opioid addiction. In a barebones meeting room, he joined a weekly recovery support group. Everyone sat in a circle sharing war stories, and when Bourdain’s turn came, he searched for words to describe his attraction to heroin. “It’s like something was missing in me,” he said, “whether it was a self-image situation, whether it was a character flaw. There was some dark genie inside me that I very much hesitate to call a disease that led me to dope.”

    A dark genie: I liked the metaphor. I am a physician, yet I, too, am hesitant to call addiction a disease. While I am not the only skeptic in my field, I am certainly outnumbered by doctors, addiction professionals, treatment advocates, and researchers who do consider addiction a disease. Some go an extra step, calling addiction a brain disease. In my view, that is a step too far, confining addiction to the biological realm when we know how sprawling a phenomenon it truly is. I was reminded of the shortcomings of medicalizing addiction soon after I arrived in Ironton, Ohio, where, as the only psychiatrist in town, I was asked whether I thought addiction was “really a disease.”

    In September 2018, I set out for Rust Belt Appalachia from Washington, D.C., where I am a scholar at a think tank and was, at the time, a part-time psychiatrist at a local methadone clinic. My plan was to spend a year as a doctor-within-borders in Ironton, Ohio, a town of almost eleven thousand people in an area hit hard by the opioid crisis. Ironton sits at the southernmost tip of the state, where the Ohio River forks to create a tri-state hub that includes Ashland, Kentucky and Huntington, West Virginia. Huntington drew national attention in August 2016, when twenty-eight people overdosed on opioids within four hours, two of them fatally.

    I landed in Ironton, the seat of Lawrence County, by luck. For some time I had hoped to work in a medically underserved area in Appalachia. Although I felt I had a grasp on urban opioid addiction from my many years of work in methadone clinics in Washington, D.C., I was less informed about the rural areas. So I asked a colleague with extensive Ohio connections to present my offer of clinical assistance to local leaders. The first taker was the director of the Ironton-Lawrence County Community Action Organization, or CAO, an agency whose roots extend to President Johnson’s War on Poverty. The CAO operated several health clinics.

    Ironton has a glorious past. Every grandparent in town remembers hearing first-person accounts of a period, stretching from before the Civil War to the turn of the twentieth century, when Ironton was one of the nation’s largest producers of pig iron. “For more than a century, the sun over Ironton warred for its place in the sky with ashy charcoal smoke,” according to the Ironton Tribune. “In its heyday in the mid-nineteenth century there were forty-five [iron] furnaces belching out heat, filth, and prosperity for Lawrence County.” After World War II, Ironton was a thriving producer of iron castings, molds used mainly by automakers. Other plants pumped out aluminum, chemicals, and fertilizer. The riverfront was a forest of smokestacks. High school graduates were assured good-paying if labor-intensive jobs, and most mothers stayed home with the kids. The middle class was vibrant.

    But then the economy began to realign. Two major Ironton employers, Allied Signal and Alpha Portland Cement, closed facilities in the late 1960s, beginning a wave of layoffs and plant closings. The 1970s were a time of oil shocks emanating from turmoil in the Middle East. Inflation was high, and Japanese and German carmakers competed fiercely with American manufacturers. As more Ironton companies downsized and then disappeared, the pool of living-wage jobs contracted, and skilled workers moved out to seek work elsewhere. At the same time, the social fabric began to unravel. Domestic order broke down, welfare and disability rolls grew, substance use escalated. Most high school kids with a shot at a future pursued it elsewhere, and the place was left with a population dominated by older folks and younger addicts.

    Ironton continues to struggle. Drug use, now virtually normalized, is in its third, sometimes fourth, generation. Almost everyone is at least one degree of separation away from someone who has overdosed. Although precise rates of drug involvement are hard to come by, one quarter to one third is by far the most common answer I hear when I ask sources for their best estimate of the share of people dealing with a “drug problem of any kind.” Alluding to the paucity of hope and opportunity, one of my patients told me that “you have to eradicate the want — why people want to use — or you will always have drug problems.”

    When Pam Monceaux, an employment coordinator in town, asked me whether I thought addiction was “really a disease,” she was thinking about her own daughter. Christal Monceaux grew up in New Orleans with her middle-class parents and a younger sister, and started using heroin and cocaine when she was nineteen. Pam blamed the boyfriend. “Brad sucked her in. Finally, she dumped him, went to rehab and did well, but a few months later took him back and the cycle began all over again.” Eventually Christal’s younger sister, who had moved to Nashville with her husband, persuaded her to leave New Orleans and join them. Pam, a serene woman who had over a decade’s time to put her daughter’s ordeal into perspective, said that relocating — or the “geographic cure,” as it is sometimes called — worked for Christal. A new setting and new friends allowed her to relinquish drugs. She got married, had children, and lived in a $400,000 house. The happy ending was cut short when Christal died of a heart attack at the age of forty-two. “If she could kick it for good when she was away from Brad and then when she moved to Nashville, how is that a disease?” Pam asked in her soft Louisiana drawl. “If I had breast cancer, I’d have it in New Orleans and in Nashville.”

    Unlike Christal, Ann Anderson’s daughter had not left drugs behind for good. So, at age sixty-six, Ann and her husband were raising their granddaughter, Jenna. Ann, who worked for my landlord, was bubbly, energetic, and, curiously, sounded as if she were raised in the Deep South. The welcome basket she put together for me when I arrived, full of dish towels, potholders, and candies, foretold the generosity that she would show me all year. Ann makes it to every one of Jenna’s basketball games. Jenna’s mom lives in Missouri and has been on and off heroin for years. “I love my daughter, but every time she relapsed, she made a decision to do it,” said Ann, matter-of-factly, but not without sympathy. “And each time she got clean she decided that too.”

    Another colleague, Lisa Wilhelm, formed her opinions about addiction based on her experience with patients. Lisa was a seen-it-all nurse with whom I had worked at the Family Medical Center located across Highway 52 from the Country Hearth, a drug den that passed itself off as a motel. She did not ask for my opinion about addiction; she told me hers. “I think it is a choice. And I’ll devote myself to anyone who made that choice and now wants to make better ones,” Lisa said. “But it’s not a disease, I don’t think.”

    Then there was Sharon Daniels, the director of Head Start. Sharon managed programs for drug-using mothers of newborns and toddlers. “I see opportunities our women have to make a different choice,” she said. She is not pushing a naive “just say no” agenda, nor is she looking for an excuse to purge addicted moms from the rolls. This trim grandmother with bright blue eyes and year-round Christmas lights in a corner of her office is wholly devoted to helping her clients and their babies. But she thinks that the term disease “ignores too much about the real world of addiction. If we call it a disease, then it takes away from their need to learn from it.”

    Before coming to Ironton, I had never been asked what I thought about addiction by the counselors at the methadone clinic at which I worked in Washington. I am not sure why. Perhaps abstractions are not relevant when you are busy helping patients make step-wise improvements. Maybe the staff already knew what I would say. On those rare occasions when a student or a non-medical colleague asked me, generally sotto voce, if addiction were really a disease, my response was this: “Well, what are my choices?” If the alternatives to the disease label were “criminal act,” “sin,” or “moral deprivation,” then I had little choice but to say that addiction was a disease. So, if a crusty old sheriff looking to justify his punitive lock-’em-up ways asked me if addiction were a disease, I would say, “Why yes, sir, it is.”

    But Pam, Ann, Lisa, and Sharon had no concealed motives. They were genuinely interested in the question of addiction. And they were fed up with the false choice routinely thrust upon them in state-sponsored addiction workshops and trainings: either endorse addicts as sick people in need of care or condemn them as bad actors deserving of punishment. With such ground rules, no one can have a good faith conversation about addiction. Between the poles of diseased and depraved is an expansive middle ground of experience and wisdom that can help explain why millions use opioids to excess and why their problem can be so difficult to treat. The opioid epidemic’s dark gift may be that it compels us to become more perceptive about why there is an epidemic. The first step is understanding addiction.

    Most people know addiction when they see it. Those in its grip pursue drugs despite the damage done to their wellbeing and often to the lives of others. Users claim, with all sincerity, that they are unable to stop. This is true enough. Yet these accounts tell us little about what drives addiction, about its animating causal core — and the answer to those questions has been contested for over a century. In the mid-1980s, the Harvard psychologist Howard J. Shaffer proclaimed that the field of addiction had been in a century-long state of “conceptual chaos.” And not much has changed. For behaviorists, addiction is a “disorder of choice” wherein users weigh benefits against risks and eventually quit when the ratio shifts toward the side of risk. For some philosophers, it is a “disorder of appetite.” Psychologists of a certain theoretical stripe regard it as a “developmental” problem reflecting failures of maturity, including poor self-control, an inability to delay gratification, and an absence of a stable sense of self. Sociologists emphasize the influence of peers, the draw of marginal groups and identification with them, and responses to poverty or alienation. Psychotherapists stress the user’s attempt at “self-medication” to allay the pain of traumatic memories, depression, rage, and so on. The American Society of Addiction Medicine calls addiction “a primary, chronic disease of brain reward, motivation, memory and related circuitry.” For the formerly addicted neuroscientist Marc Lewis, author of Memoirs of an Addicted Brain, addiction is a “disorder of learning,” a powerful habit governed by anticipation, focused attention, and behavior, “much like falling in love.”

    None of these explanations best captures addiction, but together they underscore a very important truth. Addiction is powered by multiple intersecting causes — biological, psychological, social, and cultural. Depending upon the individual, the influence of one or more of these dimensions may be more or less potent. Why, then, look for a single cause for a complicated problem, or prefer one cause above all the others? At every one of those levels, we can find causal elements that contribute to excessive and repeated drug use, as well as to strategies that can help bring the behavior under control. Yet today the “brain disease” model is the dominant interpretation of addiction.

    I happened to have been present at a key moment in the branding of addiction as a brain disease. The venue was the second annual “Constituent Conference” convened in the fall of 1995 by the National Institute on Drug Abuse, or NIDA, which is part of the National Institutes of Health. More than one hundred substance-abuse experts and federal grant recipients had gathered in Chantilly, Virginia, for updates and discussions on drug research and treatment. A big item on the agenda set by NIDA’s director, Alan Leshner, was whether the assembled group thought the agency should declare drug addiction a disease of the brain. Most people in the room — all of whom, incidentally, relied heavily on NIDA funding for their professional survival — said yes. Two years later, Leshner officially introduced the concept in the journal Science: “That addiction is tied to changes in brain structure and function is what makes it, fundamentally, a brain disease.”

    Since then, NIDA’s concept of addiction as a brain disease has penetrated the far reaches of the addiction universe. The model is a staple of medical school education and drug counselor training and even figures in the anti-drug lectures given to high-school students. Rehab patients learn that they have a chronic brain disease. Drug czars under Presidents Bill Clinton, George W. Bush, and Barack Obama have all endorsed the brain-disease framework at one time or another. Featured in a major documentary on HBO, on talk shows and Law and Order, and on the covers of Time and Newsweek, the brain-disease model has become dogma — and like all articles of faith, it is typically believed without question.

    Writing in the New England Journal of Medicine in 2016, a trio of NIH- and NIDA-funded scientists speculated that the “brain disease model continues to be questioned” because the science is still incomplete — or, as they put it, because “the aberrant, impulsive, and compulsive behaviors that are characteristic of addiction have not been clearly tied to neurobiology.” Alas, no. Unclear linkages between actions and neurobiology have nothing to do with it. Tightening those linkages will certainly be welcome scientific progress — but it will not make addiction a brain disease. After all, if explaining how addiction operates at the level of neurons and brain circuits is enough to make addiction a brain disease, then it is arguably many other things, too: a personality disease, a motivational disease, a social disease, and so on. The brain is bathed in culture and circumstance. And so I ask again: why promote one level of analysis above all of the others?

    Of course, those brain changes are real. How could they not be? Brain changes accompany any experience. The simple act of reading this sentence has already induced changes in your brain. Heroin, cocaine, alcohol, and other substances alter neural circuits, particularly those that mediate pleasure, motivation, memory, inhibition, and planning. But the crucial question regarding addiction is not whether brain changes take place. It is whether those brain changes obliterate the capacity to make decisions. The answer to that question is no. People who are addicted can respond to carrots and sticks, incentives and sanctions. They have the capacity to make different decisions when the stakes change. There is a great deal of evidence to substantiate faith in the agency of addicts. Acknowledging it is not tantamount to blaming the victim; it is, much more positively, a recognition of their potential.

    The brain-disease model diverts attention from these truths. It implies that neurobiology is necessarily the most important and useful level of analysis for understanding and treating addiction. Drugs “hijack” the reward system in the brain, and the patient is the hostage. According to the psychiatrist and neuroscientist Nora Volkow, who is currently the head of NIDA, “a person’s brain is no longer able to produce something needed for our functioning and that healthy people take for granted, free will.” Addiction disrupts the function of the frontal cortex, which serves as “the brakes,” she told a radio audience, so that “even if I choose to stop, I am not going to be able to.” Volkow deploys Technicolor brain scans to bolster claims of hijacked and brakeless brains.

    Rhetorically, the scans make her point. Scientifically, they do not. Instead they generate a sense of “neuro-realism” — a term coined by Eric Racine, a bioethicist at the Montreal Clinical Research Institute, to describe the powerful intuition that brain-based information is somehow more genuine or valid than is non-brain-based information. In truth, however, there are limits to what we can infer from scans. They do not allow us, for example, to distinguish irresistible impulses from those that were not resisted, at least not at this stage of the technology. Indeed, if neurobiology is so fateful, how does any addict ever quit? Is it helpful to tell a struggling person that she has no hope of putting on the brakes? It may indeed seem hopeless to the person caught in the vortex of use, but then our job as clinicians is to make quitting and sustained recovery seem both desirable and achievable to them.

    We start doing this in small ways, by taking advantage of the fact that even the subjective experience of addiction is malleable. As Jon Elster points out in Strong Feelings: Emotions, Addiction, and Human Behavior, the craving for a drug can be triggered by the mere belief that it is available. An urge becomes overpowering when a person believes it is irrepressible. Accordingly, cognitive behavioral therapy is designed precisely to help people understand how to manipulate their environment and their beliefs to serve their interests. They may learn to insulate themselves from people, places, and circumstances associated with drug use; to identify emotional states associated with longing for drugs; and to divert attention from the craving when it occurs. These are exercises in stabilization. Sometimes they are fortified with anti-addiction medications. Only when stabilized can patients embark on the ambitious journey of rebuilding themselves, their relationships, and their futures.

    I have criticized the brain disease model in practically every lecture I have given on this wrenching subject. I have been relentless, I admit. I tell fellow addiction professionals and trainees that medicalization encourages unwarranted optimism regarding pharmaceutical cures and oversells the need for professional help. I explain that we err in calling addiction a “chronic” condition when it typically remits in early adulthood. I emphasize to colleagues who spend their professional lives working with lab rats and caged monkeys that the brain-disease story gives short shrift to the reality that substances serve a purpose in the lives of humans. And I proselytize that the brain changes induced by alcohol and drugs, no matter how meticulously scientists have mapped their starry neurons and sweeping fibers, need not spell destiny for the user.

    Yet despite my strong aversion to characterizing addiction as a problem caused primarily by brain dysfunction, I genuinely appreciate the good ends that the proponents of the brain model have sought to reach. They hoped that “brain disease,” with its intimation of medical gravitas and neuroscientific determinism, would defuse accusations of flawed character or weak will. By moving addiction into the medical realm, they hoped to lift it out of the punitive realm. And if addicts are understood to suffer from a brain disease, their plight will more likely garner government and public sympathy than if they were seen as people simply behaving badly. But does it work out that way? Research consistently shows that depictions of behavioral problems as biological, genetic, or “brain” problems actually elicit greater desire for social distance from afflicted individuals and stoke pessimism about the effectiveness of treatment among the public and addicted individuals themselves.

    Evidence suggests that addicted individuals are less likely to recover if they believe that they suffer from a chronic disease, rather than from an unhealthy habit. More radically, there is a grounded argument to be made for feelings of shame, despite its bad reputation in therapeutic circles. “Shame is highly motivating,” observes the philosopher Owen Flanagan, who once struggled mightily with alcohol and cocaine; “it expresses the verdict that one is living in a way that fails one’s own survey as well as that of the community upon whose judgment self-respect is legitimately based.” But under what conditions do feelings of shame end up prodding people into correcting their course, as opposed to making matters worse by fueling continued consumption to mute the pain of shameful feelings? The psychologists Colin Leach and Atilla Cidam uncovered a plausible answer. They conducted a massive review of studies on shame (not linked to addiction per se) and approaches to failure, and found that when people perceive that damage is manageable and even reversible, shame can act as a spur to amend self-inflicted damage. They underscored what clinicians have long known: only when patients are helped to feel competent — “self-efficacious” is the technical term — can they begin to create new worlds for themselves.

    Thinking critically about the disease idea is important for conceptual clarity. But a clinician must be pragmatic, and if a patient wants to think of addiction as a disease, I do not try to persuade them otherwise. Yet I do ask one thing of them: to be realistic about the kind of disease it is. Despite popular rhetoric, addiction is not a “disease like any other.” It differs in at least two important ways. First, individuals suffering from addiction can respond to foreseeable consequences, while individuals with conventional diseases cannot. Second, this “disease” is driven by a powerful emotional logic.

    In 1988, Michael Botticelli, who would go on to become President Obama’s second drug czar over two decades later, was charged with drunk driving on the Massachusetts Turnpike. A judge gave him the choice of going to jail or participating in a treatment program. Botticelli made a decision: he went to a church basement for help, joined Alcoholics Anonymous, and quit drinking. Yet on CBS’ 60 Minutes he contradicted his own story when he drew an analogy between having cancer and being addicted. “We don’t expect people with cancer to stop having cancer,” he said. But the analogy is flawed. No amount of reward or punishment, technically called “contingency,” can alter the course of cancer. Imagine threatening to impose a penalty on a brain cancer victim if her vision or speech continued to worsen, or offering a million dollars if she could stay well. It would have no impact and it would be cruel. Or consider Alzheimer’s, which is a true brain disease (true insofar as the pathology originates in derangements of brain structure and physiology). If one held a gun to the head of a person addicted to alcohol and threatened to shoot her if she consumed another drink, or offered her a million dollars if she desisted, she could comply with this demand — and the odds are high that she would comply. In contrast, threatening to shoot an Alzheimer’s victim if her memory further deteriorated (or promising a reward if it improved) would be pointless.

    The classic example of the power of contingency is the experience of American soldiers in Vietnam. In the early 1970s, military physicians in Vietnam estimated that between 10 percent and 25 percent of enlisted Army men were addicted to the high-grade heroin and opium of Southeast Asia. Deaths from overdosing soared. Spurred by fears that newly discharged veterans would ignite an outbreak of heroin use in American cities, President Richard Nixon commanded the military to begin drug testing. In June 1971, the White House announced that no soldier would be allowed to board a plane home unless he passed a urine test. Those who failed could go to an Army-sponsored detoxification program before they were re-tested.

    The plan worked. Most GIs stopped using narcotics as word of the new directive spread, and most of the minority who were initially prevented from going home produced clean samples when given a second chance. Only 12 percent of the soldiers who were dependent on opiate narcotics in Vietnam became re-addicted to heroin at some point in the three years after their return to the United States. Whereas heroin helped soldiers endure wartime’s alternating bouts of boredom and terror, most were safe once they were stateside. At home, they had different obligations and available rewards, such as their families, jobs, friends, sports, and hobbies. Many GIs needed heroin to cool the hot anger they felt at being sent to fight for the losing side by commanders they did not respect. Once home, their rage subsided to some extent. Also, heroin use was no longer normalized as it was overseas. At home, heroin possession was a crime and the drug was harder and more dangerous to obtain. As civilian life took precedence, the allure of heroin faded.

    We know the value of “contingencies.” Hundreds of studies attest to the power of carrots and sticks in shaping the behavior of addicted individuals. Carl Hart, a neuroscientist at Columbia University, has shown that when people are given a good enough reason to refuse drugs, such as cash, they respond. He ran the following experiment: he recruited addicted individuals who had no particular interest in quitting, but who were willing to stay in a hospital research ward for two weeks for testing. Each day, Hart offered them a sample dose of either crack cocaine or methamphetamine, depending upon the drug they used regularly. Later in the day, the subjects were given a choice between the same amount of drugs, a voucher for $5 of store merchandise, or $5 cash. They collected their reward upon discharge two weeks later. The majority of subjects chose the $5 voucher or cash when offered small doses of the drug, but they chose the drug when they were offered a higher dose. Then Hart increased the value of the reward to $20, and his subjects chose the money every time.

    One of my patients, whom I will call Samantha, had been using OxyContin since 2011, when she was working in the kitchen at Little Caesar’s in downtown Ironton. The 20 mg pills belonged to her grandmother, whose breast cancer had spread to her spine. Samantha visited her grandma after work, watched TV with her, and went through the mail. She would also remove three or four pills per day from the massive bottle kept by the fancy hospital bed that Samantha’s brother had moved into the living room. When Samantha’s grandmother died in 2016, so did the pill supply. “I just couldn’t bring myself to do heroin, and, anyway, I had no money for drugs,” Samantha said.

    When the pills were almost gone, Samantha drove to an old friend’s house, hoping that the friend would give her a few Oxy’s in exchange for walking Snappy, her arthritic chihuahua. “My friend wasn’t home, but her creepy boyfriend Dave answered the door and told me he’d give me some Oxy’s if I gave him a blow job.” Samantha was feeling the warning signs of withdrawal — jitteriness, crampy stomach, sweaty underarms. Desperate to avoid full-blown withdrawal, she gave a minute’s thought to the proposition. “Then I felt revolted and I said no way and drove straight here because I knew I could start buprenorphine the same day,” she said.

    What of Samantha’s “hijacked” brain? When she stood before Dave, her brain was on fire. Her neurons were screaming for oxycodone. Yet in the midst of this neurochemical storm, at peak obsession with drugs, Samantha’s revulsion broke through, leading her to apply the “brakes” and come to our program. None of this means that giving up drugs is easy. But it does mean that an “addicted brain” is capable of making a decision to quit and of acting on it.

    On Tuesday nights, I co-ran group therapy with a wise social worker named John Hurley. In one group session, spurred by a patient sharing that he decided to come to treatment after spending some time in jail, the patients went around the room reciting what brought them to the clinic. Without exception, they said that they felt pressured by forces inside or outside themselves.

    “I couldn’t stand myself.”

    “My wife was going to leave me.”

    “My kids were taken away.”

    “My boss is giving me one more chance.”

    “I can’t bear to keep letting my kids down.”

    “I got Hep C.”

    “I didn’t want to violate my probation.”

    Ultimatums of this kind were often the best thing to happen to our patients. For other addicts, the looming consequences proved so powerful that they were able to quit without any professional help at all.

    The psychologist Gene Heyman at Boston College found that most people addicted to illegal drugs stopped using by about age thirty. John F. Kelly’s team at Massachusetts General Hospital found that forty-six percent of people grappling with drugs and alcohol had resolved their drug problems on their own. Carlos Blanco and his colleagues at Columbia University used a major national database to examine trends in prescription drug problems. Almost all individuals who abused or were addicted to prescription opioids also, at some point in their lives, had a mental disorder, an alcohol or drug problem, or both. Yet roughly half of them were in remission five years later. Given low rates of drug treatment, it is safe to say that the majority of remissions took place without professional help.

    These findings may seem surprising to, of all people, medical professionals. Yet it is well known to medical sociologists that physicians tend to succumb to the “clinicians’ illusion,” a habit of generalizing from the sickest subset of patients to the overall population of people with a diagnosable condition. This caveat applies across the medical spectrum. Not all people with diabetes, for example, have brittle blood sugars — but those who do will represent a disproportionate share of the endocrinologist’s caseload. A clinician might wrongly, if rationally, assume that most addicts behave like the recalcitrant ones who keep stumbling through the emergency room doors. Most do not. Granted, not everyone can stop an addiction on their own, but the very fact that it can be done underscores the reality of improvement powered by will alone: a pathway to recovery rarely available to those with conventional illness.

    The second major difference between addiction and garden-variety disease is that addiction is driven by powerful feelings. Ask an alcoholic why she drinks or an addict why he uses drugs and you might hear about the pacifying effect of whisky and heroin on daunting hardship, unremitting self-persecution, yawning emptiness, or harrowing memories. Ask a patient with Parkinson’s disease, a classic brain disease, why he developed the neurological disorder and you will get a blank stare. Parkinson’s is a condition that strikes, unbidden, at the central nervous system; the patient does not consciously collude in bringing it about. Excessive use of a drug, by contrast, serves some kind of need, an inner pain to be soothed, a rage to be suppressed. It is a response to some sort of suffering.

    Memoirs offer portals into the drama of addiction. One of my favorites is Straight Life, by the master alto saxophonist Art Pepper. Self-taught on the instrument by the age of thirteen, Pepper endured a childhood of psychological brutality at the hands of a sadistic alcoholic father, an icicle of a grandmother, and an alcoholic mother who was fourteen years old when he was born and who did not hide her numerous attempts to abort him. “To no avail,” he writes. “I was born. She lost.” What preoccupied him as a child was “wanting to be loved and trying to figure out why other people were loved and I wasn’t.” Pepper’s self-loathing bubbled like acid in his veins. “I’d talk to myself and say how rotten I was,” he wrote. “Why do people hate you? Why are you alone?” At twenty-three, after years of alcohol and pot, he sniffed his first line of heroin through a rolled-up dollar bill and the dark genie dissolved. He saw himself in the mirror. “I looked like an angel,” he marveled. “It was like looking into a whole universe of joy and happiness and contentment.”

    From that moment on, Pepper said, he would “trade misery for total happiness… I would be a junkie…I will die a junkie.” Indeed, he became a “lifelong dope addict of truly Satanic fuck-it-all grandeur,” in the words of his passionate admirer, the critic and scholar Terry Castle. He was in and out of prison on possession charges. Pepper lived without heroin for a number of years after attending Synanon, a drug-rehabilitation center in California, from 1969 to 1972, and was treated with methadone for a period in the mid-1970s. Eventually, though, he returned to drugs, mainly consuming massive amounts of amphetamines, and died from a stroke in 1982. He was 56.

    Addicts can appear to have everything: a good education, job prospects, people who love them, a nice home. They can be people who “are believed to have known no poverty except that of their own life-force,” to borrow the words of Joan Didion, and yet suffer greatly. The malaise is internal. Or they can be in dire circumstances, immiserated by their lives, moving through a dense miasma. “There was nothing for me here,” said one patient whose child was killed in a car accident, whose husband cheated on her, and who was trapped in her job as a maid in a rundown motel with an abusive boss. OxyContin made her “not care.” She reminded me of Lou Reed’s song “Heroin”:

    Wow, that heroin is in my blood
    And the blood is in my head
    Yeah, thank God that I’m good as dead
    Oooh, thank your God that I’m not aware
    And thank God that I just don’t care

    Pharmacologists have long classified opioid drugs as euphoriants, inducers of pleasure, often described as a feeling of a melting maternal embrace, but they could just as easily be called obliviants. According to the late Harvard psychiatrist Norman Zinberg, oblivion seekers yearned “to escape from lives that seem unbearable and hopeless.” Thomas De Quincey, in Confessions of an English Opium Eater, which appeared in 1821, praised opium for keeping him “aloof from the uproar of life.” Many centuries before him, Homer had likely referred to it in the Odyssey when he wrote that “no one who drank it deeply…could let a tear roll down his cheeks that day, not even if his mother should die, his father die, not even if right before his eyes some enemy brought down a brother or darling son with a sharp bronze blade.” When the Hollywood screenwriter Jerry Stahl surveyed his life in 1995 in his memoir Permanent Midnight, he concluded that “everything, bad or good, boils back to the decade on the needle, and the years before that imbibing everything from cocaine to Romilar, pot to percs, LSD to liquid meth and a pharmacy in between: a lifetime spent altering the single niggling fact that to be alive means being conscious.” Drugs helped him to attain “the soothing hiss of oblivion.”

    According to ancient myth, Morpheus, the god of dreams, slept in a cave strewn with poppy seeds. Through the cave flowed the river Lethe, known as the river of forgetfulness, also called the river of oblivion. The dead imbibed those waters to forget their mortal days. Unencumbered by memory, they floated free from the aching sadness and discomforts of life. The mythological dead share a kinship with opioid addicts, oblivion-seekers, and all their reality-manipulating cousins. The difference, mercifully, is that actual people can “un-drink” the numbing waters. Aletheia, truth, is a negation of lethe, the Greek word for forgetting. Recovery from addiction is a kind of unforgetting, an attempt to live in greater awareness and purpose, a disavowal of oblivion.

    Addiction is a cruel paradox. What starts out making life more tolerable can eventually make it ruinous. “A man may take to drink because he feels himself a failure,” said Orwell, “but then fail all the more completely because he drinks.” The balm is a poison. Drugs that ease the pain also end up prolonging it, bringing new excruciations — guilt and grief over damage to one’s self, one’s family, one’s future — and thus fresh reason to continue. The cycle of use keeps turning. Ambivalence is a hallmark of late-stage addiction. The philosopher Harry Frankfurt speaks of the “unwilling addict” who finds himself “hating” his addiction and “struggling desperately…against its thrust.” This desperate struggle is what Samuel Taylor Coleridge, himself an opium addict, called “a species of madness” in which the user is torn between his current, anguished self who seeks instant solace and a future self who longs for emancipation from drugs. This explains why treatment dropout rates are high — over half leave within six months, on average. The syringe of Damocles, as Jerry Stahl described the vulnerability to relapse, dangles always above their heads. Many do not even take advantage of treatment when it is offered, reluctant to give up their short-term salvation. They fear facing life “unmedicated” or cannot seem to find a reason for doing so. My friend Zach Rhoads, now a teacher in Burlington, Vermont, used heroin for five years beginning in his early twenties and struggled fiercely to quit. “I had to convince myself that such effort was worth the trouble,” he said.

    Thomas De Quincey consumed prodigious amounts of opium dissolved in alcohol and pronounced the drug a “panacea for all human woes.” For Anthony Bourdain, heroin and cocaine were panaceas, defenses against the dark genie that eventually rose up and strangled him to death in 2018. But not all addicts have a dark genie lurking inside them. Some seek a panacea for problems that crush them from the outside, tribulations of financial woes and family strain, crises of faith and purpose. In the modern opioid ordeal, these are Americans “dying of a broken heart,” in Bill Clinton’s fine words. “They’re the people that were raised to believe the American Dream would be theirs if they worked hard and their children will have a chance to do better — and their dreams were dashed disproportionally to the population as the whole.” He was gesturing toward whites between the ages of 45 and 54 who lack college degrees — a cohort whose life expectancy at birth had been falling since 1999. They succumbed to “deaths of despair,” a term coined by the economists Anne Case and Angus Deaton in 2015, brought on by suicide, alcoholism (specifically, liver disease), and drug overdoses. Overdoses account for the lion’s share. Falling wages and the loss of good jobs have “devastated the white working class,” the economists write, and “weakened the basic institutions of working-class life, including marriage, churchgoing, and community.”

    Looking far into the future, what so many of these low-income, undereducated whites see are dark horizons. When communal conditions are dire and drugs are easy to get, epidemics can blossom. I call this dark horizon addiction. Just as dark genie addiction is a symptom of an embattled soul, dark horizon addiction reflects communities or other concentrations of people whose prospects are dim and whose members feel doomed. In Ironton, clouds started to gather on the horizon in the late 1960s. Cracks appeared in the town’s economic foundation, setting off its slow but steady collapse.

    Epidemics of dark horizon addiction have appeared under all earthly skies at one time or another. The London gin “craze” of the first half of the eighteenth century, for example, was linked to poverty, social unrest, and overcrowding. According to the historian Jessica Warner, the average adult in 1700 drank slightly more than a third of a gallon of cheap spirits over the course of a year; by 1729 it was slightly more than 1.3 gallons per capita, and hit 2.2 gallons in 1743. A century later, consumption had declined, yet gin was still “a great vice in England,” according to Charles Dickens. “Until you improve the homes of the poor, or persuade a half-famished wretch not to seek relief in the temporary oblivion of his own misery,” he wrote in the 1830s, “gin-shops will increase in number and splendor.”

    During and after the American Civil War, thousands of men needed morphine and opium to bear the agony of physical wounds. In his Medical Essays, the physician Oliver Wendell Holmes, Sr., a harsh critic of medication, excepted opium as the one medicine “which the Creator himself seems to prescribe.” The applications of opium extended to medicating grief. “Anguished and hopeless wives and mothers, made so by the slaughter of those who were dearest to them, have found, many of them, temporary relief from their sufferings in opium,” Horace B. Day, an opium addict himself, recorded in The Opium Habit in 1868. In the South, the spiritual dislocation was especially profound, no doubt explaining, to a significant degree, why whites in the postbellum South had higher rates of opiate addiction than did those in the North — and also, notably, one reason why southern blacks had a lower rate of opiate addiction, according to the historian David T. Courtwright. “Confederate defeat was for most of them an occasion of rejoicing rather than profound depression.”

    A similar dynamic was seen when Russia’s long-standing problem with vodka exploded during the political instability and economic uncertainty of the post-Communist era. The majority of men drank up to five bottles a week in the early 1990s. Back home, heroin was a symptom of ghetto life for millions of impoverished and hopeless Hispanics and blacks in the 1960s and 70s, followed by crack among blacks in the mid-80s. The rapid decline of manufacturing jobs for inner city men, writes the historian David Farber in his recent book Crack, “helps explain the large market of poor people, disproportionately African Americans, who would find crack a balm for their troubled, insecure, and often desperate lives.”

    Children raised by dark horizon parents often bear a double burden. Not only do they suffer from growing up with defeated people in defeated places where opportunities are stunted and boredom is crushing. Often they are casualties of their parents’ and their grandparents’ addictions. One of my patients, Jennifer, described herself as a “third generation junky.” Patches of acne clung to her cheeks, making her look younger than thirty. Her maternal grandmother managed well enough with an ornery husband who drank too much on weekends until he lost his job at a local casting plant in the 1970s and became a full-fledged alcoholic, bitter, aimless, and abusive to his wife. The grandmother worked cleaning motel rooms and began staying out late, using pills and weed. Jennifer’s mother, Ann, was the youngest in a household that had devolved into havoc.

    When Ann was sixteen, Jennifer was born. Not one reliable adult was around. “No one really cared if I went to school,” Jennifer recalls. No one urged her to succeed or expressed confidence in her. “I learned that when something bothered you, you got high.” Her mother, Ann, was aloof, Jennifer said, except for the stretch when they were both in jail at the same time: she was 19, her mother was 42. “My mother was assigned to be the chaperone for my group of inmates,” Jennifer recalled. “She did my laundry and saved me extra food in jail. It was the only time she acted like a mom towards me.” Children raised in such homes are greatly disadvantaged. The absence of a steady protector in their lives often derails their developing capacity for tolerating frustration and disappointment, controlling impulses, and delaying gratification. They have difficulty trusting others and forming rewarding connections, and they often see themselves as damaged and worthless. When adults around them do not want to work regularly, children cannot imbibe the habits of routine, reliability, and dependability. At worst, the cycle repeats itself, inflicting wounds across generations and communities as their collective disenchantment with the future mounts. Sociologists call this “downward social drift.”

    The germ theory of addiction: that is my term for one of the popular if misbegotten narratives of how the opioid crisis started. It holds that the epidemic has been driven almost entirely by supply — a surfeit not of bacteria or viruses, but of pills. “Ask your doctor how prescription pills can lead to heroin abuse,” blared massive billboards from the Partnership for a Drug-Free New Jersey that I saw a few years ago. Around that time, senators proposed a bill that would have limited physician prescribing. “Opioid addiction and abuse is commonly happening to those being treated for acute pain, such as a broken bone or wisdom tooth extraction,” is how they justified the legislation.

    Not so. The majority of prescription pill casualties were never patients in pain who had been prescribed medication by their physicians. Instead, they were mostly individuals who were already involved with drugs or alcohol. Yes, some actual patients did develop pill problems, but generally they had a history of drug or alcohol abuse or were suffering from concurrent psychiatric problems or emotional distress. It is also true, of course, that drug marketers were too aggressive at times and that too many physicians overprescribed, sometimes out of inexperience, other times out of convenience, and in some cases out of greed.

    As extra pills began accumulating in rivulets, merging with pills obtained from pharmacy robberies, doctor shopping, and prescription forgeries, a river of analgesia ran through various communities. But even with an ample supply, you cannot “catch” addiction. There must be demand — not for addiction, per se, but for its vehicle. My year in Ironton showed me that the deep story of drug epidemics goes well beyond public health and medicine. Those disciplines, while essential to management, will not help us to understand why particular people and places succumb. It is the life stories of individuals and, in the case of epidemics, the life story of places, that reveal the origins. Addiction is a variety of human experience, and it must be studied with all the many methods and approaches with which we study human experience.

    Dark genies can be exorcised and dark horizons can be brightened. It is arduous work, but unless we recognize all the reasons for its difficulty, unless we reckon with the ambiguity and the elusiveness and the multiplicity of addiction’s causes, unless we come to understand why addicts go to such lengths to continue maiming themselves with drugs — compelled by dark genies, dark horizons, or both — their odds of lasting recovery are slim, as are the odds of preventing and reversing drug crises. The complexity of addiction is nothing other than the complexity of life.

    America in the World: Sheltering in Place

    I

    In the third week of America’s quarantine against the pandemic, a new think tank in Washington had a message for the Pentagon. “The national security state, created to keep us safe and guard our freedoms, has failed,” Andrew Bacevich, the president of the Quincy Institute for Responsible Statecraft, told viewers on a Skype video from home, interspersed with the sounds of sirens and images of emergency rooms. While microbes from China were mutating and coming to kill us, he preached, we were wasting our time hunting terrorists and projecting military power abroad. It was a non sequitur in search of a point — as if America ever faces only one danger at a time. When the Black Plague struck Europe and Asia in the fourteenth century, it did not mean that Mongol hordes would no longer threaten their cities. Nor does the coronavirus mean that jihadists are not plotting terror or that Russia is not threatening its neighbors or that China is not devouring Hong Kong.

    His casuistry aside, Bacevich was playing to the resentments of Americans who sincerely believe that American foreign policy is driven by an addiction to war. For the first two decades of post-cold war politics, this argument was relegated to the hallucinations of the fringe. But no more. A new national consensus had started to form before the plague of 2020: that there are almost no legitimate uses for American military power abroad, that our wars have been “endless wars,” and that our “endless wars” must promptly be ended. On the subject of American interventionism, there is no polarization in this notoriously polarized country. There is a broad consensus, and it is that we should stay out and far away.

    The concept of “endless wars” has its roots in the middle of the twentieth century. Most famously, in his novel 1984, George Orwell depicted a totalitarian state that invents its own history to justify perpetual war between the superpowers and to keep its citizens in a state of nationalist fervor. In American political discourse, the concept of a war without end was baked into the influential notion of “the manufacture of consent,” a notion manufactured by Noam Chomsky, according to which the media teaches the American people to support or acquiesce in the nefarious activities of the military-industrial complex. But the “endless wars” that so many Americans wish to end today are not like the ones that Orwell imagined. Today Americans seek to end the war on terror, which in practice means beating back insurgencies and killing terrorist leaders in large swaths of the Islamic world. Orwell’s wars were endless because none of the world’s states possessed the power to win them. The war on terror, by contrast, endures because of a persistent threat to Western security and because weaker states would collapse if American forces left. The war on terror pits the American Gulliver against fanatical bands of Lilliputians. But the asymmetry of military power does not change the magnitude — or the reality — of the carnage that “stateless actors” can wreak.

    To get a feel for the new consensus on American quietism, consider some of the pre-pandemic politics surrounding the war in Afghanistan. In a debate during the presidential primaries, Elizabeth Warren insisted that “the problems in Afghanistan are not problems that can be solved by a military.” Her Democratic rivals on the stage agreed, including Joe Biden. This is also Donald Trump’s position. As Warren was proclaiming the futility of fighting for Afghanistan’s elected government, the Trump administration was negotiating that government’s betrayal with the Taliban. (And the Taliban was ramping up its violence while we were negotiating with it.) Before the coronavirus crisis, the Trump administration was spending a lot of its political capital on trying to convince skeptical Republican hawks that the planned American withdrawal would not turn Afghanistan into a haven for terrorists again, which of course is nonsense.

    The emerging unanimity about an escape from Afghanistan reflects a wider strategic and historical exhaustion. Despite the many profound differences between Trump and Obama, both presidents have tried to pivot away from the Middle East to focus on competition with China. (Obama never quite made the pivot.) Both presidents have also mused publicly about how NATO allies are “free riders” on America’s strength. And both presidents have shown no patience with the use of American military force. In 2012, even as the world was once again becoming a ferociously Hobbesian place, the Obama administration’s national defense strategy dropped the longstanding post-cold war goal of being able to win two wars in different geographical regions at once. (The Obama Pentagon seemed to think that land wars are a thing of the past and that we can henceforth make do with drones and SEALs.) Trump’s first defense strategy in 2018 affirmed the Obama formulation.

    Moreover, a majority of Americans agreed with their political leaders. A Pew Research poll in 2019 found that around sixty percent of all Americans did not believe it was worth fighting in Iraq, Syria, or Afghanistan. That percentage is even higher among military veterans. Indeed, Pew Research polling since 2013 has found that more Americans than not believe that their country should stay out of world affairs. Hal Brands and Charles Edel, in their fine book The Lessons of Tragedy, point out that in the late 2010s majorities of Americans still agreed that America should possess the world’s most powerful military, still supported alliances, and still favored free trade, but they conclude that many Americans are now resistant to the “sacrifices and trade-offs necessary to preserve the country’s post-war achievements.”

    All of that was before COVID-19 forced most of the country to “shelter in place.” In truth, sheltering in place has been the goal of our foreign and national security policy for most of a decade. And it will be much harder to justify a continued American presence in the Middle East, West Asia, Africa, and even the Pacific after Congress borrowed trillions of necessary dollars for paycheck protection and emergency small business loans. In addition to all of the older muddled arguments for retreat, there will now be a strong economic case that the republic can no longer afford its overseas commitments, as if foreign policy and national security were ultimately about money. In other words, there are strong indications that the republic is undergoing a profound revision of its role in leading and anchoring the international order that it erected after World War II. The days of value-driven foreign policy, of military intervention on humanitarian grounds, and even of grand strategy may be over. Should every terror haven, every failed state, every aggression against weak states, and every genocide be America’s responsibility to prevent? Of course not. But should none of them be? America increasingly seems to think so. We are witnessing the birth of American unexceptionalism, otherwise known as “responsible statecraft.”

     

    II

    At the end of the cold war, the spread of liberal democracy seemed inevitable. The Soviet Union had collapsed, and with it the communist governments of the Eastern European countries it dominated. China had momentously made room for a market in its communist system, a strange state-sponsored capitalism that brought hundreds of millions of people out of subsistence poverty. In the West, juntas and strongmen toppled and elected governments replaced them. In every region except for the Middle East and much of Africa, the open society was on the march.

    One of the first attempts to describe the thrilling new moment was a famous, and now infamous, essay by Francis Fukuyama. In 1989, in “The End of History?,” he surveyed a generation that saw the collapse of pro-American strongmen from Spain to Chile along with the convulsions behind the Iron Curtain and concluded that the triumph of liberalism was inevitable. (He has since revised his view, which is just as well.) His ideas provided the intellectual motifs for a new era of American hegemony. “The triumph of the West, and the Western idea, is evident first of all in the total exhaustion of viable systematic alternatives to western liberalism,” Fukuyama wrote. What he meant, in his arch Hegelian way, was that the age of ideological conflict between states was over. History was teleological and it had attained its telos. Fukuyama envisioned a new era in which great power wars would be obsolete. He did not predict the end to all war, but he did predict that big wars over competing ideologies would be replaced by a more mundane and halcyon kind of competition. The principled struggles of history, he taught, “will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”

    Fukuyama’s predictions were exhilarating in 1989 because the consensus among most intellectuals during the Cold War had been that the Soviet Union was here to stay. Early theorists of totalitarianism such as Hannah Arendt and Carl Friedrich had portrayed the Soviet state as an unprecedented and impermeable juggernaut that was terrifyingly strong and durable. In Orwell’s dystopia, the dissident Emmanuel Goldstein resisted Big Brother but was never a real threat to the state. In the Brezhnev era, analysts of the Soviet Union began to notice that the juggernaut was crumbling from within and had lost the ideological allegiance of its citizens, even as its military and diplomatic adventures beyond its borders continued. Building on this increasingly realistic understanding of the failures of the communist state, Fukuyama observed that totalitarian systems were overstretched and brittle. The West could exhale.

    Not everyone agreed. Samuel Huntington argued that conflict between great powers would remain because identity, not ideology, is what drives states to make war. While it was true that communism was weakening after the collapse of the Soviet Union, other illiberal forces such as religious fundamentalism and nationalism remained a threat to the American-led liberal world order. The hope that China or Iran could be persuaded to open their societies by appealing to prosperity and peace ignored that most nations were motivated not by ideals, but by a shared sense of history and culture. Leon Wieseltier similarly objected that the end of the Soviet Union and its empire would release ethnic and religious and tribal savageries, old animosities that were falsely regarded as antiquated. He also observed that the concept of an “end of history” was borrowed from the very sort of totalitarian mentality whose days Fukuyama believed were over. The worst fiends of the twentieth century justified their atrocities through appeals to history’s final phase; the zeal required for their enormous barbarities relied in part on a faith that these crimes were advancing the inevitable march of history. For Wieseltier, there is no final phase and no inevitable march, and the liberal struggle is endless. “To believe in the end of history,” he wrote, “you must believe in the end of human nature, or at least of its gift for evil.”

    As international relations theories go, “The End of History” was like a medical study that found that ice cream reduced the risk of cancer. Fukuyama’s optimistic historicism instructed that the easiest choice for Western leaders was also the wisest. Why devise a strategy to contain or confront Russia if it was on a glide path to democratic reform? Why resist American industrial flight to China if that investment would ultimately tame the communist regime and tempt it to embrace liberalism?

    Every president until Trump believed that it was possible to lure China and Russia into the liberal international order and attempted to do so. Instead of preparing for a great power rivalry, American foreign policy sought to integrate China and Russia into global institutions that would restrain them. Bill Clinton and George W. Bush expanded NATO, but they also invited Russia into the Group of 7 industrialized nations. Clinton, Bush, and Obama — the latter liked to invoke “the rules of the road” — encouraged Chinese-American economic interdependence. Until Obama’s second term, the United States did next to nothing to stop China’s massive theft of intellectual property. Until June 2020, Chinese corporations could trade freely on U.S. stock exchanges without submitting to the basic accounting rules required of American companies. The assumption behind these Panglossian views of China and Russia was that democratic capitalism was irresistible and the end of communism marked the beginning of a new era of good feelings. (Communism never ended in China, of course.) And it was certainly true that trade with China benefitted both economies: Chinese and American corporations prospered and American consumers enjoyed cheaper consumer goods.

    This is not to say that there were no bouts of dissent. In his presidential campaign in 1992, Bill Clinton attacked George H. W. Bush for his capitulation to China after the uprising at Tiananmen Square. And even though Clinton did not alter the elder Bush’s approach to China during his presidency, there was a lively debate about China’s human rights abuses in the 1990s. Clinton expanded NATO, something the elder Bush opposed, but he and later George W. Bush and Barack Obama did little to push back against Russia’s own regional adventures and aggressive behavior. Consider that no serious U.S. war plan for Europe was developed between the end of the Cold War and 2014, the same year that Russia invaded Ukraine and eventually annexed Crimea, and five years after Russia invaded and occupied the Georgian provinces of South Ossetia and Abkhazia. We preferred to look away from Russia’s forward movements — with his cravenness about Syria, Obama actually opened the vacuum that Russia was happy to fill — just as we preferred to look away from the growing evidence of China’s strategic ambitions and human-rights outrages. We were reluctant to lose those good feelings so soon after we acquired them.

    None of this meant that American presidents would not use force or wage war after the collapse of the Soviet Union. They did. But they did not engage in great power wars. The first Bush saved Kuwait from Saddam Hussein and saved Panama from the lesser threat of Manuel Noriega. Clinton intervened in the Balkans to stop a genocide and launched limited air strikes in the Middle East and Afghanistan. In the aftermath of September 11, George W. Bush waged a war on terror and toppled the tyrannies that held Iraq and Afghanistan captive. Obama intervened reluctantly and modestly and ineffectively in Libya; he withdrew troops from Iraq only to send some of them back; and he presided over a “surge” in Afghanistan, even though its announcement was accompanied by a timetable for withdrawal. Trump has launched no new wars, but he has killed Iran’s most important general and the architect of its campaign for regional hegemony, and he has launched strikes on Syrian regime targets in response to its use of chemical weapons, though his strikes have not added up to a consistent policy. But even as optimism about world order has become less easy to maintain, even as the world grows more perilous in old and new ways, the American mood of retirement, the inclination to withdrawal, has persisted. Fukuyama, who acknowledged that the threat of terrorism would have to be met with force, has remarked that our task is not “to answer exhaustively the challenges to liberalism promoted by every crackpot messiah around the world.” But what about the genocides perpetrated by a crackpot messiah (or a rational autocrat)? And what about answering great power rivals? At the time, to be sure, we had no great power rivals. We were living in the fool’s paradise of a “unipolar” world.

    *

    Bill Clinton came to the presidency from Little Rock without a clear disposition on the use of military force. He was at times wary of it. He pulled American forces out of Somalia after a militia downed two American helicopters. In his first term he dithered on the Balkan wars and their atrocities, favoring a negotiation with Serbia’s strongman Slobodan Milosevic. He did nothing to stop Rwanda’s Hutu majority from slaughtering nearly a million Tutsis for three months in the spring and summer of 1994. He was more focused than any of his predecessors or successors on brokering a peace between Israelis and Palestinians. Over time, of course, he evolved, but how the world suffers for the learning curve of American presidents! Clinton punished Saddam Hussein’s defiance of U.N. weapons inspectors. He bombed suspected al Qaeda targets in Sudan and Afghanistan after the bombings of American embassies in Africa in 1998. He prevented Milosevic from cleansing Kosovo of Albanians and helped push back Serb forces from Bosnia.

    Clinton was a reluctant sheriff, to borrow Richard Haass’ phrase. In his first term he was unsure about using American force abroad. By the end of his second term, he had come to terms with the responsibilities of American power. “The question we must ask is, what are the consequences to our security of letting conflicts fester and spread?” Clinton asked in a speech in 1999. “We cannot, indeed, we should not, do everything or be everywhere. But where our values and our interests are at stake, and where we can make a difference, we must be prepared to do so.” He was talking about transnational threats and rogue states. This meant using military power to prevent the proliferation of weapons of mass destruction and to deter terrorists. In his second term, Clinton also took a keen interest in biological weapons and pandemics. As Madeleine Albright, Clinton’s second secretary of state, memorably put it, America was the world’s “indispensable nation.”

    Yet Clinton’s activism did not extend to Russia or China. He helped to expand the NATO alliance, but also secured debt forgiveness for the Russian federation and used his personal relationship with Russian president Boris Yeltsin to reassure him that NATO’s expansion was no threat to Moscow. Clinton also reversed his campaign promise on China and granted it most favored nation status as a trading partner, paving the way for the economic interdependence that Trump may be in the process of unraveling today. At the time, Clinton explained that “this decision offers us the best opportunity to lay the basis for long-term sustainable progress on human rights and for the advancement of our other interests with China.” This reflected the optimism of 1989-1991. What other model did China have to emulate, but our own? Allow it to prosper and over time it will reform.

    When Clinton left office, the consensus among his party’s elites was that his foreign policy mistakes were errors of inaction and restraint. Clinton did nothing to prevent the genocide in Rwanda. He waited too long to intervene in the Balkans. It seemed that Americans had gotten over their inordinate fear of interventions. Why had it taken Clinton so long? There was an activist mood in Washington before the attacks of September 11. And after hijacked commercial planes were turned into precision missiles and the towers fell, the sense that America needed to do more with its power intensified.

    *

    In the Bush years, American foreign policy fell first into the hands of neoconservatives. For their critics, they were a cabal of swaggering interlopers who twisted intelligence products and deceived a dim president into launching a disastrous war. In fact they were a group of liberals who migrated to the right and brought with them an intellectual framework and appreciation for social science that was absent from the modern conservative movement. In foreign policy they dreaded signs of American weakness or retreat, and in 1972 supported Scoop Jackson against George McGovern in the Democratic primaries. As that decade progressed, the wary and disenchanted liberals migrated to the former Democrat Ronald Reagan. In Reagan, they found a president who despised Soviet communism as much as they did.

    In the 1990s, a new generation of neocons wanted to seize the opportunity of American primacy in the world after the Soviet Union’s collapse. As Irving Kristol observed, “With power come responsibilities, whether sought or not, whether welcome or not. And it is a fact that if you have the kind of power we now have, either you will find opportunities to use it, or the world will discover them for you.” In that spirit, the neoconservatives of the 1990s advocated an activist foreign policy. They argued that the United States should help to destabilize tyrannies and support democratic opposition movements. They were not content with letting history take its course; they wanted to push it along in the direction of freedom. Their enthusiasm for an American policy of democratization was based on both moral arguments and strategic arguments.

    The focus in this period was Iraq. Neoconservatives had rallied around legislation known as the Iraq Liberation Act, which would commit the American government to train and to equip a coalition of Iraqi opposition groups represented in the United States by Ahmad Chalabi, a wealthy Iraqi political figure who was trained as a mathematician in the United States. For the first half of the 1990s, the CIA funded Chalabi’s Iraqi National Congress, but he had a falling out with the agency. The Iraq Liberation Act was a way to save the opposition group by replacing a once covert intelligence program with one debated openly in Congress. It should be noted that Chalabi’s initial plan was not to convince America to invade Iraq, but to secure American training and equipment to build a rebel army composed of Iraqis to topple Saddam Hussein. Clinton signed the legislation in 1998, but his government never fully implemented it.

    Ironically, George W. Bush ran his campaign in 2000 on the promise of a humble foreign policy. Condoleezza Rice memorably declared at the Republican convention that America cannot be the world’s 911. Not long afterward, 9/11 was the event that forced Bush to renege on his promise. Three days after that attack, Congress voted to authorize what we know today as the war on terror: the “endless wars” had begun. Over the last nineteen years, that authorization has justified a global war against a wide range of targets. Bush used it as the legal basis for strikes on terrorists in south Asia. Obama used it to justify his military campaign against the Islamic State, even though it was a battlefield enemy of al Qaeda’s Syrian branch. And while every few years some members of Congress have proposed changes to the authorization, these efforts have yet to succeed. Today many progressives believe the war on terror deformed America into an evil empire, patrolling the skies of the Muslim world with deadly drones, blowing up wedding parties in Afghanistan, torturing suspected terrorists, and aligning with brutal thugs. Even Obama has not escaped this judgment. Some of these are fair criticisms. The war on terror was indeed a war. Innocent people died. At the same time, the other side of the ledger must be counted. Since 9/11, there have been no mass-casualty attacks by foreign terrorists inside our borders. On its own terms, from the rather significant standpoint of American security, this “endless war” has produced results.

    In the first years of the war on terror, the pacifist left had little influence over the national debate. A better barometer of the country’s mood was a column, published a month before the Iraq War, by Charles Krauthammer. He denounced what he said was Clinton’s “vacation from history,” and asked whether “the civilized part of humanity [will] disarm the barbarians who would use the ultimate knowledge for the ultimate destruction.” Those words, and many others like them, helped to frame the rationale for the American invasion of Iraq. Note that Krauthammer did not write that Clinton’s vacation from history was his failure to prepare for China’s rise and Russia’s decline. It was his failure to prevent the arming of smaller rogue states and terrorist groups. Krauthammer was still living in Fukuyama’s world. And so was Bush. In his first term, Bush not only failed to challenge Russia or China, he sought to make them partners in his new global war. Bush famously remarked that he had looked into the eyes of Vladimir Putin and found a man he could trust. (“I was able to get a sense of his soul.”) Bush’s government would also designate a Uighur separatist organization as a terrorist group, giving cover to China’s persecution of that minority. The world learned in 2018 that China had erected a new Gulag in the country’s west that now imprisons at least a million Uighurs.

    China and Russia did not support Bush’s Iraq war. Many Democrats did. In 2002, a slim majority of Democrats in the House opposed a resolution to authorize it, but in the Senate, 29 out of 50 Democrats voted for it. Most significant, every Democrat with presidential aspirations — from Hillary Clinton to Joe Biden — voted for the war, a vote for which they would later apologize. At the time of that vote, the ambitious Democrats who supported it did not know that opposition to that war would define their party for years to come. Neither did the establishment Democrats who opposed it. Al Gore, speaking at the Commonwealth Club of San Francisco, explained his opposition to the war: “If we go in there and dismantle them — and they deserve to be dismantled — but then we wash our hands of it and walk away and leave it in a situation of chaos, and say, ‘That’s for y’all to decide how to put things back together now,’ that hurts us.” Gore was not concerned that America might break Iraq; he was acknowledging that it was already broken. Nor was he worried about an “exit strategy.” He worried that if America went to war in Iraq under a Republican president, the war might not be endless enough. America might leave too soon.

    The Iraq war was also opposed by a group of international relations theorists who advocated for what is known as foreign policy realism. Unlike Fukuyama, the realists do not think it matters how a state chooses to organize itself. All states, according to the realists, pursue their own survival, or their national interest. Thirty-three prominent realists purchased an advertisement in the New York Times in 2002 urging Bush not to invade Iraq. They argued that the coming war would distract America from the campaign against al Qaeda and leave it in charge of a failed state with no good options to leave. It is worth noting that neither the pacifist left nor the foreign policy realists argued before the war that Saddam Hussein had no weapons of mass destruction, the liquidation of which was Bush’s justification for the war. Both camps warned instead that an American invasion of Iraq could prompt the tyrant to use the chemical and biological weapons that everyone agreed he was concealing. As the professors wrote in their open letter, “The United States would win a war against Iraq, but Iraq has military options — chemical and biological weapons, urban combat — that might impose significant costs on the invading forces and neighboring states.” The argument was that removing Saddam Hussein would further destabilize the Middle East.

    Over the course of 2003, it became clear that the casus belli for Operation Iraqi Freedom — Saddam’s refusal to come clean on his regime’s weapons of mass destruction — was wrong. The teams of American weapons inspectors sent into the country could not find the stockpiles of chemical weapons or the mobile bio-weapons labs. The Bush administration sought to portray this error as an intelligence failure, which was largely correct. And so the war’s unanticipated consequences, some of them the result of American error, eclipsed the fact that Iraqis had drafted a constitution and were voting for their leaders. In America, a great popular anger began to form, not only against the Iraq war but more generally against American interventionism. The Democrats became increasingly eager to take political advantage of it. Talk of American hubris proliferated. Progressives were growing wary of the institutions of national security, particularly the intelligence agencies.

    Republicans under Bush were also divided between an embrace of the president’s own idealism to make Iraq a democracy and the unsentimental realism of his vice president, who darkly warned after 9/11 that the war against terror would have to be fought in the shadows. Bush’s own policies were inconsistent. Sometimes he pressured dictator allies to make democratic reforms, but he also empowered those same dictators to wage war against jihadists with no mercy. Bush supported Palestinian legislative elections that resulted in empowering Hamas in Gaza. (That was in 2006, the last time Palestinians voted for their leaders.) By the end of Bush’s second term, however, great power competition had re-emerged. While America was preoccupied with the Muslim world, Russia invaded the former Soviet Republic of Georgia. Bush did what he could. He sent humanitarian supplies to Tbilisi packed on U.S. military aircraft. He tried to rally allies to support a partial ban on weapons sales to Moscow. But Russia had the good fortune of timing its aggression just as the world’s financial markets collapsed. It was also lucky that the next American president would be Barack Obama.

    *

    Barack Obama had been a state senator in Illinois during the run-up to the Iraq War, when his primary rival, Hillary Clinton, was a U.S. senator. She voted for the war. He gave a speech opposing it. At the time of the election, in a political party incensed by the Iraq war, Obama’s speech in Chicago in 2002 functioned as a shield: he may have lacked Clinton’s experience, but at least he did not support Bush’s war. Back in 2002, though, Obama’s speech was barely noticed. The Chicago Tribune news story led with Jesse Jackson’s speech and made no mention of the ambitious state senator. When Obama was at the lectern, he struck two distinct themes. First, he wanted the protestors to know that he, too, understood the evil of neoconservatism. “What I am opposed to is the cynical attempt by Richard Perle and Paul Wolfowitz and other armchair, weekend warriors in this administration to shove their own ideological agendas down our throats,” he said. At the same time, Obama rejected the apologies for tyrants common on the hard left. Of Saddam, he said, “He is a brutal man. A ruthless man. A man who butchers his own people to secure his own power.” But the young Obama did not think that Saddam threatened American interests. Echoing Fukuyama’s optimism, he declared that “in concert with the international community he can be contained until, in the way of all petty dictators, he falls away into the dustbin of history.”

    Obama’s patience with history, with its dustbins and its arcs, turned out to be, well, endless. His Chicago speech should have been a warning for the left wing of the Democratic Party that over time it would be disappointed by his presidency. As Obama said, he was not against war. (The tough-minded Niebuhrian speech that he delivered in Oslo when he accepted his ridiculous Nobel Prize underscored his awareness of evil in the world.) He was merely against dumb wars — or as he later put it, “stupid shit.” He had come into office when the world was growing more dangerous, and he chose to respond to these dangers with careful and scholarly vacillations. He wanted the American people to know that he was thoughtful. The most salient characteristics of his foreign policy were timidity and incoherence, and a preference for language over action.

    Thus, Obama withdrew American forces from Iraq in 2011, only to send special operators back to Iraq in 2014, after the Islamic State captured the country’s second-largest city. He “surged” forces in Afghanistan in his first term, but fired the general he chose to lead them, and spent most of his administration trying, and failing, to withdraw them. He spoke eloquently about the disgrace of Guantanamo, but never closed it. He declassified a series of Justice Department memos that made specious legal arguments to allow the CIA to torture detainees, but his Justice Department never prosecuted the officials responsible, as many in his base wanted. He sided with peaceful protestors in Egypt in 2011 at the dawn of the Arab Spring and urged Hosni Mubarak to step down, but after Egypt elected an Islamist president and the military toppled him in a coup thirteen months later, Obama declined to impose sanctions. He did manage to reach a narrow deal with Iran to diminish, but not demolish, its nuclear weapons program. By this time Iran was on a rampage in the Middle East, and the windfall that its economy received from the nuclear bargain would be reinvested in its own proxy wars in Syria, Iraq, and Yemen. The deal alienated America’s traditional allies in the Middle East and brought Israel closer to its Arab rivals.

    The most spectacular failure of Obama’s foreign policy, of course, was Syria. After the Arab Spring, Syrians demanded the same democratic freedoms that they saw blooming in Tunisia and briefly in Egypt. Obama supported them, at first. But the tyrant was watching: Bashar al-Assad had learned from what he considered the mistakes of Mubarak and Ben Ali. Assad was also fortunate that his patrons were Russia and Iran, who also lived in fear of popular uprisings. So began the Syrian civil war that to this day rages on. That war has flooded Europe and Turkey with refugees, with dire political consequences, and threatened for a few years in the middle of the 2010s to erase the borders established after World War I for the Middle East.

    It is not the case that Obama did absolutely nothing to support the Syrian opposition. In 2012, he approved a covert program known as Timber Sycamore, in which the CIA endeavored to build up an army of “moderate rebels” against Assad. The plan was always flawed. Obama did not want American forces to fight inside Syria and risk an open clash with Iranian and Russian forces who were on the side of the Assad regime. (Obama was reluctant to offend the Russians and he was actively seeking détente with the Iranians.) America clung to its passivity as Syria’s civil war and Iraq’s embrace of Shiite majoritarian rule created the conditions for the emergence of the Islamic State. A few years later, Obama authorized a Pentagon program to arm and support a largely Kurdish army fighting the Islamic State. With the help of American air power, the Kurds and U.S. special forces eventually smashed the “caliphate” during Trump’s first term in office.

    Artlessly and in accord with his principles, Obama painted himself into a corner. He called on Assad to leave, but he never used American power to assist with that mission. Obama also warned of consequences if Assad used chemical weapons, which he called a “red line.” In 2013, when Assad crossed this line, Obama threatened air strikes against Assad’s regime. The moment of truth — about Syria, about American interventionism — had arrived. Obama punted. He gave a bizarre speech in which he asserted that he had the constitutional prerogative to strike Syria without a resolution from Congress but was asking Congress to authorize the attack anyway. In his swooning memoir of the Obama White House, Ben Rhodes recalls that the president told him, “The thing is, if we lose this vote it will drive a stake through the heart of neoconservatism — everyone will see they have no votes.” Never mind the heart of Bashar al Assad! Rhodes continues: “I realized then that he was comfortable with either outcome. If we won authorization, he’d be in a strong position to act in Syria. If we didn’t, then we would potentially end the cycle of American wars of regime change in the Middle East.”

    The episode broaches the early roots of the bipartisan consensus against “endless war.” When the resolution came up for a vote, it barely got out of the Senate Foreign Relations Committee. As the Senate debated, Republican hardliners began to wobble. “Military action, taken simply to save face, is not a wise use of force,” said Senator Marco Rubio. “My advice is to either lay out a comprehensive plan using all of the tools at our disposal that stands a reasonable chance of allowing the moderate opposition to remove Assad and replace him with a stable secular government. Or, at this point, simply focus our resources on helping our allies in the region protect themselves from the threat they and we will increasingly face from an unstable Syria.” In other words, Rubio would not support a modest air strike to impose some costs on a breach of an important international norm because it did not go far enough. The result of this twisted reasoning, and of the failure of the resolution, was the emboldening of Assad. Finally, at the last minute, Obama was saved by Assad’s most important patron. Russian foreign minister Sergei Lavrov and Secretary of State John Kerry quickly patched together a plan whereby Syria, for the first time, would declare its chemical weapons stockpiles and allow international inspectors to get them out of the country. Over time, the deal proved worthless. Assad would gas his people again and again, eroding what was once a powerful prohibition on the use of chemical weapons in the twenty-first century. But if the deal did nothing to end the misery of Syria, it did a lot to end the misery of Obama. In 2013, Obama portrayed the bargain as a triumph of diplomacy, which it was — for Putin.

    One of the first foreign policy priorities for Obama after his election was to mend relations with Moscow. This was called the “reset.” Obama was most exercised by transnational threats: climate change, arms control, fighting terrorism, Ebola. He wanted Russia to be a partner. And Russia wanted recognition that it was still a great power.

    After Obama folded on his “red line” in Syria, Putin made his move. Russian forces invaded Ukraine in 2014 to stop a democratic revolution and eventually annexed Crimea. Obama imposed a series of economic sanctions on Russian industries and senior officials, but he declined to arm Ukraine’s government or consider any kind of military response. (He worried more about escalation than injustice.) His administration’s advice to Kiev was to avoid escalation. The following year Obama did not challenge Russia when it established airbases inside Syria. He still needed the Russians for the Iran nuclear deal. In 2016, when the U.S. intelligence community was gathering evidence that Russians were hacking the Democratic National Committee and Hillary Clinton’s campaign, Obama’s White House waited until after the election to punish Moscow. Three weeks before the next president would take the oath of office, Obama announced the expulsion of thirty-five spies and modest sanctions on Russia’s intelligence services. It was a fine example of “responsible statecraft.”

    *

    The thoughtful incoherence of Barack Obama was succeeded by the guttural anarchy of Donald Trump. It was nearly impossible to discern from Trump’s campaign what his actual foreign policy would be if he won. His ignorance of international affairs was near total. He simultaneously pledged to pull America out of the Middle East and to bomb ISIS indiscriminately. He could sound like Michael Moore one minute, thundering that George W. Bush lied America into the Iraq War, and in the next minute like a Stephen Colbert imitation of a right-wing neanderthal, claiming that Mexico was deliberately sending its rapists into our country. And yet there was a theme in Trump’s hectoring confusion. He hearkened back to a very old strain of American politics. One could see it in his slogan “America First,” a throwback to the isolationism of Charles Lindbergh in the 1930s. When Trump asked mockingly what America was getting from its interventions in the Middle East or the protection its troops provided Europe through the NATO alliance, he was unknowingly channeling Senator Robert Taft and his opposition to the Marshall Plan. Past presidents, Republicans and Democrats, understood that the small upfront cost of stationing troops overseas in places such as Korea or Bahrain paid much greater dividends by deterring rivals and maintaining stability. Military and economic aid was a small price to pay for trade routes and open markets. But Trump rejected all of this.

    As president, Trump has not conducted an altogether catastrophic foreign policy. (That is faint praise, I know.) He has used force in constructive flashes, such as the drone strike that killed Qassem Suleimani or the air strikes against Syrian landing strips after the regime gassed civilians. He never pulled America out of NATO as he said he would, though he declined to say publicly that America would honor the mutual defense commitments in the treaty’s charter. He pulled out of Obama’s nuclear deal with Iran, a deal whose merits were always a matter of controversy. He began to reverse the spending caps imposed during Obama’s presidency on the Pentagon’s budget. On China, the Trump administration has begun aggressively to target Beijing’s thievery and espionage and takeover of international institutions.

    Most consistently, Trump’s foreign policy has been marked by an amoral transactionalism. Modern presidents of both parties have made bargains with tyrants, but they did so sheepishly, and often they appended talk of human rights to their strategic accommodations. Trump was different. He went out of his way to pay rhetorical tribute to despots and authoritarians who flattered him — Kim Jong Un, Vladimir Putin, Xi Jinping, Viktor Orban, Jair Bolsonaro. When Trump’s presidency began, senior advisers such as General James Mattis and General H.R. McMaster tried to soften, and at times to undermine, his appetite to renounce American leadership in the world. McMaster made the president sit through a PowerPoint presentation about life in Afghanistan before the Taliban to persuade him of the need for a small military surge there. After Trump abruptly announced the withdrawal of the small number of American forces in Syria, his advisers persuaded him that some should stay in order to protect the oil fields. And so it went until most of the first cabinet was pushed out in 2018 and 2019. The new team was more malleable to Trump’s instincts. Trump’s new secretary of state, Mike Pompeo, empowered an envoy to negotiate an American withdrawal from Afghanistan with the Taliban, without including the Afghan government, our ally, in the talks. Instead of undermining Trump’s push to leave the Iran nuclear deal, as James Mattis and Rex Tillerson had done, the president’s new team kept escalating sanctions.

    Trump was erratic. Never has foreign policy been so confusing to anyone outside (and to some inside) the White House. Trump would impetuously agree with heads of state to major policy changes before the rest of his government could advise him of his options. Since Trump shares his internal monologue with the world on Twitter, these lunges became policies, until he would later reverse them just as fitfully. To take one example: the sequence of tweets that announced Trump’s deal in 2019 with Turkey to pull American support for its Kurdish allies in Syria had real consequences, even though Trump would later reverse himself. As the Turkish military prepared to enter an autonomous Kurdish region of Syria, the Kurdish fighters who had bled to defeat ISIS were forced to seek protection from Russia, Iran, and Bashar al Assad.

    During that crisis, Trump tweeted about one of his favorite themes: “The endless wars must end.” For the first fifteen years of the post-9/11 era, that kind of talk would have been heresy for Republicans. Despite a few outliers inside the party like Ron Paul and Rand Paul, the party of Bush and Reagan supported what it called a “long war,” a multi-generational campaign to build up allies so they could defeat terrorists without American support. Until very recently, Republicans understood that as frustrating as training local police in Afghanistan and counter-terrorism commandos in Iraq can often be, the alternative was far worse, both strategically and morally. The same was true of American deployments during the Cold War. To this day there are American troops in South Korea and Germany, in part because their very presence deters adversaries from acting on their own aggressive or mischievous impulses. But Trump disagreed. And he echoed a growing consensus. “No more endless wars” is the new conventional wisdom.


    III

    The Quincy Institute for Responsible Statecraft was founded in 2019 as a convergence of opposites, with money from George Soros’ Open Society Foundations and the Koch brothers. There was one thing on which the opposites agreed, and that was the end of American primacy, and of its consequent activism, in the world. The new think tank hopes to mold the wide but inchoate opposition to “endless wars” into a coherent national strategy.

    On the surface, the Quincy Institute presents itself in fairly platitudinous terms. “The United States should respect established international laws and norms, discourage irresponsible and destabilizing actions by others, and seek to coexist with competitors,” its website says. “The United States need not seek military supremacy in all places, at all costs, for all time.” That boilerplate sounds like the kind of thing one would hear in the 2000s from what were then known as the netroots: wars of choice are bad, international law is good. But there is an important distinction. The progressives who obsessed over the neoconservatives in the Bush years argued that the ship of state had been hijacked. The Quincy Institute is arguing that the institutions the progressives once sought to protect from those ideological interlopers were themselves in on the heist. The problem is not the distortion of our foreign policy by foreign interests. The problem is the system that created our foreign policy in the first place.

    Consider this passage by Daniel Bessner on Quincy’s website: “While there are national security think tanks that lean right and lean left, almost all of them share a bipartisan commitment to U.S. ‘primacy’ — the notion that world peace (or at least the fulfillment of the “national interest”) depends on the United States asserting preponderant military, political, economic, and cultural power. Think tanks, in other words, have historically served as the handmaidens of empire.” Bessner is echoing an idea from Stephen Walt, the Harvard professor who is also a fellow at the institute. At the end of The Hell of Good Intentions, which appeared in 2018, Walt called for a “fairer fight within the system,” and recommended the establishment of a broader political movement and the creation of new institutions — a think tank? — to challenge what he perceives as a consensus among foreign policy elites in favor of a strategy of liberal hegemony. American primacy in the world he deemed to be bad for America and bad for the world.

    The Quincy Institute hired the perfect president for such a program. A retired Army colonel and military historian who lost his son in the Iraq War, Andrew Bacevich has emerged as a more literate and less sinister version of Smedley Butler. That name is largely forgotten today, but Butler was a prominent figure in the 1930s: a retired Major General who, after his service to the country, declared that “war is a racket” and that his career as a Marine amounted to being a “gangster for capitalism.” Butler later testified that he had been approached by a cabal to lead a military coup against President Roosevelt, and he remains to this day a hero of the anti-war movement. In 2013, in Breach of Trust, Bacevich presented Butler as a kind of dissident: “He commits a kind of treason in the second degree, not by betraying his country but calling into question officially sanctioned truths.” In this respect, Butler is the model for other retired military officers who dare to challenge official lies. Not surprisingly, Breach of Trust reads like the military history that Howard Zinn never wrote. It is a chronicle of atrocities, corruption, and government lies. Like Bacevich’s other writings, it is a masterpiece of tendentiousness.

    More recently, Bacevich has sought to recast the history of America First, the movement that sought to prevent Roosevelt from taking the United States into World War II. He has acknowledged that America was correct to go to war against the Nazis, but still he believes that the America Firsters have gotten a bad rap. Until Donald Trump, the America First movement was seen as a cautionary tale and a third rail. When Pat Buchanan tried to revive the term in the 1980s and 1990s, there was bipartisan outrage. After all, America First was led by Charles Lindbergh, an anti-Semite and an admirer of the Third Reich. Bacevich acknowledges this ugly provenance. And yet he chafes at Roosevelt’s judgment that Lindbergh’s movement was promoting fascism. “Roosevelt painted anti-interventionism as anti-American, and the smear stuck,” Bacevich wrote in 2017 in an essay in Foreign Affairs charmingly called “Saving America First.”

    There is a grain of truth in Bacevich’s account. The America First movement was largely a response to the unprecedented horrors of World War I, in which armies stupidly slaughtered each other and chemical weapons were used on a mass scale. And the war was sparked by miscalculations and secret alliances between empires and smaller states in Europe: it lacked the moral and strategic purpose of defeating the Nazis and the Japanese fascists. It is quite understandable that two decades after World War I ended, many Americans would be reluctant to fight its sequel. But Bacevich goes a bit further. In his Foreign Affairs essay, he instructed that “the America First Movement did not oppose Jews; it opposed wars that its members deemed needless, costly, and counterproductive. That was its purpose, which was an honorable one.” But was it honorable? While it is true that in the 1930s major newspapers did a terrible job in covering the Third Reich’s campaign against Jews and other minorities, those persecutions were hardly a secret. Nazi propaganda in the United States was openly anti-Semitic. The war weariness of post-World War I America does not confer nobility on America First’s cause. In a recent interview Bacevich became testy when asked about that remark. “Come on now,” he said. “I think that the anti-interventionist case was understandable given the outcome of the First World War. They had reason to oppose U.S. intervention. And, again, let me emphasize, their calculation was wrong. It’s good that they lost their argument. I do not wish to be put into a position where I’m going to make myself some kind of a defender for the people who didn’t want to intervene against Nazi Germany.” Good for him.

    That exchange tells us a lot about the Quincy Institute. The think tank’s foreign policy agenda and arguments echo the anti-interventionism of the 1930s. Most of its scholars are more worried about the exaggeration of threats posed by America’s adversaries than about the actual regimes doing the actual threatening. In May, for example, Rachel Esplin Odell, a Quincy fellow, complained that Senator Romney was overstating the threat of China’s military expansion and unfairly blaming the state for the outbreak of the coronavirus: “The great irony of China’s military modernization is that it was in large part a response to America’s own grand strategy of military domination after the Cold War.” In this, of course, it resembled almost everything else in the Quincy worldview: one more response to American power.

    The institute has hired staff who come out of the anti-neoconservative movement of the 2000s. Here we come to a delicate matter. The anti-neoconservatives of that era flirted with and at times embraced an international-relations sort of anti-Semitism: the obsession with Israel and its influence on American statecraft. Like the America Firsters, the anti-neoconservatives worry about the power of a special interest — the Jewish one — dragging the country into another war. A few examples will suffice. In 2018, Eli Clifton, the director of Quincy’s “democratizing foreign policy” program, wrote a post for the blog of Jim Lobe, the editor of the institute’s journal Responsible Statecraft, arguing that three Jewish billionaires — Sheldon Adelson, Bernard Marcus, and Paul Singer — “paved the way” for Trump’s decision to withdraw from Obama’s Iran nuclear deal through their generous political donations. It is certainly fair to report on the influence of money in politics, but given Trump’s well-known contempt for the Iran deal, Clifton’s formulation had an odor of something darker.

    Then there is Trita Parsi, the institute’s Swedish-Iranian vice president, who is best known as the founder of the National Iranian American Council, a group that purports to be a non-partisan advocacy group for Iranian-Americans but has largely focused on softening American policy towards Iran. In 2015, as the Obama administration was rushing to finish the nuclear deal with Iran, his organization took out an ad in the New York Times that asked, “Will Congress side with our president or a foreign leader?” a reference to an upcoming speech before Congress by the Israeli prime minister Benjamin Netanyahu. The National Iranian American Council’s foray into the dual loyalty canard is ironic considering that Parsi himself has been a go-between for journalists and members of Congress who seek access to Mohammad Javad Zarif, Iran’s foreign minister.

    This obsession with Israeli influence in American foreign policy is a long-standing concern for a segment of foreign policy realists, who believe that states get into trouble when the national interest is distorted by domestic politics — an affliction that is particularly acute in democratic societies which respect the rights of citizens to make their arguments to the public and to petition the government and to form lobbies. The most controversial instance of the realists’ scapegoating of the domestic determinants of foreign policy was an essay by Stephen Walt and John J. Mearsheimer (both Quincy fellows) that appeared in the London Review of Books in 2006. It argued that American foreign policy in the Middle East had been essentially captured by groups that seek to advance Israel’s national interest at the expense of America’s. “The thrust of US policy in the region derives almost entirely from domestic politics, and especially the activities of the ‘Israel Lobby,’” they wrote. “Other special-interest groups have managed to skew foreign policy, but no lobby has managed to divert it as far from what the national interest would suggest, while simultaneously convincing Americans that US interests and those of the other country — in this case, Israel — are essentially identical.”

    Walt and Mearsheimer backed away from the most toxic elements of their essay in a subsequent book. The essay sought to explain the Iraq War as an outgrowth of the Israel lobby’s distortion of American foreign policy. The book made a more modest claim about the role it plays in increasing the annual military subsidy to Israel and stoking American bellicosity toward Israel’s rivals like Iran. They also took pains to denounce anti-Semitism and acknowledge how Jewish Americans are particularly sensitive to arguments that present their organized political activity as undermining the national interest. Good for them. But the really important point is that events have discredited their claims. The all-powerful “Israel Lobby” was unable to wield its political influence to win the fight against Obama’s Iran deal. It was not able to stop Obama’s public pressuring of Israel to accept a settlement freeze. Decades earlier, it had not been able to thwart Reagan’s sale of AWACS to the Saudis. Anyone who believes in an omnipotent AIPAC is looking for conspiracies.

    *

    Walt himself, and the Quincy Institute, now have a much more ambitious target: the entire foreign policy establishment. This is the central thesis of The Hell of Good Intentions — that the machinery of American foreign policy is rigged. It will always favor a more activist foreign policy, a more dominant military, and liberal hegemony. All the pundits, generals, diplomats and think tank scholars in Washington are just too chummy with one another. A kind of groupthink sets in. (This never happens at the Quincy Institute.) The terms of foreign policy debate are narrowed. And analysts who seek an American retrenchment from the world are shunted aside.

    To prove this point, Walt spends several pages observing how former government officials land jobs at prestigious think tanks and get invited to speak at fancy dinners. The result is that no one is ever held to account for their mistakes, while the courageous truth-tellers are ignored and isolated. (At times the book reads like a very long letter by a spurned friend asking why he never got an invitation to last month’s retreat at Aspen.)

    To illustrate this desperate problem, Walt turns to the annual conference for the World Affairs Councils of America. He ticks off speakers from past years — Susan Glasser, Vali Nasr, Paula Dobriansky — and observes, “These (and other) speakers are all dedicated internationalists, which is why they were invited.” So whom does Walt want the World Affairs Councils of America to invite? “Experts with a more critical view of U.S. foreign policy, such as Andrew Bacevich, Peter Van Buren, Medea Benjamin, Glenn Greenwald, Jeremy Scahill, Patrick Buchanan, John Mueller, Jesselyn Radack, or anyone remotely like them.”

    There is so much to be said about all of these figures. Patrick Buchanan’s ugly isolationist record is well known. But consider, at the other end of the ideological spectrum, Medea Benjamin. She is the founder of an organization called Code Pink, known mostly for disrupting public meetings; last year it briefly took control of the Venezuelan embassy in Georgetown to prevent representatives of the country’s internationally recognized interim anti-Maduro government from taking over. A group of American anti-imperialists were defending the prerogatives of a dictator who had sold off his country’s resources to China and Russia while his people starved. People like Benjamin are not dissidents. They are stooges.

    In this way the hard-nosed centrist post-Iraq realists converge with the radicals of the left even as they converge with the radicals of the right. This is realism in the style not of Henry Kissinger but of Noam Chomsky. As in Chomsky, the aggression of America’s adversaries is explained away as responses to American power. And as in Chomsky, the explanation often veers into apologies for monsters. Consider “Why the Ukraine Crisis is the West’s Fault,” an essay by Mearsheimer in Foreign Affairs in 2014. There he argues that the expansion of NATO and the European Union, along with American democracy-promotion, created the conditions in which the Kremlin correctly assessed that its strategic interests in Ukraine were threatened. And after street demonstrations in Kiev resulted in the flight of the Ukrainian president, Viktor Yanukovych, to Russia, Putin had little choice but to snatch Crimea from his neighbor. “For Putin,” the realist writes, “the illegal overthrow of Ukraine’s democratically elected and pro-Russian president — which he rightly labeled a ‘coup’ — was the final straw.” Of course the heroic agitation of the Maidan was about as much of a coup as the Paris Commune of 1871. But like Putin, Mearsheimer argues that this “coup” in Ukraine was supported by Washington. His evidence here is that the late Senator John McCain and former assistant secretary of state Victoria Nuland “participated in antigovernment demonstrations,” and that an intercepted phone call broadcast by Russia’s propaganda network RT revealed that Nuland supported Arseniy Yatsenyuk for prime minister and was positive about regime change. “No Russian leader would tolerate a military alliance that was Moscow’s mortal enemy until recently moving into Ukraine,” Mearsheimer writes. “Nor would any Russian leader stand idly by while the West helped install a government there that was determined to integrate Ukraine into the West.”

    What Mearsheimer leaves out of his essay is that Yanukovych campaigned for the presidency of Ukraine on a promise to integrate his country into the European Union, an entirely worthy goal. But he violated his pledge with no warning, and under Russian pressure; and his citizens became enraged. Nor does Mearsheimer tell his readers about the profound corruption discovered after Yanukovych fled. Ukrainians did not rise up because of the imperialist adventures of Victoria Nuland or the National Endowment for Democracy. They rose up because their elected president tried to bamboozle them by promising to join Europe only to join Russia. Mearsheimer also makes no mention of the Budapest Memorandum of 1994, in which Russia, America, and the United Kingdom gave security assurances to Ukraine to protect its territorial integrity in exchange for relinquishing its Soviet-era nuclear weapons. The fact that Putin would so casually violate Russia’s prior commitments should give fair-minded observers reason to fear what else he has planned. But Mearsheimer is not bothered by Putin’s predations. Putin, Mearsheimer writes, knows that “trying to subdue Ukraine would be like swallowing a porcupine. His response to events there has been defensive, not offensive.”

    Mearsheimer’s excuses for Putin and his failure to grasp the meaning of Ukraine’s democratic uprising in 2014 illuminate a weakness in his broader theory of international relations. In Mearsheimer’s telling, the only meaningful distinction between states is the amount of power they wield. States, he writes in his book The Great Delusion, “are like balls on a billiard table, though of varying size.” He goes on to say that “realists maintain that international politics is a dangerous business and that states compete for power because the more power a state has, the more likely it is to survive. Sometimes that competition becomes so intense that war breaks out. The driving force behind this aggression is the structure of the international system, which gives states little choice but to pursue power at each other’s expense.” This is not a novel idea. Thucydides relates what the Athenians told the Melians: “the strong do what they can and the weak suffer what they must.” For Mearsheimer, it does not matter that twenty years before its invasions of Crimea and Ukraine, Russia had pledged to respect and protect Ukraine’s territorial integrity. Russia was strong and Ukraine was weak. Russia’s perception of the threat of an enlarged European Union mattered, whereas the democratic choice of Ukrainians did not. Realists are not moved by democratic aspirations, which are usually domestic annoyances to high strategy. Nor are they bothered by the amorality of their analysis of history.

    As for American behavior around the world, the Thucydidean framework describes it, but — unlike Russian behavior — does not extenuate it. For the Quincy intellectuals, there is no significant difference between America and other empires. America is not exceptional. It is only a larger billiard ball. It stands, and has stood, for nothing more than its own interests. But this equivalence is nonsense. Important distinctions must be made. When France booted NATO’s headquarters out of Paris in the middle of the Cold War, Lyndon Johnson did not order an army division to march on Paris. Trump’s occasional outbursts aside, America does not ask countries that host military bases to pay tribute. After toppling Saddam Hussein, America did not seize Iraq’s oil. Compare this to the Soviet Union’s response to a dockworkers’ strike in Poland, or for that matter to the Dutch East India Company. These realists do not acknowledge the value of preserving the system of alliances and world institutions that constitute the American-led world order, or the fact that they have often enriched and secured America’s allies, and at times even its adversaries. In this respect they are not only anti-interventionists, they are also isolationists, in that they believe that the United States, like all other states, naturally and in its own best interest stands alone.

    All of this is emphatically not to say that the American superpower has always acted with prudence, morality, and benevolence. There have been crimes, mistakes, and failures. There have also been national reckonings with those crimes, mistakes, and failures. No nation state has ever not abused its power. But behind these reckonings lies a larger historical question. Has America largely used its power for good? A great deal depends on the answer to that question. And the answers must be given not only by Americans but also by peoples around the world with whom we have (or have not) engaged. The valiant people on the streets of Tehran in 2009 who risked their lives to protest theocratic fascist rule shouted Obama’s name — were they wrong? About Obama they were certainly wrong: while they were imploring him for help he was brooding about American guilt toward Mossadegh. But were they wrong about America? And the Ukrainians in the Maidan, and the Egyptians in Tahrir Square, and the Kurds, and the women of Afghanistan, and the masses in Hong Kong, and the Guaido movement in Venezuela, and the Uighurs in their lagers — why have they all sought American assistance and intervention? Perhaps it is because they know that the American republic was founded on a sincere belief that the freedom enjoyed by its citizens is owed to all men and women. Perhaps it is because they have heard that the United States created, and stood at the helm of, a world order that has brought prosperity to its allies and its rivals, and even sometimes came to the rescue of the oppressed and the helpless. The case can certainly be made that America in its interventions damaged the world — the anti-interventionists make it all the time — but the contrary case is the stronger one. And contrary to the anti-interventionists, there are many ways to use American power wisely and decisively: the choice is not between quietism and shock and awe. No, the people around the world who look to us are not deluded about our history. They are deluded only about our present.

    American exceptionalism was not hubris. It was a statement of values and a willingness to take on historical responsibility. Nor was it in contradiction to our interests, though there have been circumstances when we acted out of moral considerations alone. It goes against the mood of the day to say so, but we must recover the grand tradition of our modern foreign policy. It is not remotely obsolete. Reflecting on the pandemic last spring, Ben Rhodes declared in The Atlantic, very much in the spirit of his boss, that the crisis created an opportunity to reorient America’s grand strategy: “This is not simply a matter of winding down the remaining 9/11 wars — we need a transformation of what has been our whole way of looking at the world since 9/11.” Rhodes said that he still wants America to remain a superpower. He proposed new national projects to fight right-wing nationalism, climate change, and future pandemics — all excellent objectives. He also questioned why America’s military budget is several times larger than its budget for pandemic preparedness or international aid. But what if the world has not entirely changed, pandemic and all? What if the world that awaits us will be characterized by great power rivalry and persistent atrocities? What if corona does not retire Westphalia?

    If you seek to know what the world would look like in the absence of American primacy, look at the world now. Hal Brands and Charles Edel make this point well in The Lessons of Tragedy: “It is alluring to think that progress can be self-sustaining, and that liberal principles can triumph even if liberal actors are no longer preeminent. To do so, however, is to fall prey to the same ahistorical mindset that so predictably precedes the fall.” And so the first task of those seeking to counter American unexceptionalism is to resist the urge to believe that the past is entirely over, the urge to reject wholesale the old ends and the old means and therefore to scale back America’s commitments to allies and to decrease the military budget. Even when we are isolationist we are not isolated. There are threats and there are evils, and whatever should be done about them, it cannot be little or nothing. We need to become strategically serious.

    It was as recently as 2014 that Obama dismissed ISIS as a junior varsity team, and even he was forced to reconsider his narrative that the killing of Osama bin Laden was the epitaph for the 9/11 wars, when a more virulent strain of Islamic fascism emerged in the Levant. In the summer of 2014, he sent special operations forces back to Iraq and began the air power campaign against ISIS that continued through 2019. Would ISIS have come into being if America had kept a small force inside Iraq after 2011 and continued to work quietly with Iraq’s government to temper its sectarian instincts against the Sunni minority? It is impossible to know. What is known, though, is that in 2011 American officers and diplomats on the ground who had worked with Iraq’s security forces warned that without some American presence in the country, there was a risk that the army would collapse; and it did. This same cautionary lesson also applies to Afghanistan. No serious person should trust the Taliban’s promise that it will fight against al Qaeda if it were to take back power. And while it is true that the Afghan government is corrupt and often hapless, foreign policy consists in weighing bad and worse options. The worse option for Afghanistan is a withdrawal that leaves al Qaeda’s longstanding ally a fighting chance to consolidate power and turn the country again into a safe haven of international terrorism and again oppress its people. This is not idle speculation.

    The continuing battle against terrorism, which is a continuing threat, must not blind us, as it did George W. Bush, to the new era of great power rivalry. Americans must surrender the pleasant delusion that China and Russia will mature into responsible global stakeholders, or that outreach to Iran will temper its regional ambitions. In this respect Fukuyama was wrong and Huntington and Wieseltier were right. The pandemic has shown how China hollows out the institutions of the world order that so many had hoped would constrain and tame them. After prior pandemics, the United States invested more in its partnership with China and the World Health Organization, reasoning that as China industrialized it needed assistance to track new diseases before they were unleashed on the rest of the world. That system failed in late 2019 and 2020 not because China lacked the public health infrastructure to surveil the coronavirus. It failed because China is a corrupt authoritarian state that lied about the threat and punished the journalists, doctors, and nurses who tried to warn the world about it. This suppression of the truth cost the rest of the world precious time to prepare for what was coming. It turns out that states are not just billiard balls of varying sizes. If China were an open society, it would not have been able to conceal the early warnings. The nature of its regime is an important reason why COVID-19 was able to mutate into a global pandemic.

    As former Soviet dissidents or Serbian student activists can attest, tyrannies appear invincible right up to the moment they topple. This does not mean that America should always use its power to speed this process along. Nor does this mean that America should lead more regime change wars like Iraq. The best outcome for countries such as Iran, China, and Russia is for their own citizens to reclaim their historical agency and take back their societies and their governments from their oppressors. But when moments arise that reveal fissures and weaknesses in the tyrant’s regime, when there are indigenous democratic forces that are gaining ground, America must encourage and assist them. This is a matter of both strategy — the friendship of peoples is always better than the friendship of regimes — and morality. When opportunities for democratic change emerge in the world, the wiser strategy is to support the transition and not save the dictator. Again, this is not a license to invade countries or foment military coups. It is rather a recognition that any arrangements America makes with despots will at best be temporary. America’s true friends are the states that share its values. But the triumph of the open society is not at all preordained. It requires historical action, a rejection of narcissistic passivity, in an enduring struggle. This historical action can take many forms, and it is not imperialism. It is the core of the republic’s historical identity. It is responsible statecraft.

    Ancient Family Lexicon, or Words and Loneliness

    “Whoever knows the nature of the name… knows the nature of the thing itself,” Plato observed in his Cratylus. To know is a complex verb, difficult but rich. According to the dictionary, it means “to have news of a thing,” “to know that it exists or what it is.” In classical languages, the concept of knowing was linked with being born. Thus by coming into the world we give others “news” of us: their recognition of us is part of our birth.

    Knowing the roots of the words at the basis of human relationships permits us to revive a world in which individuals existed as men and women or boys and girls with no middle ground. I will explain what that means. The ancestors of these appellations (woman, girl, man, boy) denoted a particular way of being that subsequent cultures have lost. As the meaning of the words changed, the beings themselves changed. Back then, before these semantic developments, it was understood that the condition of boyhood was synonymous with immaturity, and the divide between childhood and adulthood had to be put to the test of life. Moreover, youth and old age were not personal categories but attitudes of soul and mind. What follows is a sort of Indo-European family lexicon, and a portrait of a lost world.

    Mother
    The word comes from the Indo-European mater, formed by the characteristically childish elementary root ma- and the suffix of kinship -ter. In Greek it is mētēr, in Latin mater, in Sanskrit mātar, in Armenian mayr, in Russian mat, in German Mutter, in English mother, in French mère, in Italian, Spanish and Portuguese madre, in Irish máthair, in Bosnian majka.

    Father
    The word comes from the Indo-European pater, formed by the elementary root pa- and the suffix of kinship -ter. In Greek it is patēr, in Latin pater, in Sanskrit pitar, in ancient Persian pita, in Spanish, Italian and Portuguese padre, in French père, in German Vater, in English father.

    These terms are so ancient, so primordial that they have survived the history of languages and the geography of peoples. Since they were first uttered, these words have consistently been among the first spoken by human beings. They are solid words, like a brick house, like a mountain. It is our fathers and our mothers who teach us first to name things. It is natural that a child should first articulate ma- or pa-. There is no child who does not seek to be loved and held, who is not in need of care and protection from a mother and father. And we never forget these words; we hold them inside ourselves all the way to the end. Studies of patients with Alzheimer’s and senile dementia who have spoken a second language throughout their lives, a language different from that of their country of origin, show that they refer to dear ones using their original language. Native language. Mother-tongue.

    Human
    The classical etymology of the word man — meaning a human being — comes from the Latin homo, which dates back to the Indo-European root of humus, “earth,” a result of a primordial juxtaposition, perhaps even opposition, between mortal creatures and the gods of heaven. In the Bible, the Creator infuses earth with soul, creating the human compound. In French the term became homme, in Spanish hombre, a root that disappears in the Germanic languages, where we have man in English and Mann in German. The usage may now seem archaic, but it contains a universal idea.

    The Greek ànthrōpos has a disputed etymology. According to some, it is linked to the words anō, “up,” athréo, “look,” and òps, “eye,” a very fine combination of roots that indicates the puniness of men faced with the immensity of the divine and bound to raise their eyes to heaven from the ground. According to others, it is a descendant of the term anèr, “male,” “husband,” corresponding to the Latin vir. In both cases, the condition of “adult man” is colored by the concepts of strength, energy, ardor — of overcoming childhood through tests of courage, which reverberate in the Latin and Greek words vis and andreìa.

    Thus we have the universal concept of a human being who is small, humble, tied to the earth on which she has her feet firmly planted until the day of her death but not entirely material, puny but bent towards heaven — and also strong, therefore heroic, because she has succeeded in enlarging herself. In order to transition from girlhood to womanhood and from boyhood to manhood, one must pass a test. Through this test — or tests: the trials of a human life — girls and boys prove the measures of their strength, tenacity, and courage and in so doing become adults. Once the test is past, their nature itself is forever altered as their name is changed — no middle ground from girl to woman, from boy to man.

    Son, Daughter
    “Son” is connected with the Latin filius, “suckling,” linked to the root fe-, “sucking,” an affective and infantile term typical of the Indo-European dhe-, “to suckle,” which is found today in some Germanic languages as in the English word daughter or in the Bosnian one dijete, “child.”

    The further we move away from the linguistic essence, from the primeval universality of the Indo-European roots, the more complicated things become, and the more the words grow apart and differ from Romance languages to Germanic ones. The notion of “boy” or “girl” as adolescents still unprepared for adult life does not surface until the fourteenth century. This concept is a foreign loan that dates back to the late Middle Ages and derives from the Arabic raqqās, meaning “gallop,” or “courier,” or more specifically “boy who carries letters,” a term of Maghrebian origin probably spread from Sicily through port exchanges in the Mediterranean, which was so rich in Arabisms. (We may note that this etymology has been made irrelevant by the conditions of modern work, in which many adults are treated as boys who carry letters, that is, are employed in infantilizing jobs that do not make full use of their adult skills.)

    Young, Old
    “Young” is a very pure and powerful word, and an imprecise one, not tied to a registry concept, in the same way that “old” is not. It clearly comes from the Indo-European root yeun-, from which the Sanskrit yuvā, the Avestan yavan-, the French jeune, the English young, the Latin iuvenis, the Spanish joven, the Portuguese jovem, the Romanian june, the Russian junyj, the Lithuanian jaunas, the German jung. “Young” is the calf or foal tenaciously striving to balance on thin and trembling legs, trying and trying again, falling ruinously to the ground until it stands up, bleeding and covered with straw — but ready to go, to walk, to wander. Youth is strength, a drive, an arrow already fired.

    At the opposite extreme of the life cycle is the old, the elderly, which means worn out, weary, weak, too tired to move, to go further — like a car worn down by too many roads, a car that suddenly stops, the engine melted. Elderly is the worn sole of a shoe that has walked too far. It is the hands of the elderly, like cobwebs that have caught too much wind in life. This idea comes from the Latin vetulus, a diminutive of vetus, which means “used,” “worn out,” “old.” In French it is called vieil, in Spanish viejo, in Portuguese velho, in Romanian vechi. Old age is an attitude and not an age, it means stopping, even surrender. The string of the bow collapsed, the quiver empty. 

    Love
    Love is a pledge, as the etymology shows. The notion of betrothal, the ideas of bride and bridegroom, derive from the Latin sponsum and sponsam, from the past participle of the verb spondeo, which means “to promise,” corresponding to the Greek spèndō. In French it is called époux and épouse, in Spanish and Portuguese esposo, esposa. The original meaning of those words lay in the idea of the indissolubility of the promise of love. Once made, it cannot be revoked. The trust and the faith expressed in the promise were so sacred that they were celebrated by the couple with a libation to the gods.

    In the Romance languages, however, the meaning of that promise has slipped into the future, to the rite that has yet to happen, in the word fiancé, which derives from fides in Latin, which means “faith.” It is this faith in the promise of love, in its futurity, that gives strength to lovers such as Renzo and Lucia, made immortal by Alessandro Manzoni in I promessi sposi, who did everything possible to fulfill that promise of love contained, primordially, in the definition of “betrothed.”

    Mom

    As I mentioned, the word comes from the Indo-European root ma-, a universal utterance of affection, which has its basis in the elementary sequence ma-ma. This childish word has identical counterparts in all Indo-European languages, a sound of affection that extends beyond borders in the welter of different languages around the world.

    Memory is often full of italicized passages, experiences that remain fresh despite the passage of time, but sometimes deletions overshadow the italics. For a long time I had forgotten the sound of the word mom. I could not say it anymore because I had not said it out loud for over fifteen years. I had even stopped thinking it.

    Stabat mater, “the mother stood” next to the son, reads a thirteenth-century religious poem attributed to Jacopone da Todi, which later became universal in the Christian liturgy to indicate the presence of the sad mother next to the suffering son. Once, beside me, the daughter, there stood my mother. We celebrated our birthday on the same day, she and I: born premature, I was, as long as we both lived, her birthday present. When I was a child we always had a double party for the “women,” as my father called us. Since she died, every birthday of mine has been cut in half. And since then I have never been sure of exactly how old I am.

    Every January I get closer and closer to the age my mother was when she died. Meanwhile, like the tortoise in the paradox of Zeno, I move further and further away from that lost, skinny, lonely girl who was between the third and the fourth year of high school when her mother died of a cancer as swift as a summer: she fell ill in June and passed in September, on the first day of school. For years I never told anyone of my early loss; it was one of my surgical choices. The silence gave me relief from the empty words of the others: poor girl, so young. I discovered a new space inside me, a sorrow that I did not know before and could now explore, unseen, unheard. I was an orphan.

    It seems impossible to admit it now, like all the admissions of the “imperfect present perfect” that we are, but there was a long period in which I practically stopped talking. I am fine was the only sentence in my stunted girlish vocabulary. Not until I was seventeen did I begin to understand the value that the ancients attributed to words — and I began to respect them in silence with an uncompromising loyalty, learning to say little and to keep almost everything quiet.

    After high school I moved to Milan, enrolled at the university, and started a new life, which I call my second one. For years I never said anything to the people I met, to my friends, to my boyfriends, about my mother’s death. As a daughter I was mute. Anyway, almost nobody ever asked me. My silence was unchallenged. And then, with the publication of my first book, in which I shared my passion for ancient Greek, my third life began — my linguistic life, the era of saying — the advent of the words that I use to make everything real, especially death.

    I remember the exact moment that my verbal mission, my reckoning with mortality through language, started. I was presenting my book to the students in a high school in Ostuni when, at question time, a sixteen-year-old boy asked me, with the frankness of those who believe that I must know the most intimate things in the world because I wrote a book on Greek grammar, “Why in Greek is a human being also called brotòs, or destined to die?” “Because death is part of life,” I said, almost without thinking about it. I was disconcerted by the rapidity of my response: I already knew the answer, even if I had not read it in any book or treatise. I reminded myself that I had no need of a book to know this. She had died; I had lived it. And so on that day I reclaimed the first word that I uttered in my life, like so many of the women and men who have come and will come into the world and have gone and will go out of it. They gave it back to me, those high school boys. I started to say mom again.

    My mother, mine, who went away a long time ago and whom I resemble so much, the one who taught me my first words.

    The ancients believed that there was a perfect alignment between the signifier and the signified, between word and meaning, between name and reality, owing to the power of naming, to the descriptive force of a word to denote a thing.

    The Greek adjective etymos means “true,” “real,” from which the word “etymology” was later derived. It was coined by the Stoic philosophers to define the practice of knowing the world through the origin of the words that we use — the words that make us what we are. I fell in love with the strange study of etymology in high school, and never gave up trying to understand the world according to it, to squeeze what surrounds me out of the language that surrounds me — notwithstanding my friends’ teasing that I cannot say anything without a reference to Greek or Latin.

    Many centuries later, taking up a thought of Justinian, Dante remarked in the Vita Nuova that nomina sunt consequentia rerum, “names are consequences of things” — that is, words follow things, they are upon them, they adhere to them, they reveal reality. Reality’s debt to language is very great. Words are the gates to what is. And to what is not: the opposite is also true, that if something has no name, or is not articulated in thought or speech, then it is not there. Silence about a thing does not mean that it is not real, but without a name and without words it is unrecognized and so, in a sense, not here, not present, now and now and now again, among us.

    Much that cannot now be said was once certainly said, about things that were once here but are gone, about a reality that has been lost. Dust.

    Two years ago I read an article in The New York Times that left me with such uneasiness that I was prompted to look more deeply inside myself and the people around me. The journalist declared that these first years of the new millennium are the “era of anxiety.” “The United States of Xanax,” he called the present era in his country’s history, after the most famous pill among the anxiolytics, whose use in the population, children included, reaches double-digit percentages, and whose cost at the local pharmacy is slightly higher than the price of an ice cream and slightly less than a lunch at McDonald’s. Depression — that disease of the soul that until the twenties of the last century was considered as incurable, as inconsolable, as its name, melancholia — is today no longer fashionable, said the Times. It has been usurped. The years of bewilderment in the face of the abyss sung about by Nirvana — and which led to the suicide of Kurt Cobain — are over. Instead we suffer from a different kind of disease, an anxiety that makes us disperse ourselves busily, and scatter ourselves in the name of efficiency, so as not to waste time but instead to manage it frantically. And as we strive not to lose time, we lose ourselves.

    The author of the article cited the case of Sarah, a 37-year-old woman from Brooklyn working as a social media consultant who, after having informed a friend in Oregon that she was going to visit her over the weekend, was seized by worry and fear when her friend did not reply immediately to her email. A common experience, perhaps: how many times do we fear that we have hurt a loved one without knowing exactly how? Is such worry a sincere concern about the other, or is it a narcissistic, self-focused guilt? How often are we out of breath as if we were running when in fact we are standing still?

    But Sarah took her worry to an uncommon extreme. Waiting for the answer that was slow to arrive and that presaged her worst fear, she turned to Twitter and her 16,000 followers, tweeting, “I don’t hear from my friend for a day — my thought, they don’t want to be my friend anymore,” adding the hashtag “#ThisIsWhatAnxietyFeelsLike.” Within a few hours, thousands of people all over the world followed her example, tweeting what it meant for them to live in a state of perpetual anxiety, prisoners of a magma of indistinct, inarticulate emotions. At the end of the day, Sarah received a response from her friend: she had simply been away from her house and had not read the email. She would be more than happy to meet her, she had been hoping to see her for so long. A few days later Sarah remarked without embarrassment to journalists who were intrigued by the viral phenomenon: “If you are a human being who lives in 2017 and you are not anxious, there is something wrong with you.”

    Is that really so? Must we surrender to this plague of anxiety? Are we supposed to forget what we know — that friendship is measured in presence and memory, and not in the rate of digital response or the speed of reply? Are we required to infect our most significant relationships with the spirit of highly efficient customer service? Is it a personal affront if a loved one or a friend allows herself half a day to live her life before attending to us? Have we so lost the art of patience that we must be constantly reassured that we have not been abandoned? Are we living out of time, out of our time, if we do not agree to be prisoners of anxiety? Must we conform and surrender and live incompletely, making others around us similarly incomplete?

    I think not. It is perverse to regard anxiety as an integral and indispensable part of our life and our contemporaneity. It is difficult to admit, especially when we are unhappy, but we come into the world to try to be happy. And to try to make others happy. 

    Sarah may have suffered from an anxiety disorder, a serious illness that required appropriate treatment, or perhaps, as she later admitted, she simply felt guilty because, too busy with her work, she had not communicated with her friend for months and was now embarrassed about her absence, about suddenly making herself heard. When we abdicate the faculty of speech, we can only reconstruct the thoughts and feelings of others by means of clues. Often we interpret them incorrectly. Silence confuses us.

    I was once like that. There was a time when anyone could read the words senza parole — “speechlessness” — on my wrist. It was the expression that I got tattooed on my skin when I lost my mother: I can’t say a word, I don’t want to speak. It was my first tattoo, an indelible warning whenever someone held out his hand to help me. I pushed away from everyone after my mother died, especially from myself. I even dyed my hair black so as not to see in the mirror a reflection which resembled the mother I no longer had.

    But “speechlessness” is now the word I hate most, because I understood later, much later, that the words you need to say are always available to you, and you have to make the effort to find them. Just as Plato said, words have the power to create, to form reality — real words, which have equally real effects on our present. As Sarah’s sad story reveals, the absence of words is the absence of reality. Without words there is no life, only anxiety, only malaise.

    I covered up that tattoo in Sarajevo, a few days before my first book was published, because I had finally found my words. When people smile at the black ink stain that wraps my right wrist like a bracelet, I smile too, because only I know what is underneath, the error that was stamped on my flesh that I have now stamped out. How much life was born after the muzzle was destroyed!

    Whatever production of ourselves we stage, there will always be a little detail — a precarious gesture, a forced laugh, an uncertainty, an imbalance — that exposes the inconsistency between what we are doing and what we really want to do.

    We are not films, there is no post-production in life, and special effects lose their luster quickly. We are perpetually a first version, opera prima, drafts and sketches of the tragedy or comedy of ourselves, as in that moment at sunset in Syracuse or Taormina when the actors entered the scene to begin the show.

    Today we all live entangled in a bizarre situation. We have the most immense repository of media in human history and we no longer know what or how or with whom to communicate. I am convinced that we have never before felt so alone. The reason is not that we are silent. Quite the contrary. We talk and talk and talk, until talking exhausts us. But the perpetual cacophony allows us to ignore that we communicate little of substance. We tend to say the bare minimum, to speak quickly and efficiently, to abbreviate, to signal, to hide, to be always easy and never complex. We seem, simultaneously, afraid of being misunderstood and afraid of being understood. The human act of saying has become synthetic, a constant pitch, a transactional practice borrowed from business in which we must persuade our interlocutors in just a few minutes to commit everything they have. Our speech is an advertisement, a performance. Joy is a performance, pain is a performance — and a speedy one. If we do not translate our sentiments into slogans and clichés, graphics and “visualizations,” if we do not express ourselves in the equivalents of summaries, slides, and abstracts, if our presentation of our feelings or our ideas exceeds a commonly accepted time limit (reading time: three minutes), then we fear that nobody will have the patience to listen to us.

    We have swapped the infinity of our thoughts for the stupid finitude of 280 characters. We send notices of our ideas and notifications of our feelings, rather like smoke signals. Is there anything more like a smoke signal than Instagram stories, which are similarly designed to disappear? 

    Brevity is now the very condition of our communication. We behave like vulgar epigrammatists, electronically deforming the ancient art of Callimachus and Catullus. We condense what we have to say into each of the many chats on which we try desperately to make ourselves heard by emoticons and phrases and acronyms shot like rubber bullets that bounce here and there as in an amusement park. We refuse subordinate clauses, the complicated verbal arrangement — appropriate for the complexity of actual ideas and feelings — known as hypotaxis, fleeing from going hypò, or “below” the surface, and preferring instead to remain parà, or “next,” on the edge of the parataxis, the list of the things and people we love.

    We refuse to know each other and in the meantime we all talk like oracles.

    It is a fragile paradox, which should be acknowledged without irony (that hollow armor) and which demands love rather than bitter laughter: the less we say about ourselves, the more we reveal about ourselves. Only we do it in a skewed, precarious way. And we do it deceptively, even treasonously.

    Our brevity is only a postponement of what sooner or later will be expressed, but in a twisted way. Surely others have observed the tiny breakdowns, the personal explosions that plague any person forced to live in a perpetual state of incompleteness. Have you never seen someone who, finding herself without words, ends up screaming and madly gesticulating? Everywhere we end up sabotaging the image of perfection that we impose on ourselves with small, miserable, inhuman actions. An unjustified fit of anger on a train: a wrong seat, a suitcase that doesn’t fit, a crying baby, a dog, an insult at the traffic light, and suddenly we are hurling unrepeatable shrieks out the window before running away like thieves. Or perhaps you have observed another symptom of this unhealthy condition: anxious indecision — an unnerving slowness to order at the restaurant, you choose, I don’t know, I’m not sure, maybe yes, of course not, in front of a bewildered waiter, while we collapse as if the course of our whole life depended on the choice of a pizza. 

    Once upon a time, revolutions were unleashed to obtain freedom from a master. Today the word “revolution” is thrown around in political discourse, but in our inner lives it makes us so afraid that we prefer to oppress ourselves, to renounce the treasures of language and the strengths they confer. And so silence has become our master, imprisoning us in loneliness. A noisy silence, a busy loneliness. The result is a generalized anxiety that, when it explodes, because it always explodes sooner or later, makes us ashamed of ourselves.

    When we give our worst to innocent strangers, we would like immediately to vanish, to erase the honest image of ourselves unfiltered. We tell ourselves that it was only something we did there — on the subway at rush hour when an old lady crowded us with her shopping bags, or in the line at the post office, annoyed because we lost our place while we were fiddling with the phone or with a post on Facebook in which we commented on something about which we do not care and about which we have nothing to say because there is nothing to say about it. That is not who we really are. It was a mistake. It was not representative — or so we tell ourselves.

    If we are ashamed, if we want to disappear after these common eruptions, it is for all that we have not done, for all that we have not said, to these strangers and to others we have encountered before. By remaining silent, or by speaking only efficiently, before the spectacle of life, without calling anything or anyone by name, without relishing descriptions, not only do we not know things, as Plato warned, but we do not even know ourselves.

    Who are we, thanks to our words?

    Futilitarianism or To the York Street Station

    Wednesday, April 8th…a date etched in black for socialists and progressives, marking the end of a beautiful fantasy. It was on that doleful day that Senator Bernie Sanders — acknowledging the inevitable, having depleted his pocketful of dreams — announced the suspension of his presidential campaign. It was the sagging anticlimax to an electoral saga that came in like a lion and went out with a wheeze. For months the pieces had been falling into place for Sanders to secure the Democratic nomination, only to fall apart in rapid slow motion on successive Super Tuesdays, a reversal of fortune that left political savants even more dumbstruck than usual. Taking to social media, some of Sanders’ most fervent and stalwart supporters in journalism, punditry, and podcasting responded to the news of his withdrawal with the stoical grace we’ve come to expect from these scarlet ninja. Shuja Haider, a high-profile leftist polemicist who’s appeared in the Guardian, The Believer, and the New York Times, tweeted: “Well the democratic party just officially lost the support and participation of an entire generation. Congratulations assholes.” (On Twitter, commas and capital letters are considered optional, even a trifle fussy.) Will Menaker, a fur-bearing alpha member of the ever-popular Chapo Trap House podcast (the audio clubhouse of the self-proclaimed “dirtbag left”), declared that with Bernie out of the race, Joe Biden “has his work cut out for him when it comes to winning the votes of a restive Left that distrusts and dislikes him. It’s not impossible if he starts now by sucking my dick.” Others were equally pithy.

    It fell upon Jacobin, the neo-Marxist quarterly and church of the one true faith, to lend a touch of class to the valedictory outpourings. Political admiration mingled with personal affection as it paid homage to the man who had taken them so far, but not far enough. On its website (the print edition is published quarterly) it uncorked a choral suite of tributes, elegies, and inspirational messages urging supporters to keep their chins up, their eyes on the horizon, their gunpowder dry, a song in their hearts: “Bernie Supporters, Don’t Give Up,” “We Lost the Battle, but We’ll Win the War,” “Bernie Lost. But His Legacy Will Only Grow.” In this spirit, the magazine’s editor and founder, Bhaskar Sunkara, author of The Socialist Manifesto: The Case for Radical Politics in an Era of Extreme Inequality, conducted a postmortem requiem on YouTube with his Jacobin comrades processing their grief and commiserating over their disappointment. Near the end of the ceremony, Sunkara declared that Bernie’s legacy would be as a moral hero akin to Martin Luther King, Mother Jones, and Eugene V. Debs. Which offered a measure of bittersweet consolation, but was not what Sunkara had originally, thirstily desired. “I wanted him to be fucking Lenin. I wanted him to take power and institute change.” But the Bernie train never reached the Finland Station, leaving the Jacobins cooling their heels on the platform and craning their necks in vain.

    Politically and emotionally they had banked everything on him. “Socialism is the name of our desire,” Irving Howe and Lewis Coser had famously written, and for long fallow seasons that desire lay slumbrous on the lips until awakened by Bernie Sanders, the son of Jewish immigrants from Poland, the former mayor of Burlington, Vermont, the junior senator from that state, and lifelong champion of the underdog. Where so many longtime Washington figures had been led astray by sinecures, Aspen conferences, and unlimited canapés, Sanders had been fighting the good fight for decades without being co-opted by Georgetown insiders and neoliberal think tanks, like a protest singer who had never gone electric. He might not be a profound thinker or a sonorously eloquent orator (on a tired day he can sound like a hoarse seagull), and his legislative achievement may be a bit scanty, but his tireless ability to keep pounding the same nails appealed to youthful activists who had come to distrust or even detest the lofty cadences of Barack Obama now that he was gone from office and appeared to halo into Oprah-hood. Eight years of beguilement and what had it materially gotten them? grumbled millennials slumped under student debt and toiling in unpaid internships. What Bernie lacked in movie-poster charisma could be furnished by Jacobin, which emblazoned him as a lion in winter.

    So confident was Jacobin that the next great moment in history was within its grasp that in the winter of 2019 it devoted a special issue to the presidency of Bernie Sanders, whose cover, adorned with an oval portrait of Sanders gazing skyward, proclaimed: “I, President of the United States and How I Ended Poverty: A True Story of The Future.” Subheads emphasized that this was not just an issue of a magazine, a mere collation of ink and paper; it was the beginning of a crusade — a twenty-year plan to remake America. Avengers, assemble! At the public launch of the “I, President” issue, Sunkara rhetorically asked, “Is there a point in spending all day trying to explain, like, the Marxist theory of exploitation to some 18-year-old? Yes! Because that kid might be the next Bernie Sanders.”

    Alas, Jacobin made the mistake of counting its red berets before they were hatched, and now the issue is fated to become a collector’s item, a poignant keepsake of what might have been. Had Sanders remained in the race and won the presidency, Jacobin would have been as credited, identified, and intimately associated with the country’s first socialist administration as William F. Buckley, Jr.’s National Review was with Ronald Reagan’s. Jacobin could have functioned as its ad hoc brain trust, or at least its nagging conscience. From that carousel of possibilities the magazine instead finds itself reckoning with the divorce of its socialist platform from its standard-bearer, facing the prospect of being just another journal of opinion jousting for attention. No longer ramped up as a Bernie launch vehicle, Jacobin must tend to the churning ardor for grand-scale structural change and keep its large flock of followers from straying off into the bushes, which is not easy to do after any loss, no matter how noble. “In America, politics, like everything else, tends to be all or nothing,” Irving Howe observed in Socialism and America. And after working so hard on Bernie’s behalf, it’s hard to walk away with bupkis.

    Jacobin possesses a strong set of jaws, however. It will not be letting go of its hold in the marketplace of ideas anytime soon. For better or ill, it will continue to set the tone and tempo on the left even in the absence of its sainted gran’pop. Since initiating publication in 2010, Jacobin has established itself as an entrepreneurial success, a publishing sensation, and an ideological mothership. It has built up its own storehouse of intellectual capital, an identifiable brand. Taking its name and sabre’d bravado from the group founded by Maximilien Robespierre that conducted the French Revolution’s Reign of Terror (an early issue featured an IKEA-like guillotine on the cover, presumably for those fancying to stage their own backyard beheadings — “assembly required,” the caption read), Jacobin located a large slumbering discontent in the post-Occupy Wall Street/Great Recession stagnancy among the educated underemployed and gave it a drumbeat rhythm and direction.

    From the outset the magazine exuded undefeatable confidence, the impression that history with a capital H was at its back. Its confidence in itself proved not misplaced. Where even before the coronavirus most print magazines were on IV drips, barely sustainable and in the throes of a personality crisis, Jacobin’s circulation has grown to 40,000 plus (more than three times that of Partisan Review in its imperious prime); it has sired and inspired a rebirth of socialist polemic (Why Women Have Better Sex Under Socialism, The ABCs of Socialism, Why You Should Be a Socialist, and the forthcoming In Defense of Looting), and helped recruit a young army of activists to bring throbbing life to Democratic Socialists of America, whose membership rolls as of late 2019 topped 56,000, with local chapters popping up like fever blisters. 

    The editorial innovation of Sunkara’s Jacobin was that it tapped into animal spirits to promote its indictments and remedies, animal spirits normally being the province of sports fans, day traders, and bachelorette parties but not of redistributionists, egalitarians, and social upheavers. Even its subscription form is cheeky: “The more years you select, the better we can construct our master plan to seize state power.” Although the ground game of socialism was traditionally understood as a conscientious slog — meetings upon meetings, caucusing until the cows come home, microscopic hair-splitting of doctrinal points — Jacobin lit up the scoreboard with rhetoric and visuals that evoked the heroic romanticism of revolution, history aflush with a red-rose ardor. The articles can be dense and hoarse with exhortations (“we must build…,” “we must insist…” we must, we must), the writing unspiced by wit, irony, and allusion (anything that smacks of mandarin refinement), and the infographics more finicky than instructive, but the overall package has a jack-in-the-box boing!, a kinetic aesthetic that can be credited to its creative director, Remeike Forbes. Not since the radical Ramparts of the 1960s, designed by Dugald Stermer, has any leftist magazine captured lightning in a bottle with such flair.

    Effervescence is what sets Jacobin apart from senior enterprises on the left such as The Nation, Dissent, New Left Review, and that perennial underdog Monthly Review, its closest cousin being Teen Vogue, Condé Nast’s revolutionary student council fan mag — the Tiger Beat of glossy wokeness. When not extolling celebrity styling (“Kylie Jenner’s New Rainbow Manicure Is Perfect for Spring”), Teen Vogue posts junior Jacobin tutorials on Rosa Luxemburg and Karl Marx, whose “writings have inspired social movements in Soviet Russia, China, Cuba, Argentina, Ghana, Burkina Faso, and more…” (most of those movements didn’t pan out so well, but they left no impact on Kylie’s manicure).

    Jacobin recognized that hedonics are vital for the morale and engagement of the troops, who can’t be expected to keep chipping away forever at the fundament of the late-capitalist, post-industrial, Eye of Sauron hegemon. No longer would socialists be associated with aging lefties in leaky basements cranking the mimeograph machine and handing out leaflets on the Upper West Side — socialism now had a hip new home in Brooklyn where the hormones were hopping and bopping pre-corona. “‘Everybody looks fuckin’ sexy as hell,’” shouted [Bianca] Cunningham, NYC-DSA’s co-chair. “‘This is amazing to have everybody here looking beautiful in the same room, spreading the message of socialism.’” So recorded Simon van Zuylen-Wood in “Pinkos Have More Fun,” his urban safari into the dating-mating, party-hearty socialist scene for New York magazine.

    In the middle of the dance floor I ran into Nicole Carty, a DSA-curious professional organizer I also hadn’t seen since college, who made a name for herself doing tenant work after Occupy Wall Street. (DSA can feel like a never-ending Brown University reunion.) “Movements are, yeah, about causes and about progress and beliefs and feelings, but the strength of movements comes from social ties and peer pressure and relationships,” Carty said. “People are craving this. Your social world intersecting with your politics. A world of our own.”

    Jacobin’s closest companion and competitor in the romancing of the young and the restless is The Baffler, founded in 1988, at the height of the Reagan imperium, allowed to lapse in 2006, revived from cryogenic slumber in 2010, and going strong ever since. Both quarterlies publish extensive and densely granulated reporting and analytical pieces on corporate greed, treadmill education, factory farming, and America’s prison archipelago, though The Baffler slants more essayistic and art-conscious, a Weimar journal for our time. The chief difference, however, is one of temperament and morale. Where Jacobin, surveying the wreckage and pillage, holds out the promise that the cavalry is assembling, preparing to ride, The Baffler often affects a weary-sneery, everything-sucks, post-grad-school vape lounge cynicism, as if the battle for a better future is a futile quest — the game is rigged, the outcome preordained. “Forget it, Jake, it’s Chinatown.” 

    The Baffler’s bullpen of highly evolved futilitarians leans hard on the words “hell” and “shit” to register their scorn and disgust at the degradation of politics and culture in our benighted age by rapacious capital with the complicity of champagne-flute elitists and the good old dumb-ox American booboisie. It’s Menckenesque misanthropy (minus Mencken’s thunder rolls of genius) meets Blade Runner dystopia with a dab of Terry Southern nihilism, and it’s not entirely a warped perspective — the world is being gouged on all sides by kleptocratic plunder. But The Baffler offers mostly confirmation of the system’s machinations, the latest horrors executed in fine needlepoint, no exit from the miasma. Each issue arrives as an invitation to brittle despair.

    Jacobin, by contrast, acts as more of an agent of transmutation, a mojo enhancer for the socialist mission. This is from “Are You Reading Propaganda Right Now?” by Liza Featherstone, which appeared in its winter 2020 issue:

    One of the legacies of the Cold War is that Americans assume propaganda is bad. While the term “propaganda” has often implied that creators were taking a manipulative or deceptive approach to their message — or glossing over something horrific, like World War I, the Third Reich, or Stalin’s purges — the word hasn’t always carried that baggage. Lenin viewed propaganda as critical to building the socialist movement. In his 1902 pamphlet What Is to Be Done?, it’s clear that his ideal propaganda is an informative, well-reasoned argument, drawing on expertise and information that the working-class might not already have. That’s what we try to do at Jacobin.

    It is worth asking how much these excitable Leninists actually know about their Bolshie role model. Did they notice Bernie’s response to Michael Bloomberg’s use of the word “communist” to describe him at one of the debates? He called it “a cheap shot.” Say what you will about Sanders, but he recoiled at the charge. He, at least, is familiar with Lenin’s work.

    Jacobin’s mistake was to think it could play kingmaker too. In It Didn’t Happen Here: Why Socialism Failed in the United States, Seymour Martin Lipset and Gary Marks delineated the unpatchable differences between “building a social movement and establishing a political party,” or, in this case, taking over an existing one. (As Irving Howe cautioned, “You cannot opt for the rhythms of a democratic politics and still expect it to yield the pathos and excitement of revolutionary movements.”) Political parties represent varied coalitions and competing interests, requiring expediency, horse trading, and tedious, exhausting staff work to achieve legislative ends. Lipset and Marks: “Social movements, by contrast, invoke moralistic passions that differentiate them sharply from other contenders. Emphasis on the intrinsic justice of a cause often leads to a rigid us-them, friend-foe orientation.” 

    The friend-foe antipathy becomes heightened and sharpened all the more in the Fight Club of social media, where the battle of ideas is waged with head butts and low blows. In print and online, Jacobin wasn’t just Sanders’ heraldic evangelist, message machine, and ringside announcer (“After Bernie’s Win in Iowa, the Democratic Party Is Shitting Its Pants” — actual headline), it doubled as the campaign’s primary enforcer, methodically maligning and elbowing aside any false messiah obstructing the road to the White House, ably assisted by the bully brigade of “Bernie Bros” and other nogoodniks who left their cleat marks all across Twitter. Excoriation was lavished upon pretenders who had entered the race out of relative obscurity and momentarily snagged the media’s besotted attention, such as Texas’ lean and toothy Beto O’Rourke, whose campaign peaked when he appeared as Vanity Fair’s cover boy and petered out from there (“Beto’s Fifteen Minutes Are Over. And Not a Moment Too Soon,” wrote Jacobin’s Luke Savage, signing the campaign’s death certificate).

    Pete Buttigieg received a more brutal hazing, ad hominemized from every angle. Jacobin despised him from the moment his Eddie Haskell head peeped over the parapet — that this Rhodes scholar, military veteran who served in Afghanistan, and current mayor of South Bend, Indiana had written a tribute to Bernie Sanders when he was in high school only made him seem more fishily Machiavellian in their minds. A sympathetic, personally informed profile by James T. Kloppenberg in the Catholic monthly Commonweal portrayed Buttigieg as a serious, driven omnivore of self-improvement, but in Jacobin he barely registered as a human being, derided as “an objectively creepy figure” by Connor Kilpatrick (“That he is so disliked by the American public while Sanders is so beloved…should hearten us all”), and roasted by Liza Featherstone for being so conceited about his smarts, an inveterate showoff unlike you-know-who: “Bernie Sanders, instead of showing off his University of Chicago education, touts the power of the masses: ‘Not Me, Us.’ The cult of the Smart Dude leads us into just the opposite place, which is probably why some liberals like it so much.”

    There was no accomplishment of Buttigieg’s that Jacobin couldn’t deride. Buttigieg’s learning Norwegian (he speaks eight languages) to read the novelist Erlend Loe would impress most civilians, but to Jacobin it was more feather-preening, and un-self-aware besides: “Pete Buttigieg’s Favorite Author Despises People Like Him,” asserted Ellen Engelstad with serene assurance in one of the magazine’s few stabs at lit crit. Even Buttigieg’s father — the renowned Joseph Buttigieg, a professor of literature at Notre Dame who translated Antonio Gramsci and founded The International Gramsci Society — might have washed his hands of this upstart twerp, according to Jacobin. By embracing mainstream Democratic politics, “Pete Buttigieg Just Dealt a Blow to His Father’s Legacy,” Joshua Manson editorialized. The American people, Norwegian novelists, the other kids in the cafeteria, Hamlet’s ghost — the message was clear: nobody likes you, Pete! Take your salad fork and go home!

    Buttigieg may have betrayed his Gramscian legacy but it was small beans compared to the treachery of which another Sanders rival was capable. In “How the Cool Kids of the Left Turned on Elizabeth Warren,” Politico reporter Ruairi Arrieta-Kenna chronicled Jacobin’s spiky pivot against Elizabeth Warren, that conniving vixen. Arrieta-Kenna: “It wasn’t so long ago that you could read an article in Jacobin that argued, ‘If Bernie Sanders weren’t running, an Elizabeth Warren presidency would probably be the best-case scenario.’ In April, another Jacobin article conceded that Warren is ‘no socialist’ but added that ‘she’s a tough-minded liberal who makes the right kind of enemies,’ and her policy proposals ‘would make this country a better place.’” Her platform and Sanders’ shared many of the same planks, after all.

    Planks, schmanks, the dame was becoming a problem to the Jacobin project, cutting into Bernie’s constituency and being annoyingly indefatigable, waving her arms around like a baton twirler. Warren needed to be sandbagged to open a clear lane for Bernie. Hence, “in the pages of Jacobin,” Arrieta-Kenna wrote, “Warren has gone from seeming like a close second to Sanders to being a member of the neoliberal opposition, perhaps made even worse by her desire to claim the mantle of the party’s left.” The J-squad proceeded to work her over with a battery of negative stories headlined “Elizabeth Warren’s Head Tax Is Indefensible,” “Elizabeth Warren’s Plan to Finance Medicare for All Is a Disaster,” and “Elizabeth Warren Is Jeopardizing Our Fight for Medicare for All,” and warned, quoting Arrieta-Kenna again, “that a vote for Warren would be ‘an unconditional surrender to class dealignment.’” When Warren claimed that Sanders had told her privately that a woman couldn’t defeat Donald Trump and declined to shake Bernie’s hand after the January 14 Democratic debate, she completed the arc from valorous ally to squishy opportunist to Hillary-ish villainess. Little green snake emojis slithered from every cranny of Twitter at the mention of Warren’s name, often accompanied by the hashtag #WarrenIsASnake, just in case the emojis were too subtle. Compounding her trespasses, Warren declined to endorse Sanders after she withdrew from the race, blowing her one shot at semi-redemption and a remission of sins. Near the end of Jacobin’s YouTube postmortem, Sunkara expressed sentiments that seemed to be universal in his cenacle: “Fuck Elizabeth Warren,” he explained, “and her whole crew.”

    Once Buttigieg and Warren dropped out of serious contention, the sole remaining obstacle was Joe Biden, whom Jacobin considered a papier-mâché relic in a dark suit loaned out from the prop department and seemingly incapable of formulating a complete sentence, much less a coherent set of policies — an entirely plausible caricature, as caricatures go. Occasionally goofy and even surreal in his off-the-cuff remarks, Biden doesn’t suggest deep reserves of fortitude and gravitas. In February 2020, Verso published Yesterday’s Man: The Case Against Joe Biden by Jacobin staff writer Branko Marcetic, its cover photograph showing an ashen Biden looking downcast and abject, as if bowing his weary head to the chopping block of posterity. But on the first Super Tuesday, the Biden candidacy, buoyed by the endorsement of the formidable James Clyburn and the resultant victory in South Carolina, rose from the dusty hollows and knocked Sanders sideways. It was the revenge of the mummy, palpable proof that socialism may have been in vogue with the media and the millennials but rank and file Democrats, especially those of color, weren’t interested in lacing up their marching boots. For them, the overriding imperative was not Medicare for All or the Green New Deal but denying Donald Trump a second term and the opportunity to reap four more years of havoc and disfigurement. In lieu of Eliot Ness, Joe Biden was deemed the guy who had the best shot of taking down Trump and his carious crew.

    For a publication so enthralled to the Will of the People and the workers in their hard-won wisdom, it’s remarkable how badly Jacobin misread the mood of Democratic voters and projected its own revolutionary ferment onto it — a misreading rooted in a basic lack of respect for the Democratic Party, its values, its history, its heroes (apart from FDR, since Sanders often cited him), its institutional culture, its coalitional permutations — all this intensified with an ingrained loathing for liberalism itself. From its inception, Jacobin, like so many of its brethren on the Left, has displayed far more contempt and loathing for liberals, liberalism, and the useless cogs it labels “centrists” than for the conservatives and reactionaries and neo-fascists intent on turning the country into a garrison state with ample parking. It has a softer spot for hucksters, too. It greeted libertarian blowhard podcaster Joe Rogan’s endorsement of Sanders as a positive augury — “It’s Good Joe Rogan Endorsed Bernie. Now We Organize” — and published a sympathetic profile of the odious Fox News host Tucker Carlson. This has been its modus operandi all along. In a plucky takedown of the magazine in 2017 called “Jacobin Is for Posers,” Christopher England noted, “It can claim two issues with titles like ‘Liberalism is Dead,’ and none, henceforth, that have shined such a harsh light on conservatism.” For Jacobin, liberalism may be dead or playing possum but it keeps having to be dug up and killed again, not only for the exercise but because, England writes, “conservatism, as its contributors consistently note, can only be defeated if liberalism is brought low.” Remove the flab and torpor of tired liberalism and let the taut sinews of the true change-maker spring into jaguar action.

    Which might make for some jungle excitement, but certainly goes against historical precedent. “In the United States, socialist movements have usually thrived during times of liberal upswing,” Irving Howe wrote in Socialism and America, cautioning, “They have hastened their own destruction whenever they have pitted themselves head-on against liberalism.” Tell that to Jacobin, which either didn’t learn that lesson or considered it démodé, irrelevant in the current theater of conflict. With the Democratic Party so plodding and set in its ways, a rheumy dinosaur that wouldn’t do the dignified thing and flop dead, the next best thing was to occupy and replenish the host body with fresh recruits drawn from young voters, new voters, disaffected independents, blue-collar remnants, and pink-collar workers. Tap into this vast reservoir of idealism and frustration to unleash bottoms-up change and topple the status quo, writing fini to politics as usual. Based on 2016 and how strongly Sanders ran above expectations, this wasn’t a reefer dream.

    The slogan for this campaign was “Not Me. Us,” and it turned out there were a lot fewer “us” this time around. “Mr. Sanders failed to deliver the voters he promised,” wrote John Hudak, a deputy director and senior fellow at the Brookings Institution, analyzing the 2020 shortfall. “Namely, he argued that liberal voters, new voters, and young voters would dominate the political landscape and propel him and his ideas to the nomination. However, in nearly every primary through early March, those voters composed significantly smaller percentages of the Democratic electorate than they did in 2016.” It wasn’t simply a matter of Sanders competing in a more crowded field this time, Hudak reported. In the nine primaries after Warren’s withdrawal, when it became a two-person race, “Mr. Sanders underperformed his 2016 totals by an average of 16.0%, including losing three states that he won in 2016 (Idaho, Michigan, and Washington).” How did Jacobin miss the Incredible Sanders Shrinkage of 2020? 

    It became encoiled in its own feedback loop, hopped up on its own hype. “Twitter — a medium that structurally encourages moral grandstanding, savage infighting, and collective action — is where young socialism lives,” van Zuylen-Wood had observed in “Pinkos Have More Fun,” and Twitter, to state the obvious, is not the real world, but a freakhouse simulacrum abounding with trolls, bots, shut-ins, and soreheads. Jacobin and its allies so dominated online discourse that they didn’t comprehend the limits of that dominance until it hit them between the mule ears. They fell victim to what has come to be known as Cuomo’s Law, which takes its name from the New York gubernatorial contest in 2018 between Andrew Cuomo and challenger Cynthia Nixon, a former cast member of Sex and the City and avowed democratic socialist. On Twitter, Nixon had appeared the overwhelming popular favorite, Cuomo the saturnine droner that no one had the slightest passion for. But Cuomo handily defeated Nixon, demonstrating the disconnect between online swarming and actual turnout: ergo, Cuomo’s Law. 

    Confirming Cuomo’s Law, Joe Biden probably had less Twitter presence and support than any of the other major candidates, barely registering on the radar compared to Sanders, and yet he coasted to the top of the delegate count until the coronavirus hit the pause button on the primary season. Sanders’ endorsement of Biden in a joint livestream video on April 13th not only conceded the inevitable but delivered a genuine moment of reconciliation that caught many off-guard, steeped in the residual rancor of 2016. Whatever his personal disappointment, Sanders seems to have made peace with defeat and with accepting a useful supporting role in 2020; he refuses to dwell in acrimony. The same can’t be said about many of the defiant dead-enders associated with Jacobin, who, when not rumor-mongering about Biden’s purported crumbling health, cognitive decline, incipient dementia, and basement mold, attempted to kite Tara Reade’s tenuous charges of sexual harassment and assault at the hands of Biden into a full-scale Harvey Weinstein horror show, hoping the resultant furor would dislodge Biden from the top of the ticket and rectify the wrong done by benighted primary voters. For so Jacobin had written and so it was said: “If Joe Biden Drops Out, Bernie Sanders Must Be the Democratic Nominee.”

    Like Norman Thomas, the longtime leader of the Socialist Party in America, Bernie Sanders bestowed a paternal beneficence upon the left that has given it a semblance of unity and personal identity. He is the rare politician one might picture holding a shepherd’s crook. The problem is that identification with a singular leader is an unsteady thing for a movement to lean on. Long before Thomas died in 1968, having run for the presidency six times, the socialist movement had receded into gray twilight, upstaged by the revolutionary tumult on campuses and in cities. Jacobin is determined to make sure history doesn’t reprise itself once Sanders enters his On Golden Pond years. Preparing the post-Bernie stage of the socialist movement, a pair of Jacobin authors, Meagan Day and Micah Uetricht, collaborated on Bigger Than Bernie: How We Go from the Sanders Campaign to Democratic Socialism (Verso), a combination instruction manual and inspirational hymnal.

    The duo doesn’t lack for reasons to optimize the upside for the ardent young socialists looking to Alexandria Ocasio-Cortez as their new scoutmaster. The coronavirus crisis has laid bare rickety infrastructure, the lack of preparedness, near-sociopathic incompetence, and widespread financial insecurity that turned a manageable crisis into a marauding catastrophe, making massive expansion of health coverage, universal basic income, and debt relief far more feasible propositions. The roiling convulsions following the death of George Floyd once again exposed the brutal racism and paramilitarization of our police forces. A better, more humane future has never cried out more for the taking. But there is a catch: it can be seized only in partnership with liberal and moderate Democrats, no matter how clammy the clasping hands might be, no matter how mushy the joint resolutions, and this will be galling for Jacobin’s pride and vocation, making it harder for them to roll out the tumbrils with the same gusto henceforth. The magazine, after conducting introspective postmortems (“Why the Left Keeps Losing — and How We Can Win”) and intraparty etiquette lessons (“How to Argue with Your Comrades”), finds itself feeling its way forward, with the occasional fumble. When Bhaskar Sunkara announced on Twitter that he intends to cast his presidential vote for Green Party candidate Howie Hawkins (who he?), one of those showy public gestures that leaves no trace, he received pushback from fellow comrades in The Nation (“WTF Is Jacobin’s Editor Thinking in Voting Green?”) and elsewhere. Clarifying his position in The New York Times, where clarifications learn to stand up tall and straight, Sunkara assured the quivering jellies who read the opinion pages that “contrary to stereotypes, we are not pushing a third candidate or eager to see Mr. Trump’s re-election. Instead we are campaigning for core demands like Medicare for All, saving the U.S. Postal Service from bipartisan destruction, organizing essential workers to fight for better pay and conditions throughout the coronavirus crisis and backing down-ballot candidates, mostly running on the Democratic ballot line… Far from unhinged sectarianism, this is a pragmatic strategy.”

    Jacobin pragmatism? This is a historical novelty. By November we will know if they are able to make it to the altar without killing each other. It’s hard to settle once you’ve had a taste of Lenin.

    Night Thoughts

    Long ago I was born.
    There is no one alive anymore
    who remembers me as a baby.
    Was I a good baby? A
    bad? Except in my head
    that debate is now
    silenced forever.
    What constitutes
    a bad baby, I wondered. Colic,
    my mother said, which meant
    it cried a lot.
    What harm could there be
    in that? How hard it was
    to be alive, no wonder
    they all died. And how small
    I must have been, suspended
    in my mother, being patted by her
    approvingly.
    What a shame I became
    verbal, with no connection
    to that memory. My mother’s love!
    All too soon I emerged
    my true self,
    robust but sour,
    like an alarm clock.

    Mahler’s Heaven and Mahler’s Earth

    Gustav Mahler: the face of a man wearing glasses. The face attracts the attention of the viewer: there is something very expressive about it. It is a strong and open face, we are willing to trust it right away. Nothing theatrical about it, nothing presumptuous. This man wears no silks. He is not someone who tells us: I am a genius, be careful with me. There is something energetic, vivid, and “modern” about the man. He gives an impression of alacrity: he could enter the room any second. Many portraits from the same period display men, Germanic and not only Germanic men, politicians, professors, and writers, whose faces disappear stodgily into the thicket of a huge voluptuous beard, as if hiding in it, disallowing any close inspection. But the composer’s visage is naked, transparent, immediate. It is there to speak to us, to sing, to tell us something.

    I bought my first recording of Gustav Mahler many decades ago. At the time his name was almost unknown to me. I only had a vague idea of what it represented. The recording I settled on was produced by a Soviet company called Melodiya — a large state-owned (of course) company which sometimes produced great recordings. There was no trade in the Soviet Union and yet the trademark Melodiya did exist. It was the Fifth Symphony, I think — I’ve lost the vinyl disc in my many voyages and moves — and the conductor was Yevgeny Svetlanov. For some reason the cover was displayed in the store window for a long time; it was a modest store in Gliwice, in Silesia. Why the display of Mahler’s name in this provincial city which generally cared little for music?

    It took me several days before I decided to buy the record. And then, very soon, when I heard the first movement, the trumpet and the march, which was at the same time immensely tragic and a bit joyful too, or at least potentially joyful, I knew from this unexpected conjunction of emotions that something very important had happened: a new chapter in my musical life had opened, and in my inner life as well. New sounds entered my imagination. At the same time I understood — or only intuited — that I would always have a problem distinguishing between “sad” and “joyful,” both in music and in poetry. Some sadnesses would be so delicious, and would make me so happy, that I would forget for a while the difference between the two realms. Perhaps there is no frontier between them, as in the Schengen zone of contemporary Europe.

    The Fifth Symphony was my gateway to Mahler’s music. Many years after my first acquaintance with it, a British conductor told me that this particular symphony was deemed by those deeply initiated in Mahler’s symphonies and Mahler’s songs as maybe a bit too popular, too accessible, too easy. “That trumpet, you know.” “And, you know, then came Visconti,” who did not exactly economize on the Adagietto from the same symphony in the slow, very slow shots in Death in Venice, where this music, torn away from its sisters and brothers, the other movements, came to serve a mass-mystical, mass-hysterical cultish enthusiasm, floating on the cushions of movie-theater chairs. Nothing for serious musicians, nothing for scholars and sages…. But I do not agree. For me the Fifth Symphony remains one of the living centers of Gustav Mahler’s music and no movie will demote it, no popularity will diminish it, no easily manipulated melancholy in a distended Adagietto will make me skeptical about its force, its freshness, its depth.

    As for that trumpet: the trumpet that I heard for the first time so many years ago had nothing to do with the noble and terrifying noises of the Apocalypse. It was nothing more than an echo of a military bugle — which, the biographers tell us, young Gustav must have heard almost every week in his small Moravian town of Jihlava, or Iglau in German, which was the language of the Habsburg empire, where local troops in their slightly comic blue uniforms would march in the not very tidy streets to the sounds of a brass orchestra. Yet there was nothing trivial or farcical about this almost-a-bugle trumpet. It told me right away that in Mahler’s music I would be exposed to a deep ambivalence, a new complication — that the provincial, the din of Habsburgian mass-culture, would forever pervade his symphonies. This vernacular, this down-to-earth (down to the cobblestones of Jihlava’s streets) brass racket, always shadows Mahler’s most sublime adagios.

    The biographical explanation is interesting and important, but it is not sufficient. An artist of Mahler’s stature does not automatically or reflexively rely on early experiences for his material. He uses them, and transposes them, only when they fit into a larger scheme having to do with his aesthetic convictions and longings. The strings in the adagios seem to come from a different world: the violins and the cellos in the adagios sound like they are being played by poets. But then in the rough scherzo-like movements we hear the impudent brass. From the clouds to the cobblestones: Mahler may be a mystical composer, but his mysticism is tinged with an acute awareness of the ordinary, often trite environment of all the higher aspirations.

    His aesthetic convictions and longings: what are they? Judging from the music, one thing seems to be certain: this composer is looking for the high, maybe for the highest that can be achieved, for the religious, for the metaphysical — and yet he cannot help hearing also the common laughter of the low streets, the unsophisticated noise of military brass instruments. His search for the sublime never takes place in the abstract void of an inspiration cleansed of the demotic world which is his habitat. Mahler confronts the predicament well known to many artists and writers living within the walls of modernity but not quite happy with it, because they have in their souls a deep yearning for a spiritual event, for revelation. They are like someone walking in the dusk toward a light, like a wanderer who does not know whether the sun is rising or setting. They have to decide how to relate to everything that is not light, to the vast continent of the half trivial, half necessary arrangements of which the quotidian consists. Should they ignore it, or attempt to secede from it? But then what they have to say will be rejected as nothing more than lofty rhetoric, as something artificial, as unworldly in the sense of unreal. They will be labeled “reactionary” or, even worse, boring. Anyway, aren’t they to some degree made from the same dross that they are trying to overcome, to transcend? 

    And yet if they attach too much importance to it, if they become mesmerized by what is given, by the empirical, then the sheer weight of the banality of existing conditions might crush them, flatten them to nothingness. The dross, right. But let us be fair about modernity: it has plenty of good things as well. It has given us, among other things, democracy and electricity (to paraphrase Lenin). Any honest attitude toward modernity must be extremely complex. Modernity, for better and worse, is the air we breathe. What is problematic for some artists and thinkers is modernity’s anti-metaphysical stance, its claim that we live in a post-religious world. Yet there are also artists and thinkers who applaud modernity precisely for its secularism and materialism, like the well-known French poet who visited Krakow and during a public discussion of the respective situations of French poetry and Polish poetry said this: “I admire many things in present-day Polish poetry, but there is one thing that makes me uneasy — you Polish poets still struggle with God, whereas we decided a long time ago that all that is totally childish.”

    To be sure, they — the anti-moderns, as Antoine Compagnon calls them — may also become too bitter and angry, so that their critique of the modern world can go too far and turn into an empty gesture of rejection. In his afterword to a collection of essays by Gerhard Nebel — the German conservative thinker, an outsider, once a social-democrat, always an anti-Nazi, after World War II a marginal figure in the intellectual landscape of the Bundesrepublik, a connoisseur of ancient Greek literature, someone who saw dealing with die Archaik as one of the remedies against the grayness of the modern world — Sebastian Kleinschmidt presents such a case. He admires the many merits of Nebel’s writing, his vivid emotions, his intolerance of any routine, of any Banausentum or life lived far away from the appeal of the Muses, his passionate search for the real as opposed to the merely actual — but he is skeptical of Nebel’s overall dismissal of modern civilization, since it is too sweeping to be persuasive, too lacking in nuances and distinctions. Perhaps we can put the problem this way: there is no negotiation involved, no exchange, no spiritual diplomacy.

    When coping with modernity, with those aspects of it which insist on curbing or denying our metaphysical hunger, we must be not only as brave as Hector but also as cunning as Ulysses. We have to negotiate. We need to borrow from modernity a lot: since we encounter it every day, how could we avoid being fed and even shaped by it? The very verb “to negotiate” is a good example of the complexity of the situation. It comes from negotium, from the negation of otium. Otium is the Latin word for leisure, but for contemplation too. Thus the verb to negotiate denotes a worldly activity that tacitly presupposes the primacy of unworldly activities (because the negation comes second, after the affirmation).

    In French, le négoce means commerce, business. We can add to it all the noise of the market and the parliament. When we negotiate, we have no otium. But it is also possible to negotiate in order to save some of the otium. We can negate otium for a while but only in order to return to it a bit later, once it has been saved from destruction. As I say, we must be cunning. 

    By the way, the notion of otium that gave birth to the verb “to negotiate” is not a marginal category, something that belongs only to the annals of academia, to books covered by dust. For the Ancients it was a central notion and a central activity, the beginning and the end of wisdom. And even now it plays an important role in a debate in which the values of modernity are being pondered: those who have problems with the new shape of our civilization accuse it of having killed otium, of having produced an infinity of new noises and activities which contribute to the end of leisure, to the extermination of contemplation. 

    But can we discuss Mahler’s music along with poetic texts by, say, Yeats and Eliot, along with the other manifestoes of modernism? Talking about music in a way that makes it seem like philosophy or a philosophical novel, a kind of Zauberberg for piano and violin, is certainly flawed. Questions are methodically articulated in philosophy and, though never fully answered, they wander from one generation to another, from the Greeks to our contemporaries. Does art need such questions? Does music need them? The first impulse is to say no, art has nothing to do with this sort of intellectual inquiry. Isn’t pure contemplation, separated from any rational discourse, the unique element of art, both painting and music, and perhaps poetry as well? 

    But maybe pure contemplation does not need to be so pure. We do not know exactly how it works (another question!), but we do know that art always takes on some coloring from its historic time, from the epoch in which it is created. Art obviously has a social history, and earthly circumstances. And yet impure contemplation is still contemplation. Let us listen for a minute to the words of a famous painter, an experienced practitioner — to Balthus in his conversations with Alain Vircondelet, which were conducted in the last years of the painter’s life:

    Modern painting hasn’t really understood that painting’s sublime, ultimate purpose — if it has one — is to be the tool or passageway to answering the world’s most daunting questions that haven’t been fathomed. The Great Book of the Universe remains impenetrable and painting is one of its possible keys. That’s why it is indubitably religious, and therefore spiritual. Through painting, I revisit the course of time and history, at an unknown time, original in the true sense of the word. That is, something newly born. Working allows me to be present on the first day, in an extreme, solitary adventure laden with all of past history.

    How fascinating: a great painter tells us that in his work he used not only his eye and his hand but also his reason, his philosophical mind; that when he painted he felt the presence of great questions. Even more: he tells us that the pressure of these questions was not inconsequential, that it led him to spirituality. We know that Mahler, in a letter to Bruno Walter, also mentioned the presence of great questions and described his state of mind while being in contact with the element of music in this way: “When I hear music, even when I am conducting, I often hear a clear answer to all my questions — I experience clarity and certainty.”

    Certainly, the questions that sit around a painter or a composer like pensive cats are very different from those which besiege a philosopher. Do they require a response? Here is one more authority: in a note serving as a preface to the publication of four of his letters about Nietzsche, Valéry remarked that “Nietzsche stirred up the combativeness of my mind and the intoxicating pleasure of quick answers which I have always savored a little too much.” The irony of it: “the intoxicating pleasure of quick answers” in a thinker who, as we know, was so proud of his philosophizing with a hammer. Of course, this one sentence comprises in a nutshell the entire judgment that the mature Valéry passed on Nietzsche — the early temptation and the later rejection of such a degree of “the combativeness of the spirit.” And it confirms our intuition: the questions that accompany art, painting, music, and poetry cannot be answered in a way similar to debates in philosophy seminars, and yet they are an invisible and inaudible part of every major artistic exertion.

    In a way, Mahler’s doubleness of approach seems completely obvious; the brass and the strings attend each other, and need each other, in the complex patterns of his symphonies. I have read that in his time he was accused by many critics of triviality in his music. They claimed that his symphonies lacked the dignity of Beethoven’s symphonies, the depth of great German music. What they ferociously attacked as trivial is probably the thing that I admire so much in Mahler’s music — the presence of the other side of our world, the inclusion of its commonness and its coarseness, of the urban peripheries, of village fairs, of the brass — the quotation of provincial life, of public parades and military marches, almost like in Nino Rota’s scores for Fellini. Very few among Mahler’s contemporaries were able to see the virtue of it.

    The charge of triviality also had anti-Semitic undertones and followed in the footsteps of Wagner’s accusation, in his “Judaism in Music,” that Jewish composers were not able to develop a deep connection with the soul of the people, and were limited to the world of the city only, gliding slickly on the surface. Jewish composers apparently could not hear the song of the earth, argued such critics. How wonderful, then, that Mahler triumphed in his own Song of the Earth! Jewish composers were accused — among the many sins of which they were accused — of introducing modern elements into their music. Never mind that one of the principal modernizers of Western music was Wagner himself. 

    I have yet to understand why Mahler has for so long, from the very beginning, been so overwhelmingly important for me, so utterly central to the evolution of my soul. Once, in speaking with some American friends, I asked them who “made” them, in the sense of a master, a teacher, un maître à penser, and the reason was that I wanted to tell them that Gustav Mahler made me. It was an exaggeration, I know, and a bit precious. I had other masters as well. And yet my statement was not false. Did it have to do only with the sonorities of his symphonies, with the newness of his music, the unexpected contrasts and astonishing passages swinging from the lyric to the sardonic? Was it the formal side uniquely? For many years I resisted the temptation to translate my deep emotional bond to his music — the deep consonance between Mahler’s work and my own search in the domain of poetry — into intellectual terms, maybe fearing that too much light shed on it would diminish its grip on my imagination. I still hold this superstitious view, but I also suspect that there may be some larger intellectual benefit to be gained from an exploration of my obsession.

    For everyone who has a passionate interest in art and in ideas, sooner or later a problem arises. When we look for truth and try to be honest, when we try as a matter of principle to avoid dogmatism and any sort of petrification, any blind commitment to this or that worldview, we are, it seems, necessarily condemned to deal with shards, with fragments, with pieces that do not constitute any whole — even if, consciously or not, we strive for the impossible “whole.” But then if we also harbor a love for art — and it is not at all unusual to have these two passions combined in a single individual — a strange tension appears: in art we deal with forms which, by definition, cannot be totally fragmentary. To be sure, at least since the Romantic moment we have been exposed to fragments, and accustomed to fracture, in all kinds of artistic enterprises, from music and poetry to painting — but even these fragments tend to acquire a shape. If we juxtapose them with the “truth fragments,” with Wittgensteinian scraps of philosophical results, an integrated pattern is created by virtue of some little embellishment, by a sleight of hand; a magician is at work who tends to forget the search for truth because the possibility of a form, a more or less perfect form, suddenly attracts him more strongly than the shapelessness of a purely intellectual assessment. 

    These two dissimilar but related hunts, one for truth, one for form, are not unlike husky dogs pulling a sled in two slightly different directions: they are sometimes able to achieve an almost-harmony. The sled fitfully moves forward, but at other times the competing pressures threaten to endanger the entire expedition. So, too, are our mental hunts and journeys, forever hesitating between a form that will allow us to forget the rather uncomfortable sharpness of truth and a gesturing toward truth that may make us forget the thrill of beauty and the urge to create, at least for the time being.

    This brings us back to Mahler. The doubleness in his music that I have described may be understood as reflecting the ambiguity of the double search for truth and form. Mahler was a God-seeker who recognized the ambivalence of such a quest in art. He was torn between the search for the voluptuousness of beauty and the search for the exactness of truth.

    Hartmut Lange, a German writer living in Berlin, a master of short prose, told me once that Mahler’s Song of the Earth, which he listens to all the time and adores in a radical way, “is God.” I was taken aback. The deification of this almost-symphony, which I also ardently admire, made me feel uneasy. But I find it more than interesting that this great music can be associated with, and even called, God. This suggests a quasi-religious aspect of the music, and even a sober secularist cannot escape at times placing the work within the circle nearing the sacred.

    Among the many approaches to the sacred we may distinguish two: one which consists in searching, in a quest, and is conducted in a climate of uncertainty and even doubt, and another which proclaims a kind of sureness, a positive certainty, a eureka-like feeling that what was sought has been found. In our tormented and skeptical time it is not easy to find examples of such a positive and even arrogant attitude, at least not within serious culture. Among the great modern poets and writers only few were blessed by certainty. Even the great Pascal had his doubts, and so much earlier. Gustav Mahler belongs to the seekers, not the finders. The quest is his element, and doubt is always near.

    It is true for both poetry and music: whenever one approaches an important work, one is much more outspoken when it comes to discussing the elements within it that will yield to the intellectual or even dialectical categories that the reader or listener cherishes. The other ingredients, especially those that represent pure lyricism and thus are at the very heart of the work in question, are hardly graspable, at least in words. What can we say? It is beautiful, it pierces my soul, or some other platitude of the sort. Or we can just sigh to signal our delight. Sighing, though, is not enough; it is too inarticulate, and in print it evaporates altogether. This is the misery of writing about art: the very center of it remains almost totally ineffable, and what can be rationally described is a frame rather than the substance itself.

    A frame that enters into dialogue with its period, with its cultural and historical environment, can be much better described than the substance of a symphony or a painting. The nucleus of a work, or of an artist’s output, is less historical, less marked by the sediments of time, and therefore mysterious. It is also more personal, more private. This is certainly the case with Mahler’s music, whose very core is constituted by those lyric movements, those endless ostinati that we find everywhere, first in his songs, in Lieder eines fahrenden Gesellen and the other lieder, then in his symphonies, and supremely in their adagios, and then finally in the unsurpassable Lied von der Erde. And the Ninth Symphony! I don’t have in mind only the final Adagio but also the first movement, the Andante comodo, which displays an incredible vivacity and, at the same time, creates an unprecedentedly rich musical idiom — a masterful musical portrayal of what it means to be alive, with all the quick changes and stubborn dramas, the resentments and the raptures, that constitute the exquisite and weary workshop of the mind and the heart.

    But let us not forget, when we celebrate the lyric sections, the sometimes simple melodies, and the long ostinati, let us not forget all the intoxicating marches, the half sardonic, half triumphant marches that originated in a small Moravian town but then crossed the equator and reached the antipodes. These marches give Mahler’s music its rhythm, its vigor, its muscle. There is nothing wan in Mahler’s compositions, nothing pale on the order of, say, Puvis de Chavannes; instead they display, even in their most tender and aching passages, an irreversible vitality. The marches propel the music and give it its movement, its strolls and dances and strides. The “vulgar” marches convey the mood of a constant progression, maybe even of a “pilgrim’s progress.” Nothing ever stagnates in Mahler’s compositions; they are on the move all the time.

    It’s unbecoming to disagree with someone who was a great Mahler connoisseur and also contributed enormously to the propagation of his work, but it is hard to accept Leonard Bernstein’s observation that the funeral marches in Mahler’s symphonies are a musical image of grief for the Jewish God whom the composer abandoned. The problem is not only that there is scant biographical evidence for such an interpretation. More importantly, the marches are more than Bernstein says they are. They represent no single emotion. Instead they oscillate between mourning and bliss and thus stand (or walk or dance) high above any firm monocausal meaning.

    In the Song of the Earth, it is the sixth and last movement, Der Abschied, the Farewell, that crowns Mahler’s entire work. Musicologists tell us that its beauty consists mainly in the combination of a lyrical melodic line with the rich chromaticism of the orchestra. But obviously such an observation can barely do justice to the unforgettable charm of this sensual music which unwillingly bids farewell to the earth; we hear in this work the tired yet ecstatic voice of the composer who knew how little life was left to him. Perhaps only in Rilke’s Duino Elegies can we find an example of a similar seriousness in embracing our fate, an instance of a great artist finally abolishing any clear distinction between sadness and joy.

    There is a fine poem written in the early 1980s by the Swedish poet and novelist Lars Gustafsson. It is called “The Stillness of the World Before Bach” and it caught the attention of many readers. Here is part of it:

    There must have been a world before
    the Trio Sonata in D, a world before the A minor partita,
    but what kind of a world?
    A Europe of vast empty spaces, unresounding,
    everywhere unawakened instruments,
    where the Musical Offering, the Well-Tempered Clavier
    never passed across the keys.
    Isolated churches
    where the soprano line of the Passion
    never in helpless love twined round
    the gentler movements of the flute […]

    [translated into English by Philip Martin]

    Of course there were many voices and many composers before Bach, and not at all “a Europe of vast empty spaces.” What would Palestrina, Gabrieli, and Monteverdi say? What would the monks say who created and developed Gregorian chant? Still, in Gustafsson’s poem we immediately recognize some deeper truth. I imagine that in a similar poem in which Gustav Mahler would replace Johann Sebastian Bach, the poet would describe not “a Europe of vast empty spaces” but rather a Europe of cities, great and small ones, of empty Sunday streets, of empty parks, of waiting rooms.

    The Mahler gesture resembles in some respect the Bach achievement, but it is very different too. Bach was a genius of synthesis, who appeared after centuries of the development of Western art and on this fertile soil built a great edifice of music. There is less synthetic energy in Mahler’s creation; the significance of his work seems to reside in its spiritual implication. Mahler, more than any of his contemporaries, tries to graft onto this lay world of ours a religious striving, to impart a higher meaning to a largely meaningless environment without ever forgetting or concealing the obvious features of a secular age.

    The Sludge

    I was never more hated than when I tried to be honest….
    I’ve never been more loved and appreciated than when I tried
    to “justify” and affirm someone’s mistaken beliefs; or when
    I tried to give my friends the incorrect, absurd answers they
    wished to hear. In my presence they could talk and agree with
    themselves, the world was nailed down, and they loved it.
    They received a feeling of security.

    RALPH ELLISON, INVISIBLE MAN

    One Friday afternoon, in a carpeted alcove off the main sanctuary of my school, a Jewish school in the suburbs of Philadelphia, my class collected in a circle as we did every week. A young, liberally perfumed Israeli woman in a tight turtleneck sweater read to us from a textbook about the exodus from Egypt. I asked her why our ancestors had been enslaved to begin with, and then wondered aloud whether it was because only former slaves can appreciate freedom. I remember the feeling of the idea forming in my very young mind, and the struggle to articulate it. Clumsily, with a child’s vocabulary, I suggested to my teacher that Jewish political life began with emancipation, and that this origin ensured that gratitude to God would be the foundation of our national identity. Could that have been God’s motivation? I don’t remember her answer, only her mild bemusement, and my impression that she did not have the philosophical tools or the inclination to engage with the question. I was left to wonder on my own about the nature of slavery, the distant memories that undergird identity, and God’s will, without a teacher, without a framework. I was by myself with these questions.

    Of course, we were not gathered in that schoolchildren’s circle to study philosophy. We were studying the Biblical tale not in order to theorize about the nature of slavery and freedom, or to acquire a larger sense of Jewish history, but because it was expected of us, as of every other grade in the school, this week and every week since the school’s founding, to study the weekly portion of the Torah, because that is what Jewish students in a Jewish school of that denomination do. I had mistaken a social activity for an intellectual one. The norms of a community demanded this conversation of us, because otherwise the community would be suspect. People would whisper that graduates of our school lacked the capacity for full belonging within their particular Jewish group, because we had failed to receive the proper training in membership. The overarching objective of our education was initiation. The prayers that we were taught to say before and after eating, and upon waking up in the morning, and going to the bathroom, and seeing a rainbow, and on myriad other quotidian occasions, served the same purpose. These were not theological practices; we were not taught to consider the might and creative power of the God whom we were thanking — the meanings of what we recited, the ideas that lay beneath the words. We uttered all those sanctifying words because it was what our school’s permutation of the Jewish tradition taught Jews to do. We were performing, not pondering.

    Divine commandments were the sources and accoutrements of our liturgies and rituals. But we lingered much longer over the choreography than over the divinity. The substance of our identity was rules, which included the recitation of certain formulas for certain concepts and customs. And our knowledge of the rules, how or whether we obeyed them, would signal what sort of Jews we were. The primary purpose of this system was to provide talismans that we could use to signal membership. In the context of my religious education, the meaning of the symbols was less important than how I presented them. Badges were more central than beliefs. The content of the badges — the symbols and all the concomitant intellectual complications — was left alone. Marinating within that culture inculcated in me an almost mystical reverence for my religion and for its God because it placed them in a realm outside of reason. I could not interrogate them: holiness is incommensurate with reason. Without the indelible experience of that schooling in anti-intellectualism, the beauties and intoxicants of tradition would be inaccessible to me. Even now, when I witness expressions of fine religious faith, I am capable of recognizing and honoring them because of that early training.

    The anti-intellectualism had another unwitting effect: the indifference of my community to the cerebral and non-communal dimensions of the way we lived meant that I could develop my own relationship with them. Since they were unconcerned with the aspects of religious life to which I most warmed, I was free to discover them independently. They didn’t care what I thought, so I set out to think. In this manner I began to acquaint myself with fundamental human questions, to feel my way around and develop the rudiments of ideas about morality, slavery, love, and forgiveness. My academic syllabi were rife with references to these themes, but they were rarely discussed directly. They were like so many paintings on the wall: we would walk by them a hundred times a day and never stop and look. As children we became comfortable in their presence, but we did not exactly study them together, so I studied them alone, without the commentaries that would harden them into a catechism.

    In a certain ironic sense, I was lucky. When someone is taught to think about fundamental human questions within a group, her conception of those themes will be shaped by the group. The goal of that sort of group study, perhaps not overtly articulated but always at work, would be to initiate her into a particular system of particular people, to provide her with a ready-made attitude and a handy worldview, to train her to think and speak in the jargon of that worldview, and to signal membership within the company of those who espouse it.

    If language is a condition of our thoughts, it is also a source of their corruption. Thinking outside a language may be impossible, but thought may take place in a variety of vocabularies, and the unexamined vocabularies, the ones that we receive in tidy and dogmatic packages, pose a great danger to clear and critical thinking. My good fortune was that I was not socialized philosophically. My religious tradition was not presented to me as a philosophical tradition. I was not inducted into a full and finished vernacular that would dictate or manipulate how I would think. And I was young enough not to have become so sensitive to political or cultural etiquettes that they would inhibit or mitigate independent reflection and study. The space in my head into which I retreated to think was built and outfitted mainly by me, or so it felt; and there, in that detached and unassisted space, I became accustomed to the looming awareness that these themes were too complicated for me to really understand (an awareness which provoked an ineradicable distrust for communal ideological certainties). Yet this did not diminish my desire to spend time there. My relationship with my burgeoning ideas felt privileged, the way a child feels playing with plundered high heels or lipstick without the context to understand the social significations that those instruments may one day carry. If I misunderstood them, if they baffled me, there was no reason to be embarrassed. My sense of possibility was large and exciting, because it was unburdened by the adult awareness that convictions have social consequences by which they may then be judged. 

    My limited field of human experience — the people I knew, the fictional and historical figures to whom I had been introduced — comprised all the materials with which I could conduct my solitary musings. I studied the rhythms and tendencies of human interactions. I watched the way that other people responded to each other, the way they held themselves when they were alone or in society. This stock of knowledge informed how I thought people in general do and ought to behave. (My theory of slavery and emancipation was a product of this discipline: for example, I noted that I got anxious for recess when in school but bored by endless freedom on the weekend or vacation. We appreciate freedom when we are enslaved: is that what Scripture wanted me to understand? Well, that was consistent with my experience.) My inquiries were catalyzed and sustained by pure curiosity about human beings and in retrospect they seem to have been relatively untainted by my community’s biases. Perhaps I am idealizing my beginnings, but I really do have the memory of an open mind and a pretty level playing field. Like the adolescent heroines in Rohmer’s films, I genuinely wanted to know how people are so I could figure out how I should be.

    The effects of this solitary and informal mental life were permanent. Having developed the space in my head independent of a received blueprint, my intellectual methods would always be fundamentally unsocialized. Despite the external pressures, I have never successfully unlearned these attitudes. I don’t doubt that there were many influences from my surroundings, from my community and my culture, that I was absorbing without recognizing them, but still I felt significantly on my own and, as I say, lucky. But I was also quite lonely. The loneliness intensified as I got older and my family became more religious. The high school that I attended was much more traditional than my earlier schools had been. There were more rules, endless esoteric rituals and cultural habits that I had to learn in order to convince myself and others that I was one of them, that I belonged there. I failed often. There was so much that I didn’t know, and, more to the point, there was something about the weather around me that perpetually exposed my difference. No matter how hard I tried to remake myself into a member, to dismantle and rebuild the space in my head, everyone could sense that the indoctrination was not taking. I recited the script with a foreign accent. 

    In a flagrant, chronic, and no doubt annoying manifestation of otherness, I would badger my teachers and peers for reasons and explanations. Why were we — I was a “we” now — obeying all these rules? I was not in open revolt: I sensed that our tradition was rich and I was eager to plumb the treasures that I had been bequeathed. But it seemed a gross dereliction to obey the laws without considering their purpose. My intentions were innocent, perhaps even virtuous, but my questions were discomfiting anyway. Even now I often recall a particularly representative afternoon. A group of girls in my grade were discussing the practice called shmirat negiah, the strict observance of physical distance between the sexes, which prohibits men and women who are not related from touching one another. I wondered: Why had the rule been written to begin with? When did Jews begin to enforce it? What kind of male-female dynamic did it seek to cultivate? Did such emphatic chasteness in preparation for marriage help or harm that union? These were reasonable questions, except that in a context of orthodoxy they could be received as subversive. A girl I admired — a paragon of membership — complained that the practice made her awkward and scared of men, and that she could not understand why her mother enforced it. “Why don’t you just ask your mother why she thinks you ought to do it?” I finally asked. “Because,” she sighed, “she’ll just tell me that I have to because that is what Jews do.” My mind recoiled. Why on earth would a mother shirk the opportunity (and the responsibility) to help her child grapple with such an important question? Why wouldn’t she consider the law itself a catalyst for conversations about such primary themes? Yet even as I asked myself these questions, I knew the answer. Membership mattered more than meaning.

    But surely that attitude did not govern all human communities. This could not be all there was. Somewhere, I assumed, there were institutions in which people directly addressed the ideas I wondered about on my own. Somewhere there were groups in which the exploration of meaning was an essential feature of membership. In the secular world, which I naively called “the real world,” I imagined intellectual camaraderie would be easier to find. Surely secular people, when they talk about justice, sex, mercy, and virtue, must be interested in seriously engaging those themes. In the real world, surely, there would be no orthodoxies, and people would have no reason to incessantly analyze one another’s behaviors in order to grant or deny them legitimacy. They would not spread petty rumors about neighbors failing to uphold the code or refuse to eat at the tables of those who were not exactly like them, as the worst members of my origin bubble did. They would not, forgive me, cancel each other.

    Of course I was wrong. As it turns out, the secular world also has liturgies, dogmas, ostracisms, and bans. It, too, hallows conformity. It has heretics, and it even has gods: they just don’t call them that. In college I discovered the temples of the progressives, the liberals, the conservatives, and more. Each has a vernacular of its own, composed of dialects and rituals which serve to establish membership, welcome members, and turn away outsiders. In this realm of proud secularity, my religious upbringing proved unexpectedly useful. It had prepared me to identify the mechanisms of group power, and the cruel drama of deviance and its consequences. (What is cancellation, if not excommunication?) It turned out that all too often in the real world, the open world, the democratic world, the enlightened world, when people talk about fundamental human questions they are far more interested in signaling membership and allegiance than in developing honest answers to them.

    It is true that many of these questions are hard to answer. The intensity with which people hold convictions belies their complexity. Independent and critical reasoning is not for the faint of heart, and the length and difficulty of the search may eventually make skeptics or cynics of those who undertake it. It is much simpler to memorize a script, and to establish a quasi-mystical allegiance to one’s politics. Holiness is incommensurate with reason, remember. Still, the demands of a nuanced politics are not, I think, why people are reluctant to wrestle with ideas on their own. There are advantages to wholesale worldviews and closed systems. They provide something even more alluring than conviction: solidarity. They are a cure not only for perplexity but also for loneliness. A group with which to rehearse shared dogmas, and to style oneself in accordance with the aesthetic that those dogmas evoke: this is not a small thing. Thus the answer to a philosophical or moral question becomes…community. We choose our philosophy on the basis of our sociology. This is a category mistake — and the rule by which we live.

    In a different world, most people would readily admit ignorance or doubt about political or cultural subjects the same way that my young peer would have had no reason to refrain from hugging friends of the opposite gender if Jewish custom did not forbid it. If their group ignored the subject, so would they. Most would not be ashamed of their confusion because intellectual confusion is not a common fear. But isolation is. We dread marginality more than we dread error. After all, the social costs of idiosyncrasy or independence are high. We fear finding ourselves at our screens, watching others retweet or like or share one another’s posts without a cohort of our own in which to do the same. Who does not wish to be a part of a whole? (Identity politics is the current name for this cozy mode of discourse.) In my experience, when most people talk about politics, they are largely motivated by this concern, which compromises the integrity of these conversations. They disguise a social discourse as an intellectual discourse.

    I call this phony discourse the sludge. The sludge is intellectual and political kitsch. It is a shared mental condition in which all the work of thinking has already been done for us. It redirects attention away from fundamentals by converting them into accessories, into proofs of identity, into certificates of membership.

    In a sludge-infected world, in our world, if someone were to say, “that fascist presides over a hegemonic patriarchy,” her primary purpose would be to communicate to her interlocutor that she is woke, trustworthy, an insider, an adept, a spokesperson, an agent of a particular ideology, proficient in its jargon. She would also be indicating the denomination of progressivism to which she subscribes, thus erecting the ideological boundaries for the conversation. If someone else were to say, of the same person, that he is a “cosmopolitan” or a “globalist” or a “snowflake,” she would be doing the same thing in a different vernacular. (They would both use the terms “liberal” and “neoliberal” as slurs, probably without a firm sense of what either one means.) In the context of these two conversations, whether or not the individual in question is a snowflake or a fascist is as good as irrelevant. The subject of the conversation is just an occasion for manifesting group solidarity. Righteousness is an accoutrement of the code. In fact, asking either person to justify the assumptions inherent in her statement would be as irregular as asking me to justify my faith in God after witnessing me thank Him for the apple I am about to eat. She would answer with her equivalent of “that’s just what Jews do.” In both these cases, belonging is prior to belief.

    The effect of sludge-infected language is that quite often the focal point of debates about politics or philosophy is not at all the literal subject at hand. Members are conditioned to present as if they care about the substance of a particular ideology. Learning to present as if you care about something is very different from learning to actually care about something. Caring is difficult; it is a complicated and time-consuming capacity which requires discipline, openness, and analysis. This is not a trivial point. Imagine a sludge-infected romantic relationship (or just look around you) — if, instead of taking a close and patient interest in her lover’s needs, a woman simply asked herself, “What are the kinds of things that people who are in love do?,” and having done those things, considered herself well acquitted of these duties and therefore in love. She may tell him that she loves him, and she may be loving or supportive in a generic kind of way, but she will not really know him. Details about his inner life, about his insecurities and his demons, will not interest her. Romantic success, for her, would be to appear from the outside as if they have created a successful partnership. She will have treated love programmatically, in accordance with the expectations of her social context. Who her lover is when he is not playing the role she has assigned to him will remain mysterious. When tragedy strikes, they will be forced to recognize that they do not know or trust each other.

    Sludge-infected politics are similarly behavioral and unsettling. Practitioners exploit opportunities for genuine expressions of devotion as occasions to signal membership. Consider the effect of the sludge on antiracism. Suppose we were taught to present as antiracists rather than to seriously consider the imperatives of antiracism (or, again, just look around you). Antiracism (like feminism, like Zionism, like socialism, like isms generally) is difficult to cultivate and strengthen. It requires work and must be consciously developed. It is the result of many individual experiences and sacrifices, highs and lows, of sustained and thoughtful interest and introspection. If we consider ourselves acquitted of our responsibility to antiracism merely by posting #handsupdontshoot at regular intervals on social media, perhaps garnering a host of likes and followers, the duties of an honest and reflective antiracism will remain unacknowledged (and the sentiment to which that slogan refers will be cheapened). Our antiracism would be not internal but external, not philosophical but stylistic.

    If a person is a dedicated antiracist, over the years she will come to better appreciate the enormity of the battle against racism. She will develop the minute concerns and sensitivities of a veteran. She will realize that the world is not made up only of friends and enemies. She will know that sometimes, in order to do good, one must work alongside unlikely allies, and that purists are incapable of effecting sustainable change. The very language she uses to discuss her mission will be informed by this knowledge. Indeed, it would strike her as shabby and disloyal to regurgitate common slogans when speaking about the specific, discomfiting realities of which she has intimate knowledge and which she is serious about mitigating. She will choose more precise and shaded words, her own words, careful words. The novice will listen to her and think, “I would never have thought about it that way.” If, by contrast, a person is motivated by the pressure to appear as a loyal soldier, she will never gain this wisdom. Her concerns will be only about the rituals, the liturgies, and the catechisms of a particular politics, however just the cause. Outsiders will recognize her language from Twitter or Facebook or other digitized watering holes, and of course they will ratify it, but she will have gained all that she ever really sought: admiration and affirmation.

    In this manner, movements that purport to exist in service to certain values may perpetuate a status quo in which those values, demanding and taxing, are named but never seriously treated. We ignore them, and pretend — together, as a community — that we are not ignoring them. Every time a self-proclaimed “n-ist” presents as an “n-ist,” every time a tweet or a post racks up a hundred likes in service to that presentation, she can tell herself she has fulfilled the responsibilities of her “n-ism” and so she will not feel further pressure to do so. 

    Consider two examples. First, a college student with two thousand followers on Instagram who attends every Black Lives Matter protest armed with placards, and who posts regularly about white privilege and the guilty conscience of white America. Suppose this woman’s antiracism manifests itself primarily as a crippling guilt in the face of systemic inequity from which she benefits: her service to antiracism is not nonexistent, or merely “performative,” since she does force her followers to think about uncomfortable subjects (though it is quite likely that her followers already agree with her, but never mind), and she does contribute to the increasing awareness that these injustices must be named and reckoned with now.

    It is good that our college student marched. But compare her to a white septuagenarian who has moved into an increasingly gentrifying neighborhood, who is well off and even a member of the American elite, who has the cell phone numbers of more than a few important people. She has never once felt guilty for having been born into power and privilege. She is not a marcher. Now imagine that this woman, out of mere decency, involves herself in the everyday lives of her black neighbors (something which most people like her fail to do). She is who they turn to when forced to confront a system which she can manipulate for them, which they cannot navigate without her. She is the one they call when, say, one of their sons is unjustly arrested (again), or when the school board threatens to cut the district’s budget (again), because they trust that she will work tirelessly on their behalf. She learns over time, through direct experience, about the blisters and lacerations of racism, and about how to preempt and treat them. Owing to her skin color and her tax bracket, she, like our college student, profits from systemic inequity, but, unlike our college student, she takes regular and concrete actions to help the disadvantaged. Her actions are moral but not ideological. She is not a tourist in the cause and the cause is not a flex of her identity. Yet she is regularly in the trenches and she is alleviating hardship. 

    Which of these women has more ardently and effectively fought against racism? I have no objection to activism, quite the contrary, but it must be constantly vigilant against declining into the sludge. (Of course neither the good neighbor nor the tweeting marcher is engaged, strictly speaking, in politics; at the very least they both must also vote.) Sludge-like discourse is not a new phenomenon, of course — prior to the mass revulsion at the murder of George Floyd there was the convulsion known as #MeToo, which exposed some terrible abuses and established some necessary adjustments but was mired in the sludge and the culture of holy rage. And there is another historical revolution to consider: in all the centuries of thought distorted by community, there has never been a greater ally and amplifier of this phenomenon than the new technology. It is uncannily ideal for such shallowness and such conformism, and the best place to go to prove your purity. Owing to it, the sludge has become unprecedentedly manic and unprecedentedly ubiquitous. For all its reputation as an engine for loneliness and isolation, the internet is in fact the perfect technology of the herd. Consider Twitter, the infinitely metastasizing home of the memberships and the mobs. For demagogues and bigots and liars and inciters it has solved once and for all the old problem of the transmission and distribution of opinion. The echo chambers of the righteous progressives and the righteous reactionaries exist side by side in splendid defiance of one another, drunk on themselves, on their likes, retweets, shares, and followers (the latter a disarmingly candid appellation). All these echo chambers — these braying threads — are structurally identical. Authority is granted to those with the highest numbers. The xenophobic “influencer” with the most followers is granted power for precisely the same reason, and according to the same authority, as the justice warrior with the most followers. And followers are won according to the same laws in all realms: those who are proficient in the vernacular, who can convince others that they are full members, that they understand the code and its implications best, they are the ones to whom the like-minded flock. The priests of one temple wrathfully say, “You are sexist” and those of another wrathfully say “You are un-American” in the same way members of my old community would wrathfully say, “You are a sinner.” It all means the same thing: get out.

    The sludge does not govern all discourse in America, but a horrifying amount of our “national conversation” is canned. And instead of discussing actual injustices we have endless conversations about how to discuss such things. What can be said and what cannot be said? Why talk about slavery when you can talk about the 1619 Project? Why talk about the nuances and ambiguities endemic to any sexual encounter when you can talk about #MeToo? Why complicate the question for yourself when you can join the gang? Every time we choose one of these options over the other, we demonstrate what kind of knowledge matters to us most.

    And one of the most pernicious effects of this degradation of our discourse occurs in our private lives — in personal relationships. Increasingly in conversations with friends I recognize a thickening boundary, a forcefield that repels us from the highly charged subject of our discussion. We bump up against it and decide not to go there, where integrity and trust would take us. At the point of impact, when honesty collides with membership and shrinks away, I sometimes feel as if I am being pushed back not just from the subject matter but also from the friend herself. She begins to speak in a pastiche of platitudes borrowed from the newsletters clogging her (and my) inbox. I don’t seem to be talking to her anymore, I can’t get through to her own thoughts, to her own perspective — which, I stubbornly insist, lies somewhere beneath the slogans and the shorthands. All too often I find myself following suit. Neither one of us is willing to express our respective curiosities and anxieties on matters related to politics. We just bat the keywords around and pretend we are really in dialogue with each other. He declares that the world will end if Biden is elected, she declares that the world will end if Trump is elected, and I am expected not to ask “Why?” Instead I am being invited to join him or to join her, and the more hysterically, the better.

    Once this perimeter, this border wall, has been erected, taking it down would require a troublesome break from social convention. One of us would have to be disruptive, even impolite, to pull us out of the sludge-slinging which prohibits intellectual and verbal independence. And so usually we carry on within those boundaries, interacting as representatives of a cohort or a movement, not as intellectually diligent citizens with a sense of our own ignorance and an appetite for what the other thinks. We become paranoid about discursive limits. Ever present in our conversation is the danger that if one of us deviates from the etiquette, the other will accuse her of being offensive, or worse. The wages of candor are now very high. We have made our discourse too brutal because we are too delicate.

    So we obey the rules in which we have trained ourselves, and look for safety in numbers. We invoke the authority of dogma, hearsay, and cliché. We substitute popularity for truth. We quote statistics like gospel, without the faintest sense of their veracity, as if numbers can settle moral questions. We denounce the character of people we have not met simply because others — in a book group, a Twitter thread, a newspaper column, or a mob — say they are no good. The actual interpretation of concepts such as climate change or race or interventionism is less significant than the affiliations that they denote. And when the conversation is over, we are where we were when it began, left to shibboleths and confirmed, as Lionel Trilling once complained about an earlier debasement, in our sense of our own righteousness. But this must not be the purpose of conversation, public or private. It is disgraceful to treat intellectual equals as if they cannot be trusted with our doubts. It is wrong to celebrate freedom of thought and freedom of speech and then think and speak unfreely. “Polarization” is just another name for this heated charade. In an open society, in American society, one should not be made to feel like a dissident for speaking one’s own mind.

    Abolition and American Origins

    The turbulent politics of the present moment have reached far back into American history. Although not for the first time, the very character of the ideals expressed in the Declaration of Independence and the Constitution has been thrown into question by the hideous reality of slavery, long before and then during the founding era and for eighty years thereafter; and then by slavery’s legacy. In this accounting, slavery appears not as an institution central to American history but as that history’s essence, the system of white supremacy and economic oligarchy upon which everything else in this country has been built, right down to the inequalities and injustices of today.

    More than forty years ago, when a similar bleak pessimism was in the air, the pioneering African American historian Benjamin Quarles remarked on that pessimism’s distortions. The history of American slavery could never be properly grasped, Quarles wrote, “without careful attention to a concomitant development and influence — the crusade against it,” a crusade, he made clear, that commenced before the American Revolution. Quarles understood that examining slavery’s oppression without also examining the anti-slavery movement’s resistance to it simplifies and coarsens our history, which in turn coarsens our own politics and culture. “The anti-slavery leaders and their organizations tell us much about slavery,” he insisted — and, no less importantly, “they tell us something about our character as  a nation.” 

    If we are to speak about the nation’s origins, we must get the origins right. As we continue to wrestle with the brutal and soul-destroying power of racism in our society, it is essential that we recognize the mixed and mottled history upon which our sense of our country must rest. In judging a society, how do we responsibly assess its struggle against evil alongside the evil against which it struggles? With what combination of outrage and pride, alienation and honor, should we define our feelings about America?

    On November 5, 1819, Elias Boudinot, the former president of the Continental Congress, ex-U.S. Congressman, and past director of the U.S. Mint, wrote to former President James Madison, enclosing a copy of the proceedings of a meeting held a week earlier in Trenton, New Jersey, opposing the admission of Missouri to the Union as a slave state. The crisis over Missouri — which would lead to the famous Missouri Compromise the following year — had begun in the House of Representatives in February, but Congress had been out of session for months with virtually no sign of popular concern. In late summer, Boudinot, who was seventy-nine and crippled by gout, mustered the strength to help organize a modest protest gathering in his hometown of Burlington, long a center of anti-slavery. The far larger follow-up meeting in Trenton was truly impressive, a “great Assemblage of persons” that included the governor of New Jersey and most of the state legislature. The main speaker, the Pennsylvania Congressman Joseph Hopkinson, who was also a member of the Pennsylvania Abolition Society, had backed the House amendment that touched off the crisis, and his speech in Trenton, according to one report, “rivetted the attention of every auditor.” Boudinot, too ill to travel to the state capital, agreed nevertheless to chair a committee of correspondence that wrote to dozens of prominent men, including ex-President Madison, seeking their support.

    If Madison ever responded to Boudinot’s entreaty, the letter has not survived, but no matter: Madison’s correspondence with another anti-slavery advocate made clear that he was not about to support restricting slavery in Missouri. Boudinot’s and the committee’s efforts did, however, meet with approval from antislavery notables such as John Jay. The campaign also galvanized a multitude of anti-Missouri meetings all across the northern states, pressuring Congress to hold fast on restricting slavery’s spread. “It seems to have run like a flaming fire through our middle States and causes great anxiety,” Boudinot wrote to his nephew at the end of November. The proslavery St. Louis Enquirer complained two months later that the agitation begun in Burlington had reached “every dog-hole town and blacksmith’s village in the northern states.” The protests, the largest outpouring of mass antislavery opinion to that point in American history, were effective: by December, according to the New Hampshire political leader William Plumer, it had become “political suicide” for any free-state officeholder “to tolerate slavery beyond its present limits.”

    Apart from indicating the scope and the fervor of popular antislavery opinion well before the rise of William Lloyd Garrison, two elements in this story connect in important ways to the larger history of the antislavery movement in the United States, one element looking forward from 1819, the other looking backward. Of continuing future importance was the breadth of the movement’s abolitionist politics, as announced in the circular of the Trenton mass meeting. Although it aimed, in this battle, simply to halt the extension of slavery, the anti-Missouri movement’s true aim, the circular declared, was nothing less than the complete destruction of slavery in the United States. “The abolition of slavery in this country,” it proclaimed, was one of “the anxious and ardent desires of the just and humane citizens of the United States.” It was not just a matter of requiring that Missouri enter as a free state: by blocking human bondage from “every other new state that may hereafter be admitted into the Union,” Congress would ensure that it was only a matter of time before American slavery was eradicated. Just as important, the abolitionists took pains to explain that restricting slavery in this way fell within the ambit of Congress’ powers, “in full accordance with the principles of the Constitution.” Here lay the elements of the antislavery constitutionalism — asserting congressional authority over slavery in places under its jurisdiction — that would evolve, over the ensuing thirty-five years, into the Republican Party’s program to place slavery, as Abraham Lincoln put it, “in the course of ultimate extinction.”

    The second connection, looking backward, was embodied by Elias Boudinot. Some historians have linked Boudinot’s antislavery enthusiasm in 1819 to his Federalist politics; more persuasive accounts see it as a natural outgrowth of a deeply religious humanitarianism that had led him, after his retirement from politics and government, to help found the American Bible Society and become a champion of American Indians. The most recent comprehensive study of the Missouri crisis depicts him as something of a throwback, “the quintessential antiegalitarian patrician Federalist” with a pious humanitarian streak who had lingered long enough to play a part in the commencement of the nation’s crisis over slavery.

    In fact, Boudinot had already had a long career not only as an antislavery advocate but also as an antislavery politician. He first threw himself seriously into antislavery politics in 1774 when, as a member of the colonial assembly, he worked closely with his Quaker colleague and abolitionist leader Samuel Allinson in ultimately unsuccessful efforts to hasten total abolition in New Jersey. In 1786, Boudinot joined with another antislavery politician, Joseph Bloomfield, in founding the New Jersey Society for Promoting the Abolition of Slavery; and after several years of indifferent activity, the Society presented a gradual emancipation plan that Bloomfield, elected New Jersey’s governor in 1803, signed into law the following year. Boudinot, meanwhile, was elected to the first U.S. Congress in 1789, where he denounced slavery as an offense against the Declaration of Independence and “the uniform tenor of the Gospel.” In all, if the antislavery arguments of the 1850s dated back to the Missouri crisis, then the antislavery politics that brought about that crisis dated back to the Revolutionary era.

    These two connections — the history of the antislavery constitutionalism that surfaced in the Missouri crisis and the history of antislavery politics dating back to the Revolution — deserve an important place in our account of our origins. I have argued, in a recent book, that by refusing to recognize the legitimacy of property in man in national law, the Federal Convention in 1787 left open ground upon which antislavery politics later developed at the national as well as the state level. Those politics emerged, to be sure, out of the local struggles that dated back before the American Revolution. But the ratification of the Constitution, even with that document’s notorious compromises over slavery, left room for the rise of antislavery politics on the national level. And the origins of those politics, as I wish to make clear here, lay in the efforts by antislavery agitators and their allies in Congress, beginning in the very first Congress, to find in the Constitution the authority whereby the national government could abolish slavery or, at the very least, hasten slavery’s abolition.

    These national antislavery politics, it needs emphasizing, developed by fits and starts, and only began to gather lasting strength in the 1840s. The abolitionists enjoyed just a few significant successes at the national level during the twenty years following the Constitution’s ratification, and they endured some important defeats. These were some of the leanest years in the history of antislavery politics. But that the abolitionists won anything at all, let alone anything significant, contradicts the conventional view that southern slaveholders thoroughly dominated national politics in the early republic. The abolitionists did occasionally prevail; and just as important, in doing so they discovered and began to refine the principles and stratagems of antislavery constitutionalism that would guide antislavery politics through to the Missouri crisis and then, further refined, to the Civil War.

    Reviewing the early history of these abolitionist politics — running from the birth of the federal government in 1789 until the abolition of American involvement in the Atlantic slave trade in 1807 — is part of a broader re-evaluation currently underway of what Manisha Sinha has called “the first wave” of abolitionist activity that lasted from the Revolutionary era through the 1820s. Scholarship by a rising generation of historians, including Sarah Gronningsater, Paul J. Polgar, and Nicholas P. Wood, as well as Manisha Sinha, has begun to revise completely the history of antislavery in this period. These historians have more or less demolished, for example, the once dominant view of northern emancipation as a grudging and even conservative undertaking, led by polite gentlemen unwilling to take their antislavery too far. When completed, the work of these scholars and others will, I am confident, become the basis for a new narrative for the history not just of antislavery but of American politics from the Revolution to the Civil War. But there is a lot of work left to do.

    Prior to the 1750s, there was very little in the way of antislavery activity among white Americans, with the exception of the Quakers, and it took even the Quakers several decades of struggle among themselves before they turned misgivings about slavery into formal instructions to abandon the institution. Amid an extraordinary moral rupture at mid-century, wider antislavery activity began in earnest. Initially, these efforts emphasized limited public measures to change private behavior, relying on moral suasion to hasten manumissions, but soon enough some antislavery reformers turned to politics in more forceful ways. In 1766 and 1767, Boston instructed its representatives in the colonial assembly to push for the total eradication of slavery. In 1773, a Quaker-led campaign against the slave trade, captained by Anthony Benezet, the greatest antislavery agitator of the time, swept through the middle colonies and touched New England; and in that same year several Massachusetts towns petitioned the assembly to abolish the slave trade and initiate gradual emancipation. Black abolitionists, including Felix Holbrook and Prince Hall in Massachusetts, initiated their own petition drives, supplementing the freedom suits that would kill slavery in Massachusetts outright in the mid-1780s. Bills for the gradual abolition of slavery were debated in New Jersey in 1775 and in Connecticut in 1777; Vermonters approved the first written constitution ever to ban adult slavery in 1777; and by 1780 ascendant radical reformers in Pennsylvania led by George Bryan prepared to enact the first gradual emancipation law in history.

    By then, political abolitionists had begun organizing their own institutions. On April 14, 1775 — five days before the battles of Lexington and Concord — a group consisting chiefly of Quakers formed the “Society for the Relief of Free Negroes Unlawfully Held in Bondage,” the first society with antislavery aims anywhere in the world. Although the Revolution soon disrupted the group, it reorganized in 1784 as the Pennsylvania Society for the Promotion of the Abolition of Slavery; three years later, the society named Benjamin Franklin — conspicuously a non-Quaker — as its president. In 1785, the New-York Manumission Society appeared, dedicated to the same basic goals. By 1790, two more states, Rhode Island and Connecticut, had approved gradual emancipation. Slavery had been ended in Massachusetts by judicial decree in 1783 and had crumbled in New Hampshire; and at least six more abolitionist societies had formed from Rhode Island as far south as Virginia (where, in 1785, an abolition law was debated to supplement a widened manumission law enacted in 1782). In 1794, the state societies confederated as the American Convention for Promoting the Abolition of Slavery and Improving the Condition of the African Race.

    Abolitionist politics at the national level would await the framing and ratification of the Federal Constitution in 1787-1788. Since the Articles of Confederation had afforded the national government no authority over national commerce, let alone either slavery or the Atlantic slave trade, national abolitionist politics barely existed. The one exceptional effort came in 1783, when a small Quaker delegation from the Philadelphia Yearly Meeting delivered to the Confederation Congress, then sitting in temporary exile in Princeton, a petition signed by some five hundred Quakers, asking in vain for a prohibition of the Atlantic trade. With the calling of the Federal Convention in 1787, though, both of the then-existing abolitionist societies, in Philadelphia and New York, mobilized to send petitions. Benjamin Franklin, a delegate to the convention as well as president of the Pennsylvania Abolition Society, decided on tactical grounds against presenting his group’s forceful memorial opposing the Atlantic slave trade, while the New-York Manumission Society failed to complete its broader antislavery draft before learning that slavery as such would not be debated at the convention.

    To comprehend the national abolitionist politics that followed these developments requires a closer look at the Constitution’s paradoxes and contradictions concerning slavery. None of the framers’ compromises over slavery that many historians cite as the heart of the supposedly proslavery Constitution were nearly as powerful in protecting slavery as an assumption that was there from the start: that whatever else it could do, the federal government would be powerless to interfere with slavery in the states where it existed — a doctrine that became known as the federal consensus. This assumption, far more than the three-fifths clause or the Atlantic slave trade clause or the fugitive slave clause or anything else, was the basis of the slaveholders’ confidence that the Constitution had enshrined human bondage. But if the federal government could not abolish slavery outright, then how might it be done, short of hoping that the slaveholders of South Carolina and Georgia would suddenly see the light — a prospect that the South Carolinians and Georgians made clear was not in the offing anytime soon? Once the abolitionists had launched the campaign for emancipation in the North, this would be their great conundrum — but they seized upon it immediately, with actions as bold as their demands. In doing so, they fostered a convergence of radical agitation and congressional politics that would have enduring if as yet unforeseen repercussions.   

    Far from discouraging abolitionist activity, the ratification of the Constitution, even with its notorious compromises over slavery, bolstered it. Above all, the framers’ granting to the new national government, over furious southern objections, the authority to abolish the nation’s Atlantic slave trade, even with a twenty-year delay, struck many and probably most abolitionists and their political allies as a major blow for freedom. This should not be surprising: as historians note too rarely, it was the first serious blow against the international slave trade undertaken anywhere in the Atlantic world in the name of a national government; indeed, the American example, preceded by the truly inspiring anti-slavery agitation led by Anthony Benezet, encouraged the rise of the British movement to end the Atlantic trade, formally organized in 1787. Some leading American abolitionists described the Constitution as nothing less than, in the words of the framer James Wilson, “the foundation for banishing slavery out of this country.” Ending the trade had long been considered the vital first step toward eradicating slavery itself; and it seemed at the least highly probable that, as soon as 1808 arrived, Congress would do so. More immediately, though, members of the Pennsylvania Abolition Society wanted to see if Congress would entertain extending its constitutional authority beyond the slave trade provision.

    The first great confrontation over slavery in national politics was a famous but still largely misunderstood conflict in the House of Representatives during the First Congress’ second session in New York, the nation’s temporary capital, in 1790. Through a friendly congressman, the Pennsylvania Abolition Society presented a petition to the House of Representatives, above the signature of its aging president Franklin, bidding the representatives to “step to the very verge of the powers vested in you” and to abolish slavery itself, not simply the Atlantic slave trade. (At the request of John Pemberton of the PAS, two groups of Quakers had already sent milder petitions referring only to the trade.) The PAS petition paid no attention to the federal consensus, specifically citing the preamble of the Constitution, which empowered the new government to “promote the general Welfare and secure the blessings of Liberty to ourselves and our Posterity” — language that its authors contended authorized far-reaching if unspecified congressional action against slavery. Without telling Congress exactly what to do, the petitioners bade the representatives to look beyond the federal consensus to find ways they could attack slavery — to the extent, quite possibly, of disregarding the federal consensus entirely.

    A fierce on-and-off debate over the next three months ended with Congress affirming the federal consensus as well as the ban on congressional abolition of the Atlantic trade until 1808. The outcome is often portrayed fatalistically as a crushing defeat for the abolitionists, sealing the immunity of slavery in the new republic while calling into question the rights of abolitionists even to petition the Congress — an effort undertaken, in one historian’s estimation, by naïve and “psychologically vulnerable” reformers, unprepared “for the secular interest politics of a modern nation.”

    In fact, although the petition (along with the two others from regional Quaker meetings) did not gain the sweeping reforms it sought, it was decidedly not a failure. For one thing, the mobilization behind it, far from weak-kneed, was the first auspicious political protest of any kind to be directed at the new national government. Strikingly modern in their strategy and their tactics, the abolitionists blended insider maneuvering and hard-headed direct appeals to members of Congress with popular propagandizing and political theater of a kind associated with the protest movements of much later decades. The campaign was spearheaded by a delegation of eleven Quaker lobbyists from Philadelphia, including John Pemberton and Warner Mifflin, who were certainly the opposite of naïve and vulnerable. As a consequence, the congressional deliberations over the petitions took a surprisingly radical turn, and in the end the effort secured important political as well as practical gains.

    Lower South slaveholders reacted with predictable fury as soon as congressmen friendly to the abolitionists introduced the petitions on the floor of the House. The slaveholders’ diatribes asserted that the constitutional ban on congressional abolition of the Atlantic slave trade until 1808 meant that the Constitution barred any federal interference with slavery whatsoever. Given the federal consensus, meanwhile, the slaveholders called the petitions unconstitutional on their face and demanded they be rejected without further debate. But despite the inflation of their numbers in the House by the three-fifths clause, the proslavery forces were badly outnumbered. (“Alass — how weak a resistance against the whole house,” one resigned South Carolina congressman wrote.) By a vote of 43 to 11, the House approved sending the radical petitions to a special committee for consideration.

    Working hand-in-hand with members of the special committee, the abolitionists immediately supplied them with a small library of abolitionist writings, while they arranged, through the Speaker of the House, an ally, to distribute additional abolitionist propaganda to the rest of the chamber. The Quaker lobbyists then advised the committee on its report behind the scenes, sharing drafts and submitting their own suggestions while backing up the PAS petition’s claim that the “General Welfare” section of the Constitution’s preamble gave Congress some unspecified powers over slavery. The committee narrowly turned aside that suggestion — by a single vote, John Pemberton reported — and agreed that the Congress could not ban the Atlantic slave trade before 1808. Yet it also asserted, contrary to lower South protests, that the federal government could regulate the trade as it saw fit at any time. More portentously, the members included wording that the Constitution empowered Congress to abolish slavery outright after 1808 — making the special committee’s report perhaps the most radical official document on slavery approved by any congressional entity before the Civil War.   

    When the report reached the House, the abolitionists swung into action as both agitators and what today we would call lobbyists. Quakers crowded the House gallery to witness the debate, their presence in Quaker gray suits and broad-brimmed black hats inciting and unnerving the southerners. Outside the hall, the abolitionists pursued individual congressmen right down to their lodging houses and taverns and eating places to make their case. Mifflin began a letter-writing campaign, addressed both to individual congressmen and to the House at large. The abolitionists also arranged with allies in the New-York Manumission Society to have a full record of the House debates printed along with antislavery articles in the New York Daily Advertiser, as well as to distribute pamphlets vividly describing the horrors of the slave trade.  

    Finally the House affirmed Congress’ powerlessness over slavery where it existed and over the Atlantic trade before 1808, and a revised report removed the select committee’s language about abolishing slavery itself after 1808. Yet the outcome was hardly a one-sided triumph for the proslavery southerners. The lower South failed utterly in its initial effort to dismiss the petitions without debate. Evidently, contrary to the slaveholders, Congress might well have some authority over slavery worth debating. In the course of arguing that point, moreover, several House members had affirmed that, short of abolishing slavery outright, Congress might restrict slavery in various ways quite apart from the slave trade, including, James Madison remarked, banning slavery from the national territories, where, he declared, “Congress have certainly the power to regulate slavery.” And over howls from lower South slaveholders, the final report affirmed that Congress could legislate over specific matters connected to the Atlantic trade before 1808 — issues that, as we shall see, the abolitionists would agitate successfully. In all, the federal consensus stood, but at the same time the House majority repulsed the proslavery forces and backed the abolitionists on whether slavery was immune from federal authority.

    Over the ensuing decade, the abolitionists, far from discouraged, redoubled their national efforts, despite some serious setbacks. The Southwest Territory — what would become the state of Tennessee — was organized with slavery in 1790, with little debate. A coterie of antislavery congressmen could not stave off passage of the Fugitive Slave Act of 1793. Five years later, a spirited antislavery effort to bar slavery from Mississippi Territory was defeated by a wide margin.

    And yet the abolitionists had reason to remain optimistic. At the state level, the New York legislature, under intense abolitionist pressure, finally passed a gradual emancipation law in 1799 and New Jersey followed five years later, completing the northern “first emancipation.” In part as a response to the Fugitive Slave Act, the American Convention of Abolition Societies was up and running in 1794. There were various signs, from a proliferation of freedom suits in Virginia to the spread of antislavery opinion in Kentucky and Tennessee, that the upper South was seriously questioning slavery. In national politics, antislavery congressmen, numbering about a dozen and led by a few northerners who worked closely with the abolitionists, made good in 1794 on the victory wrung from the abolitionist petition debates four years earlier, passing a law that outlawed the use of any American port or shipyard for constructing or outfitting any ship to be used for the importing of slaves. 

    Five years later the Reverend Absalom Jones, a prominent abolitionist and mainstay of Philadelphia’s free black community, helped lead an even more propitious effort. Late in 1799, a group of seventy free men of color in Philadelphia, headed by Jones, sent yet another petition to the House of Representatives. The drafters of the petition, as Nicholas Wood has shown, were John Drinker and John Parrish, prominent local Quaker abolitionists who had long worked closely with Jones and other black abolitionists; the signers included members of various black congregations, including Jones’ St. Thomas African Episcopal Church, the majority of them unable to sign their names. 

    The petitioners asked for revisions of the laws governing the Atlantic slave trade as well as the Fugitive Slave Law of 1793. But they also went further, as far as the PAS petitioners had in 1790, pressing for — as the abolitionist congressman Robert Waln observed when he introduced the petition to the House — “the adoption of such measures as shall in due course emancipate the whole of their brethren from their present situation.” Stating that they “cannot but address you as Guardians of our Civil Rights, and Patrons of equal and National Liberty,” the petitioners expressed hope that the House members

    will view the subject in an impartial, unprejudiced light. — We do not ask for the immediate emancipation of all, knowing that the degraded State of many and their want of education, would greatly disqualify for such a change; yet humbly desire you may exert every means in your power to undo the heavy burdens, and prepare the way for the oppressed to go free, that every yoke may be broken.

    As if brushing aside the House’s decision in 1790, the abolitionists, citing once again the Constitution’s preamble, wanted Congress to probe once more the document’s antislavery potential. The idea that Congress had untapped antislavery powers was emerging as a core abolitionist argument. And, though the sources are silent, this portion of the petition may have also had tactical purposes. In 1790, the defeat of grand claims about emancipation proved the prelude to the House affirming Congress’ authority over more specific issues connected to slavery. Roughly the same thing would happen this time.

    Southern slaveholders and their New England allies reacted with predictable wrath. John Rutledge, Jr. of South Carolina thanked God that Africans were held in slavery, then railed against the “new-fangled French philosophy of liberty and equality” — he was talking about Thomas Jefferson and his supporters — that was abroad in the land. Rutledge’s fellow Federalist, the notorious Atlantic slave trader John Brown of Rhode Island, attacked the petition’s effort to restrain American participation in the trade, while another New England Federalist, Harrison Gray Otis, sneered that most of the petitioners were illiterate and thus unable to understand what they had endorsed, and that receiving their memorial would mischievously “teach them the art of assembling together, debating, and the like.”  

    The next day, the House considered a resolution condemning those portions of the petition “which invite Congress to legislate upon subjects from which the General Government is precluded by the Constitution.” The resolution passed 85 to 1, a crushing repudiation of the idea that Congress possessed implied powers to interfere directly with slavery where it already existed. Even the abolitionist congressman who presented the free blacks’ petition ended up voting with the majority.

    But that was only part of the story. The core of antislavery Northerners fiercely rebutted the proslavery outbursts. George Thacher, a Massachusetts Federalist and longtime antislavery champion in the House, repudiated the racist attacks on the petitioners, upheld the right of constituents to a redress of grievances regardless of their color, and condemned racial slavery as “a cancer of immense magnitude, that would some time destroy the body politic, except a proper legislation should prevent the evil.” Moreover, once the condemnation resolution predictably passed — Thacher’s was the sole vote in opposition — the House was free to act on the petitioners’ more specific demands, which it swiftly did, sending the petition to committee — thereby, among other things, affirming the right of free blacks to petition Congress.

    The committee assigned to consider the petition sympathized with its section on the fugitive slave law — free blacks, its report contended, were “entitled to freedom & Protection” — but the slaveholders and their allies prevailed on that issue on jurisdictional grounds. On the slave trade, however, Congress took action. After a heated debate, the House, with the concurrence of the Senate, approved by a wide margin the Slave Trade Act of 1800, banning even indirect involvement by Americans with the shipping of Africans for sale in any foreign country while also authorizing naval vessels to seize ships that were in violation. While it expanded enforcement of the restrictive law enacted six years earlier, the new law reinforced expectations that the Atlantic slave trade to the United States would be entirely abolished at the earliest possible date in 1808. 

    The scale of this antislavery victory should not be exaggerated — indeed, three years later South Carolina would re-open its own slave trade with a vengeance — but neither should it be scanted. Most immediately, within a year, under the new law’s provisions, the man-of-war U.S.S. Ganges seized two illegal slave schooners off the coast of Cuba and discovered more than one hundred and thirty African captives, men, women, and children, in chains, starving and naked; once freed, the Africans obtained apprenticeships and indentures from the Pennsylvania Abolition Society. The free black petition debate also marked a high point in the efforts by the antislavery congressmen, first to restrict and regulate the Atlantic slave trade prior to its abolition, and then to reform and restrict the Fugitive Slave Law.

    More broadly, that same small but resolute group took up new antislavery battles and established an antislavery presence that from time to time became an antislavery majority. This was not just the agitation of an elite. It must be emphasized that the congressmen acted in coordination with dense interregional as well as interracial networks of antislavery activists, organized in state abolition societies, churches and church committees, mutual aid societies, fraternal groups, and more. With such popular backing, year after year, antislavery congressmen voiced defiantly antiracist as well as antislavery sentiments on the floor of the House, exploring the Constitution in search of antislavery meanings, trying to find in it whatever powers they could whereby the federal government could limit slavery’s expansion leading to its eventual eradication. Some of their successes were defensive, as when they defeated efforts to augment the Fugitive Slave Act, to otherwise restrict the rights of free blacks, and to repeal the Northwest Ordinance’s ban on slavery in Illinois and Indiana. But the antislavery forces in Congress could be aggressive as well. 

    In 1804, once again bidden by abolitionist petitions, the Senate approved a provision that would have effectively shut the domestic slave trade out of the entire Louisiana Territory, obtained from France a year before, while the House, stunningly, passed a bill that banned outright further introduction of slavery into the territory. The House provision failed to gain approval from the Senate, and the efforts to keep slavery out of Louisiana proved futile, but the passing success was a signal that the antislavery presence in Congress had grown since 1790. Fittingly, the effort in the House was led by a sharp-witted and acid-tongued member from New Jersey named James Sloan, a Jeffersonian Republican who had cut his political teeth as a member of the New Jersey Abolition Society and as its delegate to the American Convention. A permanent goad to the southern slaveholders, including those in his own party, Sloan would cause an uproar in the House in 1805 by proposing a plan for gradual emancipation in the District of Columbia — yet another effort to find places in the Constitution giving the federal government the authority to attack slavery.  

    Finally, in 1807, at the earliest date stipulated by the Constitution, Congress approved the abolition of the Atlantic slave trade to the United States. With the bill supported by most of the large Virginia delegation, whose slaveholders stood to benefit, the outcome was a foregone conclusion, but the antislavery members had to beat back several efforts to soften the law, including one proposal by the states-rights dogmatist John Randolph which in effect would have recognized slaves as property in national law. “Hail! Hail, glorious day,” the New York black abolitionist minister Peter Williams, Jr., an ally of the New-York Manumission Society, exclaimed at the city’s celebration.

    This high point in the politics of early American abolitionism would also prove a turning point. Although national agitation continued, there was a noticeable decline in enthusiasm in the ranks, at least outside Pennsylvania, once New York and New Jersey had completed their emancipation laws. A powerful racist backlash instigated by the Haitian Revolution and then by reactions to northern emancipation jolted the existing abolitionist societies and paved the way for the emergence of the American Colonization Society. Just as their British counterparts perfected the massive petition campaigns required to shake Parliament into abolishing Britain’s Atlantic slave trade, also achieved in 1807, the American movement began to falter. Above all, the dramatic shift in the Southern economy that came with the introduction of the cotton gin in 1793 and the consequent renaissance of plantation slavery dramatically changed the terms of antislavery politics, dispelling forever the original abolitionist hope that the closing of the Atlantic trade would doom American slavery.

    Northern antislavery opinion did rebound after 1815 and reached a political flashpoint during the Missouri crisis of 1819-1820. But the abolitionist organizations, including the American Convention, although still alive and active, were becoming less of a factor in guiding events in Congress than they had been at the end of the eighteenth century. By now, with the expansion of mass mainstream party politics, popular mobilizations in the form of an impromptu Free Missouri movement did more to embolden antislavery congressmen than did the abolitionists’ continued memorials, petitions, and lobbying efforts. And then, in the wake of the Missouri crisis, shaken mainstream politicians sealed what amounted to a bipartisan consensus to prevent slavery from ever again entering into national political debates. With national politics seemingly closed to antislavery agitation, the old Quaker abolitionist strategy of working directly with sympathetic officeholders and political leaders began to look feeble.

    But the fight had been irreversibly joined. The established abolitionist movement’s strategies left an important legacy on which later antislavery political movements would build. Even as the early abolitionist movement sputtered out, it played a part in shaping abolitionism’s future. In forming as sophisticated a political movement as they did, the early abolitionists created a practical model for organized political agitation in the new republic, antedating the political parties that arose thereafter. Although the effectiveness of that model declined after 1800 or so, it never disappeared; and elements of it would remain essential to later abolitionist politics, including the transformation of abolitionist petitioning into monster popular campaigns, along the lines that British abolitionists had pioneered after 1787. 

    The legacy was even more important with respect to antislavery ideology and strategy. Just as the initial impetus of the early abolitionists, dating back to 1775, had been to politicize antislavery sentiment in order to make direct claims on government, so the abolitionists of the early republic perpetuated the idea that politics was the only sure means to achieve slavery’s eradication. In national politics, after the ratification of the Constitution, that meant, above all, advancing antislavery interpretations of the framers’ work. Although the most expansive ideas about Congress’ authority over slavery met with ever firmer resistance, the idea that Congress possessed numerous implicit or indirect powers to hasten slavery’s demise remained.

    Consider again the petition from the free men of color of Philadelphia in 1799. In addition to asking Congress to find the authority to abolish slavery, the petition included its own innovative antislavery interpretation of the Constitution to demonstrate that the Fugitive Slave Law was unconstitutional: as “no mention is made of Black people or Slaves” in the Constitution, the document observed, it followed that “if the Bill of Rights or the declaration of Congress are of any validity,” then all men “may partake of the Liberties and unalienable Rights therein held forth.” The assertion got nowhere, but it had been made, and as long as abolitionists kindled a basic optimism about the Constitution’s antislavery potential, they would sustain their belief that political efforts, and not moral suasion alone, would bring the complete abolition of American slavery. 

    This optimism peaked again during the Missouri crisis, when abolitionists seized upon federal control of the territories and the admission of new states as an instrument to commence slavery’s abolition. The optimism persisted through the 1820s, even as the colonization movement flourished and even as mainstream political leaders built a new system of national politics based on two opposed intersectional national parties — a party system deliberately designed to keep antislavery agitation at the margins. In 1821, a sometime colonizationist, the pioneering abolitionist editor Benjamin Lundy, offered a comprehensive seven-point plan to abolish slavery under the Constitution that began with banning slavery in the national territories and abolishing the domestic slave trade. Four years later, Lundy joined with the abolitionist and political economist Daniel Raymond in trying to establish an antislavery political party in Maryland. After that failed, Lundy persuaded the American Convention to pick up the dropped thread of James Sloan’s earlier agitation in the House and pressure Congress to use its authority to abolish slavery and the slave trade in the District of Columbia. He then had the idea of mounting a mass petition campaign to support the demand; and in 1828, working in coordination with a Pennsylvania Abolition Society member, congressman Charles Miner, who had announced his intention to work for abolition in the district, he forced the issue to the floor of the House. Younger PAS members warmed to the campaign and kept it going; so would, somewhat ironically in retrospect, the young editor whom Lundy later picked up as his assistant and brought into the abolitionist cause, none other than William Lloyd Garrison.

    The optimism would be badly battered in the 1830s and 1840s. Some members of a new generation of radical abolitionists, led by Garrison, would conclude that there was no hope of achieving abolition and equality in a political system attached to a proslavery U.S. Constitution — a “covenant with death” and “agreement with hell,” in Garrison’s famous condemnation. Only moral suasion backed with militant protest, Garrison declared, would advance the cause; moral purification would have to precede political action. Taking the long view, this represented as much a regression as an advance, back to the anti-political stance of the more pious of the Quaker abolitionists in the 1750s and 1760s. Garrison’s absolutist high-mindedness forthrightly but perversely lifted the cause above the grimy necessities of actual politics. 

    Yet for all of Garrison’s fiery and intrepid polemics, he and his followers were a minority inside the abolitionist movement, increasingly so after 1840. The abolitionist majority never relinquished the idea, passed on from the first-wave abolitionists, that Congress, by acting wherever it could against slavery, would hasten slavery’s destruction. Inside Congress, meanwhile, a luminary with antislavery convictions but no previous antislavery record, John Quincy Adams, led a small group of colleagues in a guerilla war against the gag rule and finally prevailed in 1844. Adams, the ex-president turned congressman, was a singular figure in American politics, unlike any before or since; and the 1840s were not the 1820s or the 1790s. But Adams, who came to work closely with abolitionists, in his way reprised the roles of George Thacher, James Sloan, and Charles Miner, becoming the face of antislavery inside the Capitol — “the acutest, the astutest, the archest enemy of slavery that ever existed,” in the view of his fiercely proslavery Virginia rival Henry A. Wise.

    By the time he collapsed and died on the floor of the House in 1848, opposing the American war with Mexico, Adams had also helped turn antislavery politics back toward issues concerning federal power over slavery in the territories — the very issues that, within a decade, led to the formation of the Republican Party. The abolitionists’ search for the constitutional means to attack slavery, begun in 1790, culminated in the agitation over Kansas, the convulsions that followed the Dred Scott decision in 1857, and everything else that led to the Civil War. All of which is a vast and complicated story, making the final connection between the antislavery politics of Anthony Benezet and Benjamin Franklin and those of Frederick Douglass and Abraham Lincoln. The important point, in the consideration of American origins, is that the early American abolitionists, audacious in their own time, formulated the essentials of a political abolitionism that, however beleaguered and often outdone, announced its presence, won some victories, and made its mark in the national as well as state politics of the early republic. It was not least owing to this constitutive achievement of American democracy that in the relatively brief span of fifty years, some of them very violent, slavery would be brought to its knees.

    Which brings us back to Benjamin Quarles’ observations, about the concomitant development of American slavery and American antislavery. The struggle for justice is always contemporaneous with injustice, quite obviously, and the power of injustice to provoke a hostile response is one of the edifying lessons of human life. Once joined, that struggle forever shapes both sides: there is no understanding the growth of pro-slavery politics, leading to the treason of secession, without reference to the growth of anti-slavery politics, just as anti-slavery politics makes no sense absent pro-slavery politics. But the history of anti-slavery in America, even during its most difficult periods, is not merely a matter of edification. It is also a practical necessity, a foundation for political action. It presents contemporary anti-racism with a tradition from which it can draw its ideas and its tools. It is a barrier against despair, and a refreshment of our sense of American possibility. The struggle against slavery was hard and long, and it was won. The struggle against racism is harder and longer, and it has not yet been won. But as our history shows, it has certainly not been lost.

    Loosed Quotes

    THE SECOND COMING 

    Turning and turning in the widening gyre
    The falcon cannot hear the falconer;
    Things fall apart; the centre cannot hold;
    Mere anarchy is loosed upon the world,
    The blood-dimmed tide is loosed, and everywhere
    The ceremony of innocence is drowned;
    The best lack all conviction, while the worst
    Are full of passionate intensity.

    Surely some revelation is at hand;
    Surely the Second Coming is at hand.
    The Second Coming! Hardly are those words out
    When a vast image out of Spiritus Mundi
    Troubles my sight: somewhere in sands of the desert
    A shape with lion body and the head of a man,
    A gaze blank and pitiless as the sun,
    Is moving its slow thighs, while all about it
    Reel shadows of the indignant desert birds.
    The darkness drops again; but now I know
    That twenty centuries of stony sleep
    Were vexed to nightmare by a rocking cradle,
    And what rough beast, its hour come round at last,
    Slouches towards Bethlehem to be born?

           W.B. YEATS

    Turning and turning in the widening gyre
    The falcon cannot hear the falconer;
    Things fall apart; the centre cannot hold;

    ….

    The best lack all conviction, while the worst
    Are full of passionate intensity.

    In every crisis they appear, those famous and familiar lines from “The Second Coming,” written in 1919 by W. B. Yeats. Journalists and critics alike seem to take them as final assertions of Yeats’ own beliefs. Such innocent judgments do not ask why those lines open the poem, or for how long their assertions remain asserted. The poem itself has become lost behind the quotability of its opening lines. And Yeats, it seems, wants to be a pundit.

    In our ready “yes, yes” to those lines, we think we are accepting the judgment of a sage, but by the time we reach the close of the poem — which is a question, not an assertion — we are driven to imagine the changing states of the writer composing this peculiar poem, and we raise questions. What feelings required Yeats to change his bold initial stance, and in what order did those feelings arise? In order to understand this poem, to free it from its ubiquitous misuses, and to restore it to both its opening grandeur and its subsequent humiliation, those are the questions that we must answer. 

    Yeats was an inveterate reviser of his own ever-laborious writing: recalling his difficulty in composing “The Circus Animals’ Desertion,” he confesses, “I sought a theme and sought for it in vain,/ I sought it daily for six weeks or so.” (Mention of that poem in his letters of the time proves this no exaggeration: I counted the weeks.) What was the obstacle suspending his progress? (He spends the poem finding out.) In “Adam’s Curse” he remarks in frustration, “A line will take us hours maybe.” Hours to do what? “To articulate sweet sounds together.” Yeats puts the sequence of sounds first; he composed by ear. Are the resulting sounds always “sweet” in the ordinary sense of the word? Not at all; but they are “sweet” in the internal order of rhythms and styles as the poem evolves. When the poet has articulated its theme, its sounds, and its lines to the best of his powers, the ear registers its satisfaction.

    “The Second Coming” is a lurid refutation of the lurid Christian expectations of the Second Coming of Christ, which Jesus himself foretells in Matthew 24: 29-30:

    Immediately after the distress of those days, the sun will be darkened, and the moon will refuse her light, and the stars will fall from heaven, and the powers of heaven will rock; and then the sign of the Son of Man will be seen in heaven; then it is that all the tribes of the land will mourn, and they will see the Son of Man coming upon the clouds of heaven, with great power and glory.

    Yeats proposes a surreal alternative to Jesus’ prophecy, suggesting that on the Last Day we will see not Christ in majesty but a menacing, pitiless, and coarse beast who “slouches toward Bethlehem to be born.” “After us, the savage god,” Yeats had said as early as 1896. He watched through the decades, appalled by the sequential horror of world events: the World War of 1914-1918; the failed Easter Rising in Ireland in 1916; the Bolshevik Revolution in 1917. And his first assertions in “The Second Coming” are indeed thoughts prompted by such political upheavals (and by earlier ones — Marie Antoinette appears in the drafts).

    But what sort of assertions does he choose to express his thoughts? After the octave of assertions, there is a break not entirely accounted for, since the whole poem is not written in regular stanzas, and there are no further breaks. The compressed sentiments preceding the break are undermined by the unexplained and increasing mystery of the poet’s phrases, bringing the reader into the perplexity of the poet. The whole octave is full of riddles: What is a gyre? Whose is the falcon? What is the centre the center of? Why all the passive verbs? Who loosed the anarchy? Whose blood, loosed by whom, has dimmed what tide? What is meant by the ceremony of innocence? Who are the best and who are the worst? Such abstract language, such invisible agents, and such unascribed actions persist in Yeats’ opening declarations, down to the period that closes the octave.

    The quotability of Yeats’ opening passage derives, of course, from the total and unmodified confidence of its initial reportage, impersonal and unrelenting, offering a naked list of present-tense events happening “everywhere.” Stripped to their kernels, these are Yeats’ truculently unmitigated hammer-blows of grammar:

    The falcon cannot hear
    Things fall apart
    The centre cannot hold
    Mere anarchy is loosed
    The blood-dimmed tide is loosed
    Everywhere the ceremony of innocence is drowned
    The best lack all conviction
    The worst are full of passionate intensity

    The break, after Yeats’ introductory eight-line block, leads an educated reader to expect that a six-line block will follow, completing a sonnet. Yet the poet finds himself unable to maintain his original jeremiad, which has been aggressive, omniscient, panoramic, and prophetic. Yeats “begins over again,” and utters in the fourteen lines following the break a complete second “sonnet,” a rifacimento of the one originally intended, in which he rejects his earlier rhetoric of impersonal omniscience as inauthentic from his human lips. Who is he to speak as though he could see the world with the panoramic scan proper only to God? That so many successive writers have been eager to reissue his lines reveals how greatly the human mind is seduced by the vanity of the unequivocal. Can we requote without unease what the poet himself immediately rejected?

    Although “The Second Coming” begins with an attempt at couplet-rhyme, soon — as Peter Sacks has pointed out to me — the couplets begin to disintegrate, as though they themselves were intent on demonstrating how “things fall apart.” After the break, Yeats reveals a second attempt at a fourteen-line sonnet, one exhibiting a traditional “spillover” octave of nine lines (implying overmastering emotion in the writer) before a truthful closing “sestet” of five lines, making up the desired fourteen. The second, revisionary octave replaces the certainty of the poet’s original octave with the self-defensive uncertainty of “Surely.” Longing for a revelation more humanly reliable than an unsupported façade of godlike prophecy, Yeats insistently utters his second “Surely,” one no less dubious than the first. The second “Surely” attempts to locate a cultural myth to which he can attach the vision vouchsafed to him in a revelation arising within his human consciousness. “Surely the Second Coming is at hand. / The Second Coming!”

    For the first time in the poem, we hear Yeats speaking in the first person, declaring that “a vast image out of Spiritus Mundi / Troubles my sight.” The poet is the sole spectator of this vast image, and he claims that it stems not from his own bodily sense of sight but from the World Spirit, a universal Spiritus Mundi always potentially able to rise into human awareness. (Poets so often describe the initial inspiration for a poem as something coming unbidden that the reader is not troubled by Yeats’ myth of a World-Spirit supplying the image for his revelation.) The poet has decided that it is more honest, more tenable, to write in the first person, to present himself as one whose imagination has reliably generated a telling and trustworthy “vast image” of his historical moment. He has forsaken his impressive but fraudulent rhetoric of omniscience for an account of his private inspiration.  

    Once Yeats has repudiated his initial “divine” posture as a guaranteed seer-of-everything-everywhere, he can take on, in the first person, his limited historical image-making self and create with it a “human” sestet for his newly “remade” sonnet. Admitting the fallibility of any transient metaphorical image, he acknowledges that his image vanishes, “the darkness drops again,” and he is left alone. Yet he grandly maintains, in spite of his abandoning a prophetic stance, that he now definitely “knows” something.

    The “something” turns out to be a single historical fact: the exhaustion of Christian cultural authority after its “twenty centuries” of rule. His “vast image” — its nature as yet unspecified — has shown him that Christianity will be replaced by a counter-force, a pagan one. Drawing on his reading of Vico and Herbert Spencer, Yeats believed that history exhibited repetitive cycles of opposing forces. Just as Christianity overcame the preceding centuries of Egypt and Greece, now it is time for some power to defeat Christianity.

    In his private “revelation” the poet has seen the Egyptian stone sphinx asleep “somewhere” in sands of a desert. (The uncertain “somewhere” admits the loss of the initial “everywhere” of Yeats’ prophetic opening.) The “stony sleep” of the Sphinx has lasted through the twenty centuries of Christianity, but now Fate has set an anticipatory cradle rocking in Bethlehem, birthplace of the previous god, and a sphinx-like creature rouses itself to claim supremacy:

    The darkness drops again; but now I know
    That twenty centuries of stony sleep
    Were vexed to nightmare by a rocking cradle…

    Although the poet “knows” that Christianity is undergoing the nightmare of its death-throes, he cannot declare with any confidence what will replace it. He can no longer boast “I know that…”: he can merely ask a speculative question which embodies his own mixed reaction of fear and desire to the vanishing of a now outworn Christianity, the only ideological system he has ever known. What will replace the Jesus of Bethlehem, he asks, and invents a brutal and unaesthetic divinity, a sphinx seen in glimpses — “with lion body and the head of a man, / A gaze blank and pitiless as the sun.” The desert birds (formerly, it is implied, perched at rest on the immobile stone of the Egyptian statue) are now disturbed by the unexpected arousal of the “slow thighs” beneath them. The indignant birds, their movement in the sky inferred from their agitated cast shadows, “reel about,” disoriented, projecting, as surrogates, the poet’s own indignation as he guesses at the future parallel upheaval of his own world. Unable to be prophetic, unable now even to say “Surely,” the poet ends his humanly authentic but still unsatisfied sestet with a speculative question, one that fuses by alliteration “beast” and “Bethlehem” and “born”:

    And what rough beast, its hour come round at last,
    Slouches towards Bethlehem to be born?

    A conventional reading of the poem might take us this far. But no one, so far as I know, has commented that the culminating and ringing phrase, “Its hour come round at last,” is an allusion to Jesus’ famous statement to his mother at the wedding feast at Cana. When she points out to her son that their host has run out of wine, he rebukes her as he had once done in his youth when she had lost him in Jerusalem and found him preaching to the rabbis in the temple: “Wist ye not that I must be about my Father’s business?” (Luke 2: 48-49) At Cana, Jesus is even harsher as he tells his mother that he is not yet willing to manifest his divinity: “Woman, what have I to do with thee? mine hour is not yet come.” Not answering her son’s austere question, she simply says to the servants, “Whatsoever he saith unto you, do it.” He tells them to fill their jugs with water, yet when they pour it is wine that issues, as, in silent obedience to his mother, Jesus performs his first miracle, even though to do so means changing his own design of when he will reveal his divinity. The evangelist comments: “This beginning of miracles did Jesus in Cana of Galilee, and manifested forth his glory” (John 2: 4-5,11). Unlike Jesus, who wished to delay his hour of divine manifestation, Yeats’ rough beast has been impatiently awaiting his own appointed hour, and it has come. His allusion to Jesus’ “Mine hour is not yet come” establishes a devastating parallel between the rough beast’s presumed divinity and that of Jesus, as the poet quails before the savage god of the future.

    One senses there must be a literary bridge between the glorious “hour” of Jesus and the hideous hour of the rough beast. As so often, one finds the link in Shakespeare. In Henry V, Shakespeare alludes to Jesus’ remark, but adds the malice and impatience that will be incorporated by Yeats in his image of the rough beast. A French noble at Agincourt describes, in prospect, the vulturous hovering of crows waiting to attack the corpses of the English who will have died in battle. Eager for their expected feast on English carrion, “their executors, the knavish crows, / Fly o’er them, all impatient for their hour.” We know that the rough beast has been, like the crows, “all impatient for [his] hour,” because, once loosed on the world, he knows that his appointed hour, long craved by him, has come “at last.” Yeats had been alluding to Jesus’ words about the appointed hour ever since 1896: in his youthful poem “The Secret Rose,” a benign apocalypse is ushered in by the idealized romance symbol of the rose. He even remembered — writing in 1919 — his original inscription of the longing word “Surely” in the envisaged victory of the Secret Rose:

    Surely thine hour has come, thy great wind blows,
    Far-off, most secret, and inviolate Rose?

    “Surely thine hour has come”; “Surely some revelation is at hand”: apocalyptic symbols thread their way through Yeats’ life-work. In the same volume as “The Secret Rose,” we find a contrastively violent version of the End Times, drawing on the sinister Irish legend of a battle in “The Valley of the Black Pig” ushering in what Yeats called “an Armageddon which shall quench all things in the Ancestral Darkness again.” Just as the brave warrior Cuchulain — in Yeats’ deathbed poem, “Cuchulain Comforted” — must be reincarnated as a coward to complete his knowledge of life, so the serene beauty of the Secret Rose must, to be complete, coexist with a twin, a wildness of apprehension. Maud Gonne, whom Yeats loved in frustration all his life, incarnated for him the conjunction of wildness and beauty:

    But even at the starting post, all sleek and new,
    I saw the wildness in her and I thought
    A vision of terror that it must live through
    Had shattered her soul.

    Maud had already appeared in 1904 as the paradoxical “wild Quiet,” “eating her wild heart” (an image of wild love borrowed from the opening sonnet of Dante’s La Vita Nuova). She is the female companion to another apocalyptic creature, the Sagittarius of the zodiac; he is a Great Archer poised, his bow drawn, in the woods of Lady Gregory’s estate. He, like Shakespeare’s predatory birds, “but awaits his hour” to loose arrows upon a degenerate Ireland, where English archaeologists are sacrilegiously excavating sacred Tara and the ignorant Dublin masses are actually celebrating the coronation in England of Edward VII:

    I am contented for I know that Quiet
    Wanders laughing and eating her wild heart
    Among pigeons and bees, while that Great Archer,
    Who but awaits His hour to shoot, still hangs
    A cloudy quiver over Pairc-na-lee.

    By 1919, in “The Second Coming,” the Yeatsian apocalyptic symbol has shed its early romance component of the idealized Rose, has lost the starry constellation of the vengeful zodiacal Archer, and, in the hour of its Second Coming, has become “a vision of terror” like the one Yeats saw in the young Maud’s soul. Yeats had thought of calling his poem “The Second Birth,” but by renaming it “The Second Coming,” he ensured that in spite of the rocking cradle, all his recurrences of “Mine hour is not yet come” recall the self-manifestation of Jesus not as a child, but as the adult of Cana, the miracle-worker who will return to the world at the end of time.  

    “The Second Coming” is in fact a thicket of allusions. A hybrid one pointing to Spenser’s Faerie Queene and Milton’s Paradise Lost adds an opaque quality to the mythical dimension of the rough beast: he cannot be accurately described. Yeats presents him vaguely as “a shape,” borrowing from Spenser the concept of Death’s resistance to visual representation and from Milton the shapeless word “shape.” In Spenser’s first Mutability Canto, after a procession of months representing the passage of time, Death, symbol of the end of time, appears both seen and unseeable, “Unbodièd, unsouled, unheard, unseen”:

    And after all came Life, and lastly Death;
    Death with most grim and griesly visage seene,
    Yet is he nought but parting of the breath;
    Ne ought to see, but like a shade to weene,
    Vnbodièd, vnsoul’d, vnheard, vnseene.

    Imitating his master, the “sage and serious poet, Spenser,” Milton has his Satan meet Death, equally indescribable except by the word “shape” and its successive ever-less-visible negations (Milton substitutes “shadow” for Spenser’s Hades-issued “shade.”) Death confounds even Satan:

    The other shape,
    If shape it might be call’d that shape had none
    Distinguishable in member, joynt, or limb,
    Or substance might be call’d that shadow seem’d,
    For each seem’d either; black it stood as Night,
    Fierce as ten Furies, terrible as Hell,
    And shook a dreadful Dart; what seem’d his head
    The likeness of a Kingly Crown had on.

    Retaining the word “shape” but changing the concept of the shapeless shadowy “shape” inherited from his predecessors, Yeats attempts to describe in disarticulated images the nameless figure of his own chimerical “vast image” with “lion body and the head of a man”: he adds a description of its gaze “blank and pitiless as the sun,” sexualizing it by the “slow thighs” unattached to any completed bodily description, and debasing it by its “slouching” motion, its lurching advance as it gradually reactivates its stony limbs. So grotesque is the figure, so unnameable by any visual word, that Yeats rejects even his own impotent efforts at specialized description, tethering his final question to the vague words “rough beast,” offering nothing but its genus. It is a generalized “beast” rather than a recognized species, let alone an individual creature.

    There are, then, four evolving motions successively representing Yeats’ mind and emotions in “The Second Coming.” We see first an impersonal set of prophetic declamations; these are replaced by a first-person narration of the appearance of the troubling “vast image” coming to replace the Christian past; this, disappearing, is replaced by a “factual” account of the obsolescence of Christianity (“Now I know”); but after this flat declaration of secure knowledge, Yeats can muster no further direct object of what he “knows.” Instead, he launches a final speculative query (“And what rough beast”). These four feeling-states — impersonal omniscience; a first-person boast of a private “revelation”; a “true” historical judgment as to the nightmarish dissolution of the Christian era; and a blurred query uttered in fear — mimic the poet’s changes of response as he attempts to write down an accurate poem of this life-moment. A desire for authentically human speech has made him turn away from his initial confident (and baseless) soothsaying to a personal, transitory (and therefore uncertain) private “revelation.” He tries finally to attain to truth in judging the end of the Christian era.

    But what truth can he declare of what is to come? He acknowledges — in a move wholly unforeseen in the strong and quotable opening octave — how limited his “knowledge” actually is. The “darkness” of fear cannot be resoundingly swept away by a transitory image from an unknowable source: opacity drops again. By the end, Yeats must forsake his proposed prophetic and visionary and historical styles and resort to a frustrated human voice that confesses the helplessness of the human intellect and the humiliation of admitting incomplete knowledge. At the inexorable approach of an unknowable, shapeless, coarse, and destructive era, “the darkness drops again.” 

    It is not mistaken, however, to think of the resounding opening summary list as “Yeats’ views” as he begins the poem. He even quotes himself in a letter of 1936 to his friend Ethel Mannin, anticipating the next war: “Every nerve trembles with horror at what is happening in Europe, ‘the ceremony of innocence is drowned.’” The sentiments are genuine, but in a poem something more has to happen than the static observation of a moment in time. A credible artifact has to be constructed, the “sweet sounds” have to be articulated, and a persuasive structure has to be conceived. Since Yeats had lost faith in both Blakean denunciation and Shelleyan optimism by the time he wrote “The Second Coming,” he had gained the humility to confess, at the end of the poem, the limits of human knowledge and human vision. Though his diction is still grand in his closing, he is no longer boasting his seer-like knowledge, no longer claiming a unique private vision, no longer able to assuage the nightmare of the End Times of Christianity. To admit Yeats’ final acknowledgment of human incapacity is essential to perceiving his overreaching in his earlier claims to prophetic power and visionary insight. 

    Painful as it is to see the truncated opening lines — however memorable — become all that is left of the poem, and of Yeats’ character, in popular understanding, it is more painful to see the disappearance of the human drama of the poem itself as it evolves, in its desire for authentically human speech and an authentic estimation of human powers — better and truer things than arrogant and stentorian utterances of omniscience. In repudiating his first octave of omniscience, making a break, and then having to write a different “sonnet” to attain a more accurate account of himself and his time, Yeats repeats, by remaking his form, his disavowal of the vain human temptation to prophecy. “Attempting to become more than Man, we become less,” said Blake, in what could serve as an epigraph to Yeats’ intricate and terrifying and regularly misread poem.

    On Indifference

    What blurt is this about virtue and about vice?
    Evil propels me and reform of evil propels me, I stand indifferent,
    My gait is no fault-finder’s or rejecter’s gait,

    I moisten the roots of all that has grown.

    WALT WHITMAN


    The Olympian gods are not our friends. Zeus would have destroyed us long ago had Prometheus not brought fire and other useful things down to us. Prometheus was not being benevolent, though. He was angry at Zeus for having locked away the Titans and then for turning on him after Prometheus helped secure his rule. We humans were just pawns in their game. The myths teach that we are here on sufferance, and that the best fate is to be ignored by these poor excuses for divinities. On their indifference depends our happiness. Fortunately we have only minimal duties towards them, so once the ashes from the sacrifices are swept away, the libations mopped up, the festival garlands recycled, we are free to set sail.

    The Biblical God requires more attention. Though he is sometimes petulant, his providential hand is always at work for those who choose to be chosen. Providence comes at a price, though. We are obliged to fear the Lord, to obey his commandments, and to internalize the moral code he has blessed us with. For purists, this can mean that virtually every hour of every day is regulated. But that is not how the Bible’s protagonists seem to live. They love, they fight, they rule kingdoms, they play the lyre, and only when they lust after a subject’s wife and arrange for his death in battle does God stop the music and call them to account. And repentance done, the band strikes up again. The covenant limits human freedom, but it also self-limits God’s. Our to-do list is not infinite. Once we have fulfilled our duties, we are left to explore the world. We good here? Yeah, we’re good.

    Tut, tut, child! Everything’s got a moral, if only you can find it.

    THE DUCHESS, ALICE IN WONDERLAND


    But as a Christian my work is never done. I must have the vague imitatio Christi ideal before my eyes at all times and must try to answer the riddle, what would Jesus do?, in every situation — and bear the guilt of possibly getting the answer wrong. Kierkegaard was not exaggerating when he said that the task of becoming a Christian is endless. It can be brutal, too. Jesus told his disciples they must be ready at any moment to drop everything if the call comes, adding, if any man come to me, and hate not his father, and mother, and wife, and children, and brethren, and sisters, yea, and his own life also, he cannot be my disciple.

    Saint Paul’s God has boundary issues. More busybody than Pied Piper, he is always looking into our hearts, parsing our intentions, and demanding we love him more than we love ourselves. That master of metaphor Augustine found a powerful one to describe the new regime: Two cities have been formed by two loves: the earthly city was created by self-love reaching the point of contempt for God, the Heavenly City by the love of God carried as far as contempt of self. He hastened to add that the earthly city plays a necessary role in mortal life, offering peace and comfort in the best of times. But over the millennia — such is the power of metaphor over reason — zealots hedging their bets have concluded that if we are to err, it is better to fall into self-loathing than discover any trace of pride within. A moral scan will always turn up something. And so they lock themselves into panopticons where they serve as their own wardens and where nothing is a matter of spiritual indifference.

    Subsequent Christian theologians raised doubts about this rigorist picture of the Christian moral life. In the Middle Ages they debated whether there might be such things as “indifferent acts,” that is, acts that have no moral or spiritual significance. Scratching one’s beard was a common example used by the laxists. Aquinas conceded the point concerning beards, but otherwise declared that if an action involves rational deliberation at all, it cannot be indifferent, since reason is always directed towards ends, which can only be good or evil. Q.E.D. And so the class of genuinely indifferent acts was left quite small in official Catholic teaching. That sat just fine with a monastic and conventual elite already devoting their lives to self-abnegating spiritual exercises, accompanied by tormenting doubts about whether such exercises were prideful. But they were a class apart. Ordinary clerical functionaries led more lenient lives, which is how we got cardinals with concubines and with Titian portraits of themselves hanging over the fireplace. Vigilance was not their vocation.

    In the Protestant view, that was precisely the problem. Protestantism, and Calvinism in particular, brought back moral rigorism and then democratized it. Now every burgher was expected to frisk himself while meditating on the terrifying mystery of predestination. The anxiety only increased when Protestants faced the choice among different and hostile denominations. Was there only one true church? Or were certain dogmatic disputes among denominations matters of indifference to God? Combatants in the Wars of Religion said no: true Christians must not only walk the right walk, they must talk the right talk. But, over time, as the denominations proliferated like tadpoles in a pond, and the doctrinal differences among them became more abstruse, the rigorist line became more difficult to maintain. Perhaps the Lord’s house has many mansions after all.

    That thought is exactly what Catholic critics of the Reformation worried about. If we concede that there are many Christian paths to salvation, people will ask whether there are also non-Christian religious paths. If we concede that there are, they will then ask whether there are decent and admirable non-religious paths to moral perfection. And if we concede that there are — here is the crucial leap — they will be tempted to ask whether there might also be decent and admirable ways of life that do not revolve around moral perfection. The danger would not be that people would abandon morality altogether; no self-declared anti-moralist, not even Nietzsche, has ever renounced the words must and ought. It would be that they would start considering morality to be just one dimension of life among others, each deserving its due. It would mean the end of morality’s claim to be the final arbiter of what constitutes a life well lived.

    The gradient on this slope of questioning is steep. Montaigne slid to the bottom of it while the Wars of Religion were still raging and has been dragging unsuspecting readers along with him ever since. He did not openly state the case against the imperialism of conscience; a bon vivant, he was in no rush to become a bon mourant. Instead he wrote seemingly lighthearted essays full of anecdotes that subtly held up the rigorist life to ridicule or revulsion, implying that there must be a better way to live, without specifying exactly what that might be. He only pointed to himself as a genial, indeed irresistible, exemplar of tolerant, urbane contentment.

    Pascal, Montaigne’s greatest reader, immediately discerned the threat that the Essays posed to the Christian moral edifice: Montaigne inspires indifference about salvation, without fear and without repentance. Atheism is refutable, but indifference is not. The scholastic debate over indifferent acts had presumed a desire to get our moral houses in order. The Reformation and Counter-Reformation debates over justification presumed a desire to get our theological houses in order. Montaigne’s indifferentism, as it came to be called, made all well-ordered houses look menacing or faintly ridiculous. That is why indifferentism was denounced along with liberalism as modern “pests” by Pope Pius IX in his Syllabus of Errors of 1864. He understood that there is nothing more devastating to dogma than a shrug of the shoulders.

    It is nonsense and an antiquated notion that the many can do
    wrong. What the many do is God’s will. Before this wisdom all
    people have to this day bowed down — kings, emperors,
    and excellencies. Up to now all our cattle have received
    encouragement through this wisdom. So God is damned well
    going to have to learn how to bow down too.

    KIERKEGAARD


    Americans’ relation to democracy has never been an indifferent one — or a reasoned one. For us it is a matter of dogmatic faith, and therefore a matter of the passions. We hold these truths to be self-evident: has ever a more debatable and consequential assertion been made since the Sermon on the Mount? But for Americans it is not a thesis one might subject to examination and emendation; even American atheists skip over the endowed by their Creator bit in reverent silence. We are in the thrall of a foundation myth as solid and imposing as an ancient temple, which we take turns purifying like so many vestals. We freely discuss how the mysterium tremendum should be interpreted and which rituals it imposes on us. But the oracle has spoken and is taking no further questions.

    Which is largely a good thing. Not long ago there was breezy talk of a world-historical transition to democracy, as if that were the easiest and most natural thing in the world to achieve. Establish a democratic pays légal, the thinking went, and a democratic pays réel will spontaneously sprout up within its boundaries. Today, when temples to cruel local deities are being built all over the globe, we are being reminded just how rare a democratic society is. So let us appreciate Americans’ unreasoned, dogmatic attachment to their own. Not everything unreasoned is unwise.

But neither are all good things entirely good. This is what the dogmatic mind has trouble grasping. If some end — the rule of the saints, say, or the dictatorship of the proletariat — is deemed to be worth pursuing, the dogmatist needs to believe it is the only and perfect good, carrying no inherent disadvantages. Blemishes must be ignored so as not to distract the team. But once problems become impossible to ignore, as inevitably they will, they must be explained. And so they will be attributed either to alien, retrograde forces that have infiltrated paradise, or to insufficient zeal among believers in pursuing the good. The dogmatic mind is haunted by two specters: the different and the indifferent.

Americans’ dogmatism about democracy strengthens their attachment to it, but it weakens their understanding of it. The hardest thing for us is to establish enough intellectual distance from modern democracy to see it in historical perspective. (While virtually every American university has courses on “democratic values,” I am unaware of any that offers one on “undemocratic values,” despite the fact that almost all societies from the dawn of time to the present have been governed by them.) The Framers had experience with monarchy and had studied the failed republics of the European past. They looked upon democracy as one political form among others, a means to particular ends, with strengths and weaknesses like any other political arrangement. But once Americans in later generations came to know nothing but democratic life, democracy became the end itself, the summum bonum from which all discussion and debate about means must flow. When Americans ask how can we make our democracy better? what they are really asking is how can we make our democracy more democratic? — a subtle but profound difference.

    Our dogmatism shows up in other ways, too. Spend some time abroad and you start to notice that Americans rarely express mixed feelings about their country as other peoples do about theirs. We oscillate humorlessly between defensive boosterism and self-flagellation, especially the latter over the past half century. Today there is nothing more American than condemning American democracy or declaring ourselves alienated from it. Yet the only charge we can think of leveling against it is that of failing to be democratic enough. No one appreciates the irony except the alert foreign observer with a sense of humor, like the divine Mrs. Trollope. Foreign anti-Americanism is always, at some level, anti-democratic, which is what can make it enlightening, and useful to us. American anti-Americanism is hyper-American and earnest as dust. We find it virtually impossible to get outside ourselves. We breed no Tocquevilles, we must import them.

    Other countries claim to revere democracy, and many do. But few think of democracy as a never-ending moral project, a world-historical epic. And none have considered it their divine duty to bring democracy to the unbaptized. The Protestant stamp on the American mind is so deep that collectively we take on the mantle of the Pilgrim Church marching towards a redemption in which all things will be made new. For much of our history the sacred individual task of becoming a more Christian Christian ran parallel to the sacred collective task of becoming a more democratic democracy. Note that I do not say liberal democracy. For there is nothing liberal about Americans when they are on the march. Which is why when conscription begins, the indifferent, who for whatever reason do not feel like marching just now or have other destinations in mind, beat a retreat. Some have sought refuge in rural solitude, some in the American metropolis, some in foreign capitals. Anywhere where they might be free of the unremitting imperative to become a better person or a better American. Anywhere where they could simply become themselves.

    The thesis that huge quantities of soap testify to our greater
    cleanliness need not apply to the moral life, where the more
    recent principle seems more accurate, that a strong compulsion
    to wash suggests a dubious state of moral hygiene.

    ROBERT MUSIL


    A hand goes up in the audience: But we are no longer a Protestant country! We are a secular one that has gotten over religious conformism. What on earth are you talking about?

    Thank you for that question. In one decisive respect we have indeed moved beyond Protestantism: we no longer believe we are fallen, sinful creatures. The Protestant divine was severe with his flock and occasionally with his country, but he was also severe with himself. He was a busybody because his God was a busybody who put everyone, including the clergy, under divine scrutiny. There is none righteous, no, not one, says Saint Paul. What a terrible way to start the day.

    But in other respects we have retained vestiges of our Protestant heritage and even exaggerated them. Hegel foresaw this. Considering the moral and religious psychodynamics of his time, he observed that the Dialectic has a sense of humor: toss Calvin out the front door and Kant sneaks in through the back. No sooner had the empiricism and skepticism of the Enlightenment disenchanted nature, draining it of moral purpose, than German idealism surreptitiously reestablished the principles of Christian morality on abstract philosophical grounds. And no sooner had Kant midwifed that rebirth than the moral impulse floated free of his universalist strictures and became more subjective, less subtle, more excitable, less grounded in ordinary existence. In a word, it became Romantic. The saints are dead; long live the “beautiful souls.”

What is a beautiful soul? For Schiller, who coined the term, it was a person in whom the age-old tension between moral law and human instinct had been overcome. In a beautiful soul, he wrote, individual deeds are not what is moral. Rather, the entire character is… The beautiful soul has no other merit, than that it is. Schiller imagined individuals who so fully incarnate the moral law that they have no need of moral reasoning and who experience no struggle to surmount the passions. This beautiful soul does not really act morally, it simply behaves instinctively — and such behaving is good. (Ring a bell? And God saw every thing that he had made, and, behold, it was very good.) A disciple of Kant, Schiller took the moral law to be by definition universal. What he did not anticipate was that the notion of a beautiful soul could inspire a radical impudence in anyone convinced of his or her own inner beauty. Who would not want to be crowned a moral Roi Soleil, absolved in advance of guilt, self-doubt, repentance, and expressions of humility? Who would not want to learn that the definition of righteousness is self-righteousness?

    So, in answer to the question, yes, in one sense America is a post-Protestant nation. The uptight Bible-thumping humbug of yore has been shamed off the public square — but only to make room for networks of self-righteous beautiful souls pronouncing sentence from the cathedras of their inner Vaticans. What no one seems to recognize is that they are an atavism, a blast from the past, not a breeze from a progressive future. Like their ancestors, they are prone to schisms and enter civil wars with the giddiness of Knights Templar descending on Palestine. Yet they are bound together by an unshakeable old belief that when it comes to making the world a better place there are no indifferent acts, no indifferent words, no indifferent thoughts, and no rest for the virtuous. Our beautiful souls are Marrano Christians as radical as old Saint Paul. They just don’t know it. Yes, the Dialectic really does have a sense of humor.

    “Ah,” Miss Gostrey sighed, “the name of the good American
    is as easily given as taken away! What is it, to begin with, to be
    one? And what’s the extraordinary hurry?”

    HENRY JAMES


    America is working on itself. It is almost always working on itself because Americans believe that life is a project, for individuals and nations. No other people believes this quite the way we do. There is no Belgian project, no Kenyan project, no Ecuadoran project, no Filipino project, no Canadian project. But there is an American project — or rather a black box for projects that change over time. We are always tearing out the walls of our collective house, adding additions, building decks, jackhammering the driveway and pouring new asphalt. We are seldom still and never quiet. And when we set to work we expect everyone to pitch in. And that means you.

    Which can put you in an awkward position. Let’s say you are unhappy with the project of the moment. Or you approve of it but think it should be handled differently. Or you appreciate the way it is handled but don’t feel particularly inclined to participate right now. Or you even want to participate but resent being dragooned into it or learning that others are being punished for not joining in. Or say that you simply want to be left alone. In any other country these would be considered entirely reasonable sentiments. But not in America when it is at work on itself.

    The projects of our moment may sound radical, but they are just extensions of the old principles of liberty, equality, and justice. That certainly speaks in their favor. What is new, thanks to our beautiful souls, is that the task of making this a better America has now been conflated with that of making you a better person. In the Protestant age, the promotion of Christian virtue ran parallel to the promotion of democracy but usually could be distinguished from it. Bringing you to accept Jesus as your personal savior had nothing necessarily to do with bringing you to accept William Howard Taft as your national savior. The first concerned your person, the second concerned your country.

    In the age of the beautiful soul our evangelical passions have survived and been transferred to the national project, personalizing it. Beautiful souls believe that one’s politics emanate from an inner moral state, not from a process of reasoning and dialogue with others. Given that assumption, they reasonably conclude that establishing a better politics depends on working an inner transformation on others, or on ostracizing them. And thanks to the wonders of technology, the scanning of other people’s souls has never seemed easier.

    These wonders have also landed us in a virtual, and global, panopticon. It has no physical presence, it exists solely in our minds. But that is sufficient to maintain a subtle pressure to demonstrate that we are all fully with the newest American projects. In periods of Christian enthusiasm in the past, elites would make ostentatious gestures of faith in order to ward off scrutiny. They would fund a Crusade, commission an altarpiece, make a pilgrimage, join a confraternity, or sponsor a work of theological apologetics. Virtue-signaling is an old human practice. Today the required gestures are of a political rather than spiritual nature. We have all, individuals and institutions, learned how to make them by adapting how we speak, how we write, how we present ourselves to the world, and — most insidiously — how we present the world to ourselves. By now we hardly notice that we are making such gestures. Yet we certainly notice when the codes are violated, even inadvertently; the reaction is swift and merciless. Such inadvertence, even due to temperament or sensibility, is read as indifference to building a more democratic America, which ranks very high on the new Syllabus of Errors.

    It is of vital importance to art that those who are made its
    messengers should not only keep their message uncorrupted, but
    should present themselves before their fellow men in the most
    unquestionable garb.

    THE CRAYON (1855)


Aristocracies are aloof and serene. American democracy is needy and anxious. It wants to be loved. It is like a young puppy that can never get enough petting and treats. Who’s a good boy? Who’s a very good boy? And if you repeat this often enough, eventually the dog will lick your face, as if to say, and you’re a good boy too! The rewards for satisfying this neediness, and the penalties for failing to satisfy it, are powerful incentives to conform in just about every sphere of American life, nowhere more consequentially than in intellectual and artistic matters. Every society, every religion, every form of government offers such incentives. Since ancient times worldly intellectuals and artists have understood that they are never entirely free from the obligation to genuflect occasionally, and the clever ones learn how to wink subtly at their audiences to signal when they are doing just that. L’art vaut une messe. Romanticism in the nineteenth century was the first movement to fuel the fantasy of complete autonomy from society, only to itself become a dogma that all thinkers and artists were expected to profess.

It is one thing, though, to self-consciously genuflect when necessary — and then, just as self-consciously, to stand up when mass is over and return to your workplace. It is quite another to convince yourself that kneeling is standing. Or that you must turn your workplace into a chapel. What Tocqueville meant by the “tyranny of the majority” was exactly this infiltration of public judgment into individual consciousness, changing our perceptions of and assumptions about the world. It is not really “false consciousness,” which is the holding of false beliefs that enhance the power of those who dominate others. Rather it is a kind of group consciousness that morphs and re-morphs arbitrarily like cumulus clouds. False consciousness obscures precise class interests. The tyranny of the majority obscures the interests, feelings, thoughts, and imagination of the self.

What is so striking about the present cultural moment is how many Americans who occupy themselves with ideas and the imagination — writers, editors, scholars, journalists, filmmakers, artists, curators — seem to be suffering from Stockholm Syndrome. Rerouted from their personal destinations toward a more moral and democratic America, they are losing the instinct to set their own course. They no doubt believe in what they are doing; the question is whether they are in touch enough with themselves to feel any healthy tension between their presumed political obligations and whatever other drives and inclinations they might have.

Talk to creative young people today and prepare yourself for the patter celebrating the new collective journey, which they have no trouble linking to their personal journeys, however short those still are. The rhetoric of identity is very useful here because it has both individual-psychological and political meaning, blurring the distinction between self-expression and collective moral progress. That is also why identity-talk has become the lingua franca of all grant-making and prize-giving bodies in the United States. The committees are much more comfortable exercising judgment based on someone’s physical characteristics and personal story than exercising aesthetic and intellectual judgment based on the work. Little do the well-meaning young people drawn into this game suspect that they are not advancing into a more progressive twenty-first century. They have simply been rerouted back to the nineteenth century, where they must now satisfy a newer, hipper class of Babbitts. Or, worse, become their own Babbitts, convincing themselves that their creative journeys really are and ought to be part of a collective moral journey.

    This is not to say that art has nothing to do with morality. Morality in the broadest sense, the fate of having to choose among conflicting ends and questionable means, is one of art’s great subjects, particularly the literary arts. But the art of the novelist is not to render categorical moral judgments on human action — that’s the prophet’s job. It is to cast them into shadow, to explore all the ruses of moral reasoning. Literature and art are not sustenance for the long march toward national redemption. They have nothing whatsoever to do with “giving voice” or “telling our stories” or “celebrating” anyone’s or any group’s achievements. That is to confuse art with advertising copy. The contribution of literature and art to morality is indirect. They have the power to remind us of the truth that we are mysteries to ourselves, as Augustine put it. Literature is not for simpletons. Billy Budd was not written for Billy Budds. It was written for grown-ups, or those who would become one. Which is why the status of literature and the other arts has never been terribly secure in the land of puer aeternus.

    In the American grain it is gregariousness, suspicion of privacy,
    a therapeutic distaste in the face of personal apartness and
    self-exile, which are dominant. In the new Eden, God’s creatures
    move in herds.

    GEORGE STEINER


For some, art and reflection have always served as a refuge from the world. In America, the world more often serves as a refuge from art and reflection. We are only too happy when the conversation turns from such matters to those thought to be more practical, more pedagogical, more ethically uplifting, or more therapeutic. The history of anti-intellectualism in America is less one of efforts to extinguish the life of the mind than to divert it toward extraneous ends. (See On the Usefulness of the Humanities for Electrical Engineering, 3 vols.) Such efforts reflect a perverse sublimation of the eros behind all creative activity, redirecting it from the inner life of the creative person toward some activity that can be judged in public by committees. The result, in intellectual and artistic terms, is either propaganda or kitsch. And we are drowning in both.

    Censorship in America comes and goes. Self-censorship does too, depending on the public mood at any particular time. The most persistent threat to arts and letters in America is amnesia, the forgetting of just what it is to cultivate an individual vision or point of view in a place where thinking, writing, and making are judged to be necessarily directed toward some external end. The barriers to becoming an individual in individualistic America should never be underestimated. Tocqueville’s deepest insight was into the anxieties of democratic life brought on by the promise and reality of autonomy. Freedom is an abyss; the urge to turn from it is strong. The tyranny of the majority is less a violent imposition than a psychologically comprehensible form of voluntary servitude.

    In such an environment, maintaining a state of inner indifference is an achievement. Indifference is not apathy. Not at all. It is the fruit of an instinct to moisten the roots of all that has grown, as Whitman put it, and experience one’s self and the world intensely without filters, without having to consider what ends are being served beyond that experience. It is an instinct to hit the mute button, to block out whatever claims are being made on one’s attention and concern, confident that heaven can wait. It is an instinct for privacy, far from the prying eyes and wagging tongues of beautiful gods and beautiful souls. It is a liberal instinct, not a democratic one.

Liberalism, Judith Shklar once wrote, is monogamously, faithfully, and permanently married to democracy — but it is a marriage of convenience. That is exactly right. The liberal indifference of Montaigne was a declaration of independence from the religious zealots of his time. But zealotry is zealotry, and democracy has its own zealots. We may look more kindly on their aims but they are no less a potential threat to inner freedom than our homegrown messiahs are. The indifferent appreciate democracy to the extent that it guarantees that freedom; they distrust and resist it the moment they are invited down to the panopticon for a little chat. They are not anti-democratic or anti-justice or reactionary. They understand that a liberal democracy requires solidarity and sacrifice, and reforms, sometimes radical ones. They wish to be good citizens but feel no obligation to cast down their nets and join the redemptive pilgrimage. Their kingdom is not of this continent.

It is a paradox of our time that the more Americans learn to tolerate difference, the less they are able to tolerate indifference. But it is precisely the right to indifference that we must assert now. The right to choose one’s own battles, to find one’s own balance between the True, the Good, and the Beautiful. The right to resist any creeping Gleichschaltung that would bring a thinker’s thoughts or a writer’s words or an artist’s or filmmaker’s work into alignment with a catechism. Dr. Bowdler be damned.

    America is working on itself. Let it work, and may some good come of it. But the indifferent will politely decline the invitation to shake pom-poms on the sidelines or join a Battle for The American Soul just now. Why now? Because the illiberal passions of the moment threaten their autonomy and their self-cultivation, and have formed a generation that fails to see the value of those possessions. That is the saddest part. Perhaps a later one will again find it inspiring to learn what the early modernist writers and artists who fled the country believed: that America’s claim on us is never greater than our claim on ourselves. That democracy is not everything. That morality is not everything. That nothing is everything.

    “From 2020”

    1.
    The first half having been
    given up to space, I decided
    to devote my remaining
    life to time, this thing we live
    in fishily or on like moss
    or the spores of a stubborn
    candida strain only to be
    gored or gaffed, roots
    fossicked out by rake or have
    our membranes made so permeable
    by -azole drugs the contents
    of the cell flood everywhere.

    The bubble gun I’d bought
    on Amazon had come, so
    flushed, time’s new novitiate,
    I stood outside the door
    in velour slippers with a plastic
    wedge, from M&S, the toes
    gone through, and practised
    pulsing softly on the trigger,
    pushing dribbly hopeless sac
    shapes out, dead embryos
that managed all the same
    to right themselves to spheres,
    and bob as bubbles do, the colour
    of a rainbow minced or diced
    into the ornamental tree, or else
    just brim the fatal fence, most
    out of reach of the toddler
    capering side to side to keep
    his balance on the grass, one
    snotty finger prodding like a
    rapper turned jihadist’s threat
    of threat and all, ten seconds in,
    unskinned of radiance,
    re-rendered air.

    This would have been in that
    sad hobbled stretch of week
    between a Sunday Christmas
    and new year, my friends all
    40+, harassed by infants, joylessly
    still slugging Côte de Beaune
    and fennel-roasted nuts, the liver
    detox books not downloaded
    to app but only browsed by phone
    in the dark mornings, slitless.
    (I lay there worrying at my own
    which had the meaty bigness
    underrib of foie gras entier.
    The pillow case smelled horsey,
    sheets unchanged, the laundry
    everywhere, mountainously.)

    It wasn’t till my birthday,
    Jan 3, when schools went back,
    search engines saw a volume
    spike for ‘custody’ and gifs of
    sullen cats with emery boards
    explained the dead-eyed un-
    sheathed fear produced by credit
    card repayment plans and pissing
    on ketosis sticks that the month
    could manifest the rawness
    of new year: poverty then,
    and mock exams; now, enzyme
    supplements, and softening
    the 11s, scooped one layer
    deeper by all that red wine,
    by summer’s oxidative damage.

    2.
    The dry trees lolled in drunken
    groups outside front gates,
    waiting for the council van
    to come. Today, which was
    my birthday, macerated shit
    in nappies from the 24th,
    threaded by the bin in links,
    by twisting, like short sausages
    or poodles fashioned from
    balloons, was binned along
    with bean tails, tonic bottles,
    nails, a mini Lamborghini’s
    snapped-off wheels, a magnum
    bitter round the rim with old
    champagne (that halitosis smell),
    and twenty near-identical
    reception Christmas cards:
    a stippled snow-hung tree
    a bloated, ravaged robin.

    My son propped on one hip,
    front door ajar, both shivering
    in the not yet dawn, the heating
    just about to crackle on —
    raised up his palm in silent
    pleasure at the work being done.
    One man, his shoulders dewy
    with reflective strips, waved
    back and called him by his name
    — the weekly ceremony —
    until he bristled in my arms
    legs stiffening with joy.

    3.
    Downstairs I mixed some Movicol
    into warm juice and saw a
    squirrel run across the grass,
    freeze skinny as a meerkat
    on the mostly mud I’d tried
    to reseed twice last summer.
    (After moss killer, waiting,
    something ferrous, the shady
    lawn seed recommended by
    a friend eventually produced,
    as if by staple gun, a few sparse
    fiercely emerald reeds which died.)

    Both boys had scrambled over
    look! and when they turned away
    behind the mouth and nose
    breath diamonds, fading,
    the squirrel was spray-digging,
    pelleting again, even though
    he must have polished off his nuts
    by Halloween. We’d seen him,
    bushier then, a baby really,
    slyly going back and back,
    as we did on school coach trips
    to the battlefields of Ypres
    ripping through the Monster Munch
    long before the sickening ferry
    with its waffle smell and slot
    machines, the textbook poppy
    fields we’d seen on Blackadder,
    now stretching flatly, forever.

    I suppose the squirrel didn’t know
    the days would stick like curtains
    catching on the outer edge
    of the metal track, the yellow
    fleur de lys a half inch less
    wide open every morning.
    I knew that I could probe it,
    hey Siri, do most squirrels
    make it through to spring in their
    first year of life in urban
    environments, but the fact
    that I was always ladling
    porridge as he dug, donating
    raisins, doing calligraphy
    with smooth or crunchy
    peanut butter — there was
    that whole jack-o-lantern
    month, involving apricots,
    when it rained — only added
    to my sense of having been
    complicit in his losses:
    the bad grass, the Amazon
    deliveries that kept coming
    in white Toyota vans, the
    part-thawed corn cobettes
    siloed in their own brown bag,
    spongy with a mortuary
    softness that repelled me.
    He’d seen all that.

    The boys must be upstairs
    — a long withdrawing roar of
    Avalanche! the scuff of
    falling cushions — so I grabbed
    a handful of cashews and stood,
    unseen outside the window,
    scattering them contritely on
    the mud, around the reeds
    now colourless, and the small
    quill of his wavering tail.

    4.
    It being my birthday I was
    standing there, lost in the screen,
    the screen the same for reading
    on and writing this,
    for writing to, for finding out
    how many steps I’d taken
    yesterday/ in March last year,
    when I had spotted, bled,
    the algorithm always and
    upbraidingly concerned
    with sensed decline: a higher
    average headphone volume,
    deafness beckoning,
    and fewer steps, an upward
    trend in weight from these slack days
    around the year’s end picking
    at the Roses box, and making
    desperate cupcakes from a
    bbe last August box mix
    (the dribbly icing misty
    on the spoon, the wafer dog
    — a fireman — loosely hanging on)
    morbid obesity, then death.

    Its view of future time was,
in a sense, so frictionless
    I envied it — that whole fin-
    de-siècle confidence:
    if history wasn’t progress
    it was Untergang, Déclin,
    the line traced out as if a ball
    dropping from the balltoss
    met the racket’s sweetspot
    swoof and whipped across the net
    and up, and up, so rather
    than returning it evaded
    satellites, fine meteorites,
    the rain, all things held still
    or left to fall, by gravity,
    and just went up and up,
    and quietly on. In China
    health authorities alarm
    as virus tally reaches 44
    in capital of Hubei province
    Wuhan, I could have read,
    if I’d read every piece of news
    that day. I didn’t, of course.

    5.
    Later, as we watched the moth’s
    drab plates of wing contracting
    on the windowpane or rented
    house’s limewashed skirting board,
    my son would talk of new year
    as the time when we had supper
    in the living room and ‘I
    was very ‘cited’. After baths,
    bedtime, the news, the news,
    Zoom wine with friends whose distant
    houses were still lapped, dustily,
    by sun, I lay unblinking
    on the bed, bean-fed again
    (shakshuka, quesadillas,
    cannellini mulched to paste:
    the NYT was camping poverty)
    and worked the chalky residue
    two paracetamols (expired)
    had striped across my tongue
    with squash, a pint. I searched
    for pleurisy, rib pain, cut glass
    opacities, read Twitter feeds
    of people in Berlin disputing
    quarantine R0 pathogen
    that ship the Princess Diamond
    why cocoons are never safe,
    then watched a video of snow
    massing right to left across
    the scientist’s window in Pankow
    until it was the only medium
    the only crazily still
    mobile thing behind the window
    flecked with paint chips, greasy
    fingerprint galaxies. Beyond,
    beyond: the snow did as it pleased
    effaced revealed the avenue
    he lived on with its scrub of
    park, its single taxi, and the lines
    of parked-up old estates which
    like the broken-backed receding
    linden trees reached to the
    grey horizon’s grainy limit.

    The Peripheralist

During Black History Month earlier this year, the New York City streetwear boutique Alife brought to market a limited set of six heather grey hooded sweatshirts made of heavyweight, pre-shrunk fourteen-ounce cotton fleece, with ribbed cuffs and waist. The garments, whose sole decorative flourish was the names of black cultural icons — from Harriet Tubman to Marcus Garvey — screen-printed in sans-serif across the chest, retailed for $138 a pop and sold out promptly. Of the six men and women featured in the campaign, there was only one writer: James Baldwin.

    On Instagram, to promote its product, the brand deployed a short clip of Baldwin’s extraordinary debate against William F. Buckley, Jr., on the theme “Is the American Dream at the Price of the Negro?” at the Cambridge Union in 1965 — a grainy YouTube gem beloved by aficionados that was recently brought to mainstream attention in Raoul Peck’s documentary I Am Not Your Negro. A friend messaged the post to me accompanied by the Thinking Face emoji, finger and thumb against the chin, a look of skepticism. I responded differently. I wasn’t incredulous about this cultural commoditization: Baldwin’s name had long since become a kind of shorthand, an emblem of a position — a way, increasingly fashionable in its own right, to signal which side of any number of contested issues of the day one wishes to come down on.

    Jean-Paul Sartre once described the young Albert Camus as “the  admirable conjunction of a man, of an action, and of a work,” by which he meant, simply, that there was no daylight between his life and his ideas, and it was impossible to think of one without conjuring the other. In an essay for the New York Review of Books in 1963, in which she contrasted morally virtuous if artistically second-tier writers (“husbands”) with perverse and reckless but exciting geniuses (“lovers”), Susan Sontag took Sartre’s observation as a springboard for a merciless review of Camus’ posthumously published Notebooks. “Today only the work remains,” she asserted. “And whatever the conjunction of man, action, and work inspired in the minds and hearts of his thousands of readers and admirers cannot be wholly reconstituted by experience of the work alone.” Elsewhere she expanded the critique:

    Whenever Camus is spoken of there is a mingling of personal, moral, and literary judgment. No discussion of Camus fails to include, or at least suggest, a tribute to his goodness and attractiveness as a man. To write about Camus is thus to consider what occurs between the image of a writer and his work, which is tantamount to the relation between morality and literature. For it is not only that Camus himself is always thrusting the moral problem upon his readers. … It is because his work, solely as a literary accomplishment, is not major enough to bear the weight of admiration that readers want to give it. One wants Camus to be a truly great writer, not just a very good one. But he is not. It might be useful here to compare Camus with George Orwell and James Baldwin, two other husbandly writers who essay to combine the role of artist with civic conscience.

    What occurs between the image of a writer and her work: the same problem afflicts the reception of Sontag herself. Still, she has a point. She writes elsewhere that Camus, as a novelist, attained a different altitude than either Orwell or Baldwin, but I have never been able to unsee that dressing down of all three “husbandly” men, Baldwin in particular, or to entirely dislodge him from her framework. As the years accumulate and Baldwin’s image and moral authority become ever more flattened, ever more frequently appropriated for the preoccupations of the present moment — with the most casual assumption of self-evidence — something in Sontag’s refusal to play along nags at me. In any event, and even though Baldwin, later in his career, wrote that he had “never esteemed [Camus] as highly as do so many others,” I have always found it useful to think of him as a kind of Harlem companion to the scholarship student from Algeria who became — and then failed to remain — his nation’s moral compass, who was blessed with the same gift of preternatural eloquence, and who struggled mightily and elegantly and perhaps vainly to bridge the disparate worlds that he straddled.

Like Camus, a decade his senior, James Baldwin was born in the first quarter of the twentieth century in squalor, about as far as possible — spiritually if not physically — from the glittering intellectual circles that he would come to dominate. Both young men were total packages, publishing stories, novels, plays, essays, reviews and reportage after having exploded on the scene fully formed in their twenties. Likewise, both men rose to global stardom outside their home countries, specifically in Paris, and peaked at an age when others only start to hit their stride — more or less around forty. Unlike Camus, Baldwin was not exactly fatherless, but it was necessary for him to eliminate one such figure after another to make space in his life for his own prodigious talent. In this sense, he was every bit the “first man” that Camus intended. By the time that Baldwin died of stomach cancer in the sunbaked Mediterranean village of Saint-Paul-de-Vence — not so far from the equally picturesque medieval town of Lourmarin, where Camus invested his Nobel money and is buried — he too was regarded as passé by a generation of readers no longer interested in reconciling differences or avoiding conflict. “Unfortunately, moral beauty in art — like physical beauty in a person — is extremely perishable,” Sontag warned. Baldwin did have the good fortune to have won at least two very influential younger champions in Henry Louis Gates, Jr. and Toni Morrison. But it was not at all a foregone conclusion that he would become, in the next three decades, nothing less than the pop culture patron saint of an entire generation of black (and increasingly non-black) artists, activists, and writers, in America and beyond. I am referring to the generation that came of intellectual age during the Obama presidency and the Black Lives Matter movement, which defined this decade’s response to the spate of highly publicized police and vigilante killings of unarmed African Americans, beginning with Trayvon Martin’s murder in Sanford, Florida in 2012. The enormous renewal of attention paid to Baldwin — which, at least until the coronavirus catapulted The Plague back onto bestseller lists around the world, had eluded Camus — has certainly been merited and illuminating. It has also been reductive and disturbing.

    Poor, black, and not straight — intersectional avant la lettre — Baldwin fits seamlessly, as very few icons from the past are able to do, into the readymade template of our era’s obsession with identity. (Even Sontag, a near-exact contemporary who outlived him by almost twenty years, could not entirely bring herself to admit that she was gay.) Books about Baldwin abound, biographical and literary and political studies, and films too: a cottage industry of Baldwiniana has emerged over the past decade. The most sensational entry in the contest for Baldwin’s halo would have to be Ta-Nehisi Coates’ Between the World and Me, his letter to his teenaged son that was formally modeled on the first section of Baldwin’s book The Fire Next Time, called “My Dungeon Shook: an open letter to my nephew.” The motor of Coates’ essay was the question that Baldwin debated with Buckley — is the American Dream at the price of the Negro? In his own response to that question, Coates divided America into two essentialized camps, the “Dreamers” and a permanent black underclass. Between the World and Me went on to become one of the most widely read and discussed works of nonfiction in the new century.

In the book’s sole blurb, the late Morrison herself enthused: “I’ve been wondering who might fill the intellectual void that plagued me after James Baldwin died. Clearly it is Ta-Nehisi Coates.” More than anything else, that endorsement bound the two men together in the public’s imagination. In his biography of Baldwin, which appeared last year, Bill V. Mullen goes so far as to argue that Between the World and Me “was singularly responsible for the rediscovery of Baldwin by the Black Lives Matter movement.” Whether or not that is true, five years out a certain irony is clear: Morrison’s remark and Coates’ success had an even greater impact on the way we perceive Baldwin than on the way we perceive Coates.

Despite the hard-won optimism and ardent emphasis on reconciliation and regeneration through love that distinguishes his work, there is an undeniably pessimistic strain in Baldwin that often rings prophetic today. Drawing on this latter element alone, Coates captured and vocalized the profound disappointment provoked by the many limitations of the first black presidency. Between the World and Me, which so frankly and forcefully embodied the rage and justifiable frustration of an historically oppressed people with a rising set of expectations, rhetorically homed in on a single (mostly but not entirely late-phase) blue note in Baldwin’s catalogue of sonorities. If there is a problem here, it is not that Coates’ version of Baldwin rings altogether false. But it is tendentiously selective. It is a simplifying and coarsening distillation of a versatile and multifaceted writer, a supple and self-contradictory writer, into a single dark and haranguing register. In the process we are made to sacrifice a large amount of the complexity that made the author of Giovanni’s Room and Another Country so special and difficult to pin down. Baldwin is revered, but he is lost.

Consider also that Oscar-nominated Baldwin documentary, I Am Not Your Negro. Though a decade in the making, the project arrived amid and helped to define the Baldwin renaissance. The film takes as its impetus Baldwin’s thirty-page unfinished manuscript, Remember This House, which he described in a letter to his agent in 1979 as an exploration of race in America told through the assassinations of three prominent Civil Rights leaders: Medgar Evers, Malcolm X, and Martin Luther King, Jr. Onto this frame Peck grafts footage of Baldwin at roundtables and debates, familiar and jarring archival clips of violent white reaction to Civil Rights progress, such as school and bus integration, as well as contemporary shots of charged police confrontations with activists in Ferguson and elsewhere. There are no interviews with scholars and experts, no talking heads. Peck calculates correctly that Baldwin’s words alone will carry the film (he is the sole writer credited on the project), whether spoken directly or read with understated authority by the actor Samuel L. Jackson. The effect is exhilarating — Baldwin’s language is always captivating and lucid; he needs no translation or amplification. Even the wildly charismatic Jackson refrains from any attempt to compete with the words that he reads, which were written by a former child preacher in Harlem who was one of the few great writers in recent memory to be an equal or better public speaker, a distinction that the film makes thrillingly apparent.

    Yet I Am Not Your Negro inadvertently makes manifest some of the incongruities between the smooth new radical mythology of the writer and the man as he actually existed and co-existed with the cultural forces and major personalities of his era. Though it purports to tease out important connections — “I want these three lives to bang against each other,” Baldwin writes of the project — we learn very little about the relationship between him and the trio of martyrs he set out to examine in Remember This House. This is both because those leaders, while they knew and understood each other, did not really constitute a fraternity of any sort, and also — perhaps more importantly — because it can be expedient to avoid the complexity and contradictions of Baldwin’s own insecure position within the actually existing black America, to and from which he remained throughout his adulthood a permanent “transatlantic commuter.” 

Of the three, he may have experienced the most straightforward fellowship with the Mississippi activist Medgar Evers, the youngest of the group and the first to be murdered. Malcolm X was explicit, however, that what he sought was a “real” revolution, not the “pseudo revolt” of someone like James Baldwin. And Martin Luther King, Jr., as Douglas Field shows in All Those Strangers: The Art and Lives of James Baldwin, once balked — in a conversation taped by the F.B.I. — at appearing alongside the writer on television, claiming to be “put off by the poetic exaggeration in Baldwin’s approach to race issues.” It is hard to imagine that he could have been unaware that Baldwin was being denigrated as “Martin Luther Queen” in civil-rights circles.

Baldwin himself was understandably eager to emphasize and even embellish his connection to such extraordinary and sacrificial figures, especially King, but their realities were highly incommensurate on a variety of levels. In his memoir No Name in the Street, in 1972, there is a revealing set piece in which Baldwin writes about buying a nice dark-blue suit for a scheduled appearance with King at Carnegie Hall. Two weeks later, after the latter was brutally assassinated, it would be Baldwin’s attire for his funeral. Early in the Peck film we hear Baldwin worry over his role as a “witness” and not an “actor” in the convulsions of his time, only to resolve the apparent discrepancy by declaring that the two roles are separated by a “thin line indeed.” In his attempts to write himself over that line and into proximity with men like Evers, King, and Malcolm and by extension into the center of the civil rights struggle — to collapse that space between man, action, and work — Baldwin at once underestimated a crucial distinction (as well as his own specialness) and betrayed his insurmountable distance from all of them. Darryl Pinckney, in a review of the Library of America’s edition of Baldwin’s writings, kindled to Baldwin’s comment to a newspaper journalist that he would never be able to wear that suit again:

    A friend of Baldwin’s, a US postal worker whom he rarely saw, had seen the newspaper story and, because they were the same size, asked for the suit that to Baldwin was “drenched in the blood of all the crimes of my country.” Baldwin went up to Harlem in a hired “Cadillac limousine” in order to avoid the humiliation of watching taxis not stop for him, a black man. His life came into the “unspeakably respectable” apartment of his friend like “the roar of champagne and the odor of brimstone.” He characterizes himself as he assumes he must have appeared to his friend’s family: “an aging, lonely, sexually dubious, politically outrageous, unspeakably erratic freak.”

    His friend had also “made it” — holder of a civil-service job; builder of a house next to his mother’s on Long Island. Baldwin was incredulous that his friend had no interest in the civil rights struggle. They got into an argument about Vietnam. Baldwin says he realized then that the suit belonged to his friend and to his friend’s family. “The blood in which the fabric of that suit was stiffening was theirs,” and the distance between him and them was that they did not know this.

    The story is tortured and yet, regardless of Baldwin’s outrage at indifference or his identification with slain civil rights leaders, there is something wrongly insinuating about his depicting his scarcely worn suit as drenched and stiffening with blood, even metaphorical blood. People still remember what Jesse Jackson’s shirt looked like after King was shot.

This slightly frivolous side of Baldwin can just be glimpsed in I Am Not Your Negro (and is almost totally absent from the new hagiography). “I was never in town to stay,” he admits in the film, and after Evers’ death we do hear Jackson read, “Months later, I was in Puerto Rico, working on a play,” as the camera reveals a sparkling beachscape. But he assumes his comparative privilege in No Name in the Street, where he notes that, when King was murdered, he was ensconced in Palm Springs, working on an unrealized screenplay for The Autobiography of Malcolm X. After the emotional and rhetorical shift to Black Power at the end of the ’60s, many of Baldwin’s contemporaries and descendants wrote him off — much the same way that intellectuals and radicals in Algeria and Paris turned their backs on Camus — considering him too enamored of his own voice and far too comfortable in the white world. No Name in the Street, like much of Baldwin’s later output, can be read as a kind of overture to these critics, a capitulation to the new rules of engagement.

    “I was in some way in those years, without realizing it, the great white hope of the great white father,” Baldwin concedes. “I was not a racist, or so I thought. Malcolm was a racist, or so they thought. In fact we were simply trapped in the same situation.” In actual fact their situations were very different and those differences are worth thinking through — not wishing away — because they help to explain why their worldviews differed, too. Baldwin was in London when Malcolm was murdered. In the epilogue of No Name in the Street, just a beat after he writes that “the Western party is over, and the white man’s sun has set. Period,” he signs off “New York, San Francisco, Hollywood, London, Istanbul, St. Paul de Vence.” Unlike Malcolm X, there were plenty of lovely and welcoming places where James Baldwin could go, Pinckney mordantly notes, “to remind himself that he felt trapped.”

    Yet he did not invent his own marginality. It is no exaggeration to say that he was in some crucial ways homeless. In 1950, with a reasoning that anticipates the desire of today’s #ADOS movement to disentangle the all-American experience of descendants of slaves from any larger pseudo-biological notion of international blackness — to say nothing of that infinitely fuzzier category “people of color” — Baldwin wrote in his essay “Encounter on the Seine” that “they face each other, the Negro and the African, over a gulf of three hundred years — an alienation too vast to be conquered in an evening’s good will, too heavy and too double-edged ever to be trapped in speech.” In Paris, he discovered what he could not recognize under the specific conditions of racial bigotry in New York City, and what he could never entirely disavow once he had experienced it: “I proved, to my astonishment, to be as American as any Texas G.I. And I found that my experience was shared by every American writer I knew in Paris.”

     

That revelation comes in Nobody Knows My Name, his phenomenal second essay collection: “Like me, they had been divorced from their origins, and it turned out to make very little difference that the origins of white Americans were European and mine were African — they were no more at home in Europe than I was.” This is the Baldwin that the new revival has tended to gloss over or outright ignore. It is what distinguishes Baldwin from so many of his contemporaries and ours. This is the mature Baldwin, the wise Baldwin, the Baldwin who seethes at injustice but is not duped by the excesses of radicalism. It is the writer whose message — while not quite tailor-made to sell sweatshirts — is ultimately persuasive and always necessary. There can be an uncanny Benjamin Button sense to reading Baldwin in chronological order: it can feel as if the young man and not the elder is the all-accomplished, all-knowing sage. Here is that young-old man in his astonishing debut collection, Notes of a Native Son, recalling his birthday in 1943, which also happened to be the day that his father died and his sister was born. Riots in Harlem had erupted after a white police officer and a black soldier clashed in a hotel lobby in a dispute over a woman:

    Negro girls, white policemen, in or out of uniform, and Negro males — in or out of uniform — were part of the furniture of the lobby of the Hotel Braddock and this was certainly not the first time such an incident had occurred. It was destined, however, to receive an unprecedented publicity, for the fight between the policeman and the soldier ended with the shooting of the soldier. Rumor, flowing immediately to the streets outside, stated that the soldier had been shot in the back, an instantaneous and revealing invention, and that the soldier had died protecting a Negro woman. The facts were somewhat different — for example, the soldier had not been shot in the back, and was not dead, and the girl seems to have been as dubious a symbol of womanhood as her white counterpart in Georgia usually is, but no one was interested in the facts. They preferred the invention because the invention expressed and corroborated their hates and fears so perfectly.

    Later in the essay, in words he would live by to the end, he writes, “In order really to hate white people, one has to blot out so much of the mind — and the heart — that this hatred becomes an exhausting and self-destructive pose.” And he continues, magnificently: “That bleakly memorable morning I hated the unbelievable streets and the Negroes and whites who had, equally, made them that way. But I knew that it was folly, as my father would have said, this bitterness was folly. It was necessary to hold on to the things that mattered. The dead man mattered, the new life mattered; blackness and whiteness did not matter; to believe that they did was to acquiesce in one’s own destruction.”

    I would like to believe that Baldwin never grew out of such views, that he remained an outsider — a peripheralist, as my own father might say — his entire life; and that this is one of the reasons he lived out his final seventeen years in Provence and could never quite bring himself back to America. He paid huge costs to remain semi-aloof, one of which might be the risk of permanent misunderstanding, even in his posthumous homecoming — but I am convinced that this ability to stand apart, this refusal to be completely subsumed and taken over by any group or collectivity, is what ultimately spared him from the all-consuming identity myopia that plagued his era and now plagues ours. He was not a Black Muslim or a Black Panther, he observed, “because I did not believe all white people were devils and I did not want young black people to believe that.” The simple decency of that sentence still holds the power to shock. It is the kind of correct-to-the-point-of-seeming-naïve insight that puts me in mind of Camus, the belief of a naturally humane and moral man, which we are desperately in need of in this age of opportunism and distrust.

None of this is to imply that Baldwin was ever less than lucid about the nature and tenacity of American racism. Baldwin in his nobility was nobody’s fool. One of the most powerful sequences in I Am Not Your Negro is instructive about what makes him, today, such an irresistible figure. Here at last we see him in crackling black-and-white in the company of two of the three martyrs. Here we encounter the “conjunction of man, action, and work” of which Sontag spoke. On a panel moderated by the sociologist E. Franklin Frazier — there was so much aggregated brilliance and iconography assembled there! — a weary-looking King and an implacable Malcolm appear as dignified props for an immensely thoughtful Baldwin, who speaks stirringly of the “vast, heedless, unthinking, cruel white majority.” Peck cuts to recent black-and-white images of contemporary American police on a war footing, storming through the streets of Ferguson. “I’m terrified at the moral apathy,” Baldwin says, “these people have deluded themselves for so long that they really do think I’m not human. It means that they have themselves become moral monsters.” Now the screen floods with color as nostalgic mid-century shots of an all-white beauty pageant, and young white women frolicking in spotless ensembles against a radiant blue sky, wash over the viewer. The dissonance of the juxtaposition is excruciating, undeniable.

    How are we ever to find our way out of this conundrum? Baldwin hit upon some of the answers. Late in life he seemed to return to a complex understanding of struggle that contrasts with the victim-oppressor binary to which the discourse that overtook him adheres. “It seemed to me that if I took the role of a victim then I was simply reassuring the defenders of the status quo,” he told The Paris Review shortly before he died. “As long as I was a victim they could pity me and add a few more pennies to my home-relief check. Nothing would change in that way. . . . It was beneath me to blame anybody for what happened to me.” And in “Letter from a Region in My Mind,” his essay in The New Yorker in 1962 that became The Fire Next Time, he was even clearer. “For the sake of one’s children, in order to minimize the bill that they must pay, one must be careful not to take refuge in any delusion,” he wrote. “And the value placed on the color of the skin is always and everywhere and forever a delusion,” he continued. “I know that what I am asking is impossible. But in our time, as in every time, the impossible is the least that one can demand.”

    A dozen years later the Israeli-Palestinian writer Emile Habibi coined the wonderful term “pessoptimist” for the title of a satirical novel. I cannot think of a better way to describe the mottled sensibility and variegated conscience that Baldwin brought to black American life and letters. He was repulsed by the stark, cliché-ridden, and fatalistic “Afro-pessimism” that we have become conditioned to espouse, and to tweet; nor was his understanding of race anything like the Panglossian self-hating optimism for which contemporaneous critics such as Eldridge Cleaver excoriated him. To reduce him to either pole in Habibi’s paradox is as irresponsible as it is boring. A great deal hangs on the proper interpretation of James Baldwin’s work and legacy. Even more than Malcolm X or Martin Luther King, Jr., and certainly more than Ralph Ellison, his principal African American rival in talent, James Baldwin has become one of the primary arenas in which the most urgent questions — the meanings of the past, the possibilities of the future — of black American life are being contested today. These are not idle feuds. The stakes of getting his reputation right extend well beyond literary disputations.

    Last May, the excruciating videotaped killing of George Floyd, a forty-six-year-old black man in Minneapolis on whose neck a white police officer kneeled for nearly nine minutes, was yet another brutal and galvanizing cause for pessimism, as Baldwin would rightly have told us. It is at once astonishing and unbearable that our society (and not just white society, as George Zimmerman and other killers “of color” grimly attest) can still produce so many instances of appalling cruelty and injustice, instances which disproportionately target blacks. And yet even as we condemn such evil, our indignation cannot support a total or unending negativity. Baldwin would have admonished us about this, too. It would be just as disastrous a misjudgment of the schizophrenic American reality to argue that nothing (or next to nothing) has changed, that “lynchings” continue to define the black experience some two decades into the twenty-first century, as it would be to dismiss the very specific and incontrovertible familiarity and dread with which so many black Americans viewed that stomach-turning footage from Minneapolis. What is so challenging — but all the more essential for its difficulty — for its absurdity, you could say — is to keep in mind two competing ideas simultaneously. The fight for justice must not end merely in blind revenge or catharsis. The struggle demands not just fury and resentment, but also hope and wisdom. 

    In maintaining such ambiguity, in defending such complexity, we are left with a single abiding truth: evil is always with us because it is one of the permanent conditions of humankind. Black people — like all other peoples forced to recognize up close the mixed-up character of life, its inextricable tangle of lights and darks — must become connoisseurs of pessimism and optimism to equal degrees. In his moral and intellectual capaciousness, Baldwin models this pessoptimistic mentality on and off the page. In this way his work (as opposed to the compressed and glib image that we are increasingly sold) is mimetic of American reality itself — plenty of which may turn out to be irreconcilable in the end, but none of which is ever enough to justify a single response in every season. Whatever our way out of our racial pain, it will be complicated and fitful and without fully satisfying once-and-for-all resolutions. As with the context that created him, it is not necessary or even desirable to admire everything that James Baldwin said or did. But he exists to discomfit us, and to call us beyond tidy conclusions and easy emotions. He is forever inconvenient, which is why he is exactly what we need.

    The Indian Tragedy

    Earlier this year, the Republic of India turned seventy. On January 26, 1950, the country adopted a new Constitution, which severed all ties with the British Empire, mandated multi-party democracy based on universal adult franchise, abolished caste and gender distinctions, awarded equal rights of citizenship to religious minorities, and in myriad other ways broke with the feudal, hierarchical, and sectarian past. The chairman of the Drafting Committee was the great scholar B. R. Ambedkar, himself a “Dalit,” born into the lowest and most oppressed strata of Indian society, and representative in his person and his beliefs of the sweeping social and political transformations that the document promised to bring about.

    The drafting of the Constitution took three whole years. Between December 1946 and December 1949, its provisions were discussed threadbare in an Assembly whose members included the country’s most influential politicians (spanning the ideological spectrum, from atheistic Communists to orthodox Hindus and all shades in between) as well as leading economists, lawyers, and women’s rights activists. When these deliberations concluded, and it fell to Ambedkar to introduce the final document — with 395 Articles and 12 Schedules, the longest of its kind in the history of the democratic world — to the Assembly, he issued some warnings, of which at least one was strikingly prophetic. He invoked John Stuart Mill in asking Indians not “to lay their liberties at the feet of even a great man, or to trust him with powers which enable him to subvert their institutions.” There was “nothing wrong,” said Ambedkar, “in being grateful to great men who have rendered life-long services to the country. But there are limits to gratefulness.” His worry was that “for India, bhakti, or what may be called the path of devotion or hero-worship, plays a part in its politics unequalled in magnitude by the part it plays in the politics of any other country. Bhakti, in religion, may be a road to the salvation of the soul. But in politics, bhakti or hero-worship, is a sure road to degradation and to eventual dictatorship.”

    When he spoke those words, Ambedkar may have had the possible deification of the recently martyred Mahatma Gandhi in mind. But his remarks seem uncannily prescient about the actual deification of a later and lesser Gandhi. In the early 1970s, politicians of the ruling Congress Party began speaking of how “India is Indira and Indira is India,” a process that culminated, as Ambedkar had foreseen, in political degradation and eventual dictatorship. In June 1975, Prime Minister Indira Gandhi suspended civil liberties, jailed all opposition politicians, and imposed a strict regime of press censorship. This was a time of fear and terror, which lasted almost two years, and ended when Mrs. Gandhi — provoked in part by criticism from Western liberals and in part by her own conscience — ended the Emergency and called for fresh elections, which she and her party lost.

    If Ambedkar’s warning brings to mind the career of Indira Gandhi, it evokes even more starkly the career of India’s current Prime Minister, Narendra Modi. In terms of their upbringing and ideological formation, no two Indian politicians could be more different than Modi and Mrs. Gandhi. One witnessed enormous hardship while growing up; the other was raised in an atmosphere of social and economic privilege. One had his worldview shaped by the many years he spent in the Hindu supremacist organization, the Rashtriya Swayamsevak Sangh (RSS); the other was deeply influenced by her father, Jawaharlal Nehru, India’s first Prime Minister, who detested the RSS. One has no family; the other had children and grandchildren. One had to work his way up the ladder of Indian politics, step by step; the other had a lateral entry into a high position purely on account of her birth.

    And yet there are significant commonalities. These very different personal biographies notwithstanding, it has long seemed to me that there are striking similarities in their political styles. Back in 2013, I wrote in The Hindu that “neither Mr. Modi’s admirers nor his critics may like this, but the truth is that of all Indian politicians past and present, the person the Gujarat Chief Minister most resembles is Indira Gandhi of the period 1971-77. Like Mrs. Gandhi once did, Mr. Modi seeks to make his party, his government, his administration and his country an extension of his personality.” At the time the article was published, the Chief Minister of the western state of Gujarat was making his national ambitions explicit. Fifteen months later, Narendra Modi became Prime Minister of India, his Bharatiya Janata Party (BJP) winning, under his leadership, the first full majority in Parliament of any party since 1984. Modi’s time in office has seemed to confirm the parallels between him and Indira Gandhi. As she had once done, he cut the other leaders in his party down to size; sought to tame the press; used the civil services, the diplomatic corps and the investigative agencies as political instruments; and corralled the resources of the state to build a personality cult around himself.

    In January 2020, when the Republic of India turned seventy, Narendra Modi was facing his first serious challenge since he became Prime Minister six years earlier. Modi’s ideological formation in the RSS had convinced him that India’s destiny was to be a “Hindu Rashtra” — a theocratic state run by Hindus and in the interests of Hindus alone. In his first term as Prime Minister, Modi had kept these beliefs largely under wraps. But when he was re-elected with a large majority in May 2019, the majoritarian agenda came strongly to the fore. On August 5, 2019, the government of India abrogated Article 370 of the Constitution, which accorded cultural and political autonomy to the state of Jammu and Kashmir. This was done unilaterally, without consulting the people of the state (as the law required). It was a wanton intervention in one of the most dangerous areas of contention in the world. The state of Jammu and Kashmir was abruptly converted into a mere “Union Territory.” It was henceforth to be ruled directly by New Delhi, preparatory to what the rulers of India called a “full integration with the Nation,” which the people of the Kashmir Valley feared would result in an invasion of their land by grasping outsiders and a transformation of this Muslim-majority state into a Hindu colony.

    Worse was to follow. In early December, the Parliament passed the Citizenship Amendment Act (CAA). This sought to give Indian citizenship to people fleeing religious persecution in three countries: Bangladesh, Pakistan, and Afghanistan. The Act was illogical — it ignored the largest group of stateless refugees in India, the Tamils from Sri Lanka; and it was also spiteful, for it had carefully specified that Muslims from any country, however persecuted they might be, would not get refuge in India. Moreover, the Modi government announced that the CAA was to be accompanied by a National Register of Citizens (NRC), which would demand, from everyone living in India, documentary proof of Indian parentage, length of residence in India, and so on. Those who were unable to “prove” to the government’s satisfaction that they had these papers would be declared illegal immigrants. But if they had the good luck to be Hindu, Buddhist, Jain, Sikh, Parsi, or Christian — that is, anything other than Muslim — they could apply to become Indians under the Citizenship Amendment Act. The CAA was a clear violation of Articles 14 and 15 of the Constitution, which promised equality before the law and prohibited discrimination on the grounds of religion. Following on the downgrading of Jammu and Kashmir from full statehood to Union Territory status, the passing of the CAA represented a further — and fuller — ethnonationalist step towards the construction of a Hindu State. Were it to be implemented along with the NRC, as top government ministers had repeatedly threatened, Muslims would become, formally as well as legally, second-class citizens.

    The abrogation of Jammu and Kashmir’s statehood was met with muted protest by intellectuals and human rights activists, and little else. Prime Minister Modi and his hardline Home Minister, Amit Shah, clearly hoped that these new changes in the citizenship laws would likewise go uncontested. They were wrong. There were widespread protests across India, led at first by students, but then with a wide cross-section of the citizenry joining in. Elderly Muslim women staged a peaceful sit-in for weeks in South-East Delhi, this act inspiring many similar sit-ins in other cities and towns. The state sought to suppress the protests through colonial-era laws prohibiting gatherings of more than five people, but the non-violent and collective civil disobedience continued. Although the Acts targeted Muslims specifically, many non-Muslims participated in the protests, outraged at this wholesale stigmatization of their fellow citizens merely on account of their faith. The countrywide upsurge within India was accompanied by widespread condemnation of the Modi Government in the international press. This intensified when President Donald Trump visited India in late February, his visit coinciding with religious rioting in Delhi, the country’s capital, in which radical Hindus were the main perpetrators and Muslims the main sufferers.

    At this time, it seemed that the degradation of Indian democracy had been arrested. The pushback against the cult of personality and the ideology of Hindu supremacy had begun and seemed as if it might accelerate. Then came the pandemic, and India, and the world, gasped in wonder and horror. I shall return to the consequences of covid19 for my country at the end of my essay. But first I wish to outline the historical roots of the struggle that has been unfolding within India, between the capacious ideals with which the Indian republic was founded and the majoritarian tendency that seeks to replace them. We must begin with the intellectual and moral origins of the Constitutional idea of India, which Narendra Modi and his party wish to consign to the ash heap of history.

    Like the railways, electricity, and the theory of evolution, nationalism was invented in modern Europe. The European model of nationalism sought to unite residents of a particular geographical territory on the basis of a single language, a shared religion, and a common enemy. To be British, you had to speak English, and minority tongues such as Welsh and Gaelic were either suppressed or disregarded. To be properly British you had to be Protestant, which is why the king was also the head of the Church, and Catholics were distinctly second-class citizens. Finally, to be authentically and loyally British, you had to detest France.

    Now, if we go across the Channel and look at the history of the consolidation of the French nation in the eighteenth and nineteenth centuries, we see the same process at work, albeit in reverse. Citizens had to speak the same language, in this case French, so dialects spoken in regions such as Normandy and Brittany were sledgehammered into a single standardized tongue. The test of nationhood was allegiance to one language, French, and also to one religion, Catholicism. So Protestants were persecuted. Likewise, French nationalism was consolidated by identifying a major enemy, although who this enemy was varied from time to time. In some decades the principal adversary was Britain; in other decades, Germany. In either case, the hatred of another nation was vital to affirming faith in one’s own nation.

    This model — a single language, a shared religion, a common enemy — is the model by which nations were created throughout Europe. And it so happens that the Islamic Republic of Pakistan is in this respect a perfect European nation. Pakistan’s founder, Mohammad Ali Jinnah, insisted that Muslims could not live with Hindus, so they needed their own homeland. After his nation was created, Jinnah visited its eastern wing and told its Bengali residents they must learn to speak Urdu, which to him was the language of Pakistan. And, of course, hatred of India has been intrinsic to the idea of Pakistan since its inception. 

    Indian nationalism, however, radically departed from the European template. The greatness of the leaders of our freedom struggle — and Mahatma Gandhi in particular — was that they refused to identify nationalism with a single religion. They further refused to identify nationalism with a particular language, and — even more remarkably — they refused to hate their rulers, the British. Gandhi lived and died for Hindu-Muslim harmony. He liked to emphasize the fact that his party, the Indian National Congress, had presidents who were Hindu, Muslim, Christian, and Parsi. Nor was Gandhi’s nationalism defined by language. As early as the 1920s, Gandhi pledged that when India became independent, every major linguistic group would have its own province. But perhaps the most radical aspect of the Indian model of nationalism was that hatred of the British was not intrinsic to it. Indian patriots detested British imperialism, they wanted the Raj out, they wanted to reclaim this country for its residents — but they did so non-violently, and while befriending individual Britons. (Gandhi’s closest friend was the English priest C.F. Andrews.) Moreover, they wished to get the British to ‘Quit India’ while retaining the best of British institutions. An impartial judiciary, parliamentary democracy, the English language, and not least the game of cricket: these are all aspects of British culture that Indians sought to keep after the British had themselves left.

    British, French, and Pakistani nationalism were based on paranoia, on the belief that all citizens must speak the same language, adhere to the same faith, and hate the same enemy. Indian nationalism, by contrast, was based on a common set of values. During the non-cooperation movement of 1920-1921, people all across India came out into the streets, gave up jobs and titles, left their colleges, and courted arrest. For the first time, the people of India had the sense, the expectation, the confidence that they could create their own nation. In 1921, when non-cooperation was at its height, Gandhi defined Swaraj (Freedom) as a bed with four sturdy bed-posts. The four posts that held up Swaraj, he said, were non-violence, Hindu-Muslim harmony, the abolition of untouchability, and economic self-reliance.

    When the Republic of India was created in 1950, its citizens sought to be united on a set of ideals: democracy, religious and linguistic pluralism, caste and gender equality, and the removal of poverty and discrimination. The basis of citizenship was adherence to these values, not to a single language, a shared religion, or a common enemy. I would describe this founding model of Indian nationalism as constitutional patriotism, because it is enshrined in our Constitution. Its fundamental features are outlined below.

    The first feature of constitutional patriotism is the acknowledgement and appreciation of our inherited and shared diversity. In any major gathering in a major city — say, at a music concert or a cricket match — the people who compose the crowd carry different names, wear different clothes, eat different kinds of food, worship different gods (or no god at all), speak different languages, and fall in love with different kinds of people. They are a microcosm not just of what India is, but of what its founders wished it to be. For the founders of the Republic had the ability (and the desire) to endorse and emphasize our diversity. Multiethnicity was not the problem; it was the solution. As the poet Rabindranath Tagore once said about my country, “no one knows at whose call so many streams of men flowed in restless tides from places unknown and were lost in one sea: here Aryan and non-Aryan, Dravidian, Chinese, the bands of Saka and the Hunas and Pathan and Mogul, have become combined in one body.” An appreciation of this rich inner diversity means that we understand that no type of Indian is superior or special because they belong to a particular religious tradition or because they speak a certain language. Patriotism was defined by allegiance to the values of the Constitution, not by birth, blood, language or faith.

    The stress on cultural diversity and religious pluralism was all the more remarkable because it came in the wake of the savage rioting of Partition. Gandhi and the Congress had hoped for a united India, but in the event, when the British left in August 1947, they divided the country into two sovereign nations, India and Pakistan. The division was accompanied by ferocious clashes between Hindus and Muslims, in which an estimated one million people died and more than ten million people were made into refugees. But Pakistan was explicitly created as a homeland for Muslims, whereas India resolutely refused to define itself in majoritarian terms. As the country’s first Prime Minister, Jawaharlal Nehru, wrote to the Chief Ministers of States in 1947, “We have a Muslim minority who are so large in numbers that they cannot, even if they want to, go anywhere else. They have got to live in India. … Whatever the provocation from Pakistan and whatever the indignities and horrors inflicted on non-Muslims there, we have got to deal with this minority in a civilized manner. We must give them security and the rights of citizens in a democratic State.”

    The second feature of constitutional patriotism is that it operates at many levels. Like charity, it begins at home. It is not just worshipping the national flag that makes you a patriot. It is how you deal with your neighbors and your neighborhood, how you relate to your city, how you relate to your state. In America, which is professedly one of the most patriotic countries in the world, every state has its own flag. And some states of India also have their own flag, albeit informally. Every November 1, when the anniversary of the formation of my home state, Karnataka, is celebrated, a red-and-yellow flag is unfurled in many parts of the state. It is not Anglicized upper-class elites such as myself who display the state flag of Karnataka, but shopkeepers, farmers, and autorickshaw drivers.

    Patriotism can operate at multiple levels. The Bangalore Literary Festival (which is not sponsored by large corporations but is crowd-funded) is an example of civic patriotism. The red-and-yellow flag of Karnataka is an example of provincial patriotism. Cheering for the Indian cricket team is an example of national patriotism. Such patriotism thus operates at more than one level — the locality, the city, the province, the nation. A broad-minded (as distinct from paranoid) patriot recognizes that these layered affiliations can be harmonious and complementary, reinforcing one another.

    The model of patriotism advocated by Gandhi and Tagore was not centralized but disaggregated. And it helped make India a diverse and united nation. Look at what is happening in Spain today. Why are so many Catalans keen on a nation of their own? Because they believe that they have been denied the space and the freedom to honorably have their own language and culture within a united Spain. The centralized Spanish state came down so hard that the Catalans had a referendum in which many of them insisted upon nothing less than independence. Had the Spanish state been founded and run on Indian principles, this might not have happened. Had Pakistan not imposed Urdu on Bengalis, it might not have split into two nations a mere quarter of a century after independence. Had Sri Lanka not imposed Sinhala on the Tamils, that country might not have experienced thirty years of ethnic strife. India has escaped civil war and secession because its founders wisely did not impose a single religion or single language on its citizens.

    One can be a patriot of Bangalore, Karnataka, and India — all at the same time. Yet the notion of a world citizen is false. The British-born Indian J.B.S. Haldane put it this way: “One of the chief duties of a citizen is to be a nuisance to the government of his state. As there is no world state, I cannot do this…. On the other hand I can be, and am, a nuisance to the government of India, which has the merit of permitting a good deal of criticism, though it reacts to it rather slowly. I also happen to be proud of being a citizen of India, which is a lot more diverse than Europe, let alone the U.S.A., USSR or China, and thus a better model for a possible world organization. It may, of course, break up, but it is a wonderful experiment. So I want to be labelled as a citizen of India.” A citizen of India can vote in local, provincial and national elections. In between elections he or she can affirm that citizenship (at all these levels) through speech and (non-violent) action. But global citizenship is a mirage, or a cop-out. It is only those who cannot or will not identify with locality, province, or nation who accord themselves the fanciful and fraudulent title of “citizen of the world.”

    The third feature of constitutional patriotism, and this again comes from people such as Gandhi and Tagore, is the recognition that no state, no nation, no religion, and no culture is perfect or flawless. India is not superior to America necessarily, nor is America superior to India necessarily. Hinduism is not superior to Christianity necessarily, nor is Islam superior to Judaism necessarily. The fourth feature is this: we must have the ability to feel shame at the failures of our state and society, and we must have the desire and the will to correct them. The most egregious aspects of Indian culture and society are discrimination against women and the erstwhile “Untouchable” castes. A true patriot must feel shame about them. That is why our Constitution abolished caste and gender distinctions. Yet these distinctions continue to pervade everyday life. Unless we continue to feel shame, and act accordingly, they will persist.

    The fifth feature of constitutional patriotism is the ability to be rooted in one’s culture and one’s country while being willing to learn from other cultures and other countries. This, too, must operate at all levels. Love Bangalore but think what you can learn from Chennai or Hyderabad. Love Karnataka, but think what you can learn from Kerala or Himachal Pradesh. Love India, but think of what you can learn from Sweden or Canada. Here is Tagore, in 1908: “If India had been deprived of touch with the West, she would have lacked an element essential for her attainment of perfection. Europe now has her lamp ablaze. We must light our torches at its wick and make a fresh start on the highway of time. That our forefathers, three thousand years ago, had finished extracting all that was of value from the universe, is not a worthy thought. We are not so unfortunate, nor the universe so poor.” And here is Gandhi, thirty years later: “In this age, when distances have been obliterated, no nation can afford to imitate the frog in the well. Sometimes it is refreshing to see ourselves as others see us.”

    As a patriotic Indian, I believe that we must find glory in the illumination of any lamp lit anywhere in the world.

    The crisis of contemporary India may be described succinctly: the model of constitutional patriotism is now in tatters. It is increasingly being replaced by a new model of nationalism, which prefers and promotes a single religion, Hinduism, and proclaims that a true Indian is a Hindu. This new model also elevates a single language — Hindi. It insists that Hindi is the national language, and whatever the language of your home, your street, your state, you must speak Hindi also. Thirdly, this model luridly presents a common external enemy — Pakistan.

    Whether they acknowledge it or not, those promoting this new model of Indian nationalism are borrowing (and more or less wholesale) from nineteenth-century Europe, where nationalism, for all its cultural riches, culminated in disaster. And to the template of a single religion, a single language, and a common enemy they have added an innovation of their own — the branding of all critics of their party and their leader as “anti-national.” This scapegoating comes straight from the holy book of the RSS, M.S. Golwalkar’s Bunch of Thoughts, which appeared in 1966. In his book Golwalkar identified three “internal threats” to the nation — Muslims, Christians, and Communists. Now, I am not a Muslim, a Christian, or a Communist, but I have nonetheless become an enemy of the nation. This is so because any critic, any dissenter, anyone who upholds the old ideal of constitutional patriotism, is considered by those in power and their cheerleaders to be an enemy of the nation.

    In the wonderful Hindi film Newton, one character says, “Ye desh danda aur jhanda se chalta hai,” the stick and the flag define this country. This line beautifully captures the essence of a paranoid and punitive form of nationalism, based on the blind worship of the sole and solitary flag, and on the use of the stick to harass those who do not follow or obey you. This new nationalism in India is harsh, hostile, and unforgiving. The name by which it should be known is certainly not patriotism, and not even nationalism. It should be called jingoism.

    The dictionary defines a patriot as “a person who loves his or her country, especially one who is ready to support its freedoms and rights and to defend it against enemies or detractors.” Note the order: love of country first, support of freedom and rights second, and defense against enemies last. And what is the dictionary definition of jingoist? One “who brags of his country’s preparedness for fight, and generally advocates or favors a bellicose policy in dealing with foreign powers; a blustering or blatant ‘patriot’; a Chauvinist.” The order is reversed: first, boasting of the greatness of one’s country; then advocating attacking other countries. No talk of rights or freedom, or of love either. Patriotism and jingoism are antithetical varieties of nationalism. Patriotism is suffused with love and understanding. Jingoism is motivated by hatred and revenge.

    I have already outlined the founding features of constitutional patriotism. What are the founding features of jingoism? First, the belief that one’s religion, culture, and nation (and leader) are perfect and infallible. Second, the demonization of critics as anti-nationals and Fifth Columnists. Rather than engage critics in debate, hyper-nationalists harass and intimidate them, through the force of the state’s investigating agencies and through vigilante armies if required.

    In recent years, Indian nationalism has been captured by its perverted jingoist version. But the country remains some sort of democracy, where the jingoist version is popular among a large section of the population and has been brought to power through the ballot box. How did this come to pass? Why is it that the party of the Hindu Right has so many supporters in India today?

    I believe there are four major reasons why jingoism is ascendant in India, while constitutional patriotism is in retreat. The first is the hostility of the Indian left to our national traditions. The Communist parties are still an important political force in India. They have been in power in several states. Their supporters have historically dominated some of our best universities, and been prominent in theater, art, literature, and film. But the Indian left, sadly and tragically, is an anti-patriotic left. It has always loved another country more than its own.

    That country used to be the Soviet Union, which is why our Communists opposed the Quit India Movement, and launched an armed insurrection on Stalin’s orders in 1948, immediately after Gandhi was murdered. Later the country that the Communists loved more than India was China; and so they refused to take their homeland’s side in the border war of 1962. Still later, when the Communists became disillusioned with both the Soviet Union and China, they pinned their faith on Vietnam. When Vietnam failed them, it became Cuba; when Cuba failed them, it became Albania. When I was a student in Delhi University, there was a Marxist professor who taught that Enver Hoxha was a greater thinker than Mahatma Gandhi. But then Albania failed, too. So now the foreign country that our comrades love more than India is — what else? — Venezuela. The late (and by me unlamented) Hugo Chavez was venerated on the Indian left. If you think Modi is authoritarian, then Chavez was Modi on steroids — the ur-Modi. The megalomaniac Chavez destroyed the Venezuelan economy and Venezuelan democracy, and yet he continued to be worshipped by Indian leftists young and old.

    The degradation of patriotism in India has also been abetted by the corruption of the Congress Party. The great party which led India’s freedom movement has in recent decades been reduced to a single family. I have spoken of how the Left chooses its icons, but in some ways the Congress is even worse. When it was in power, it named everything in sight after Jawaharlal Nehru or his daughter or his grandson. Why couldn’t the new Hyderabad international airport have been named after the Telugu composer Tyagaraja or the Andhra patriot T. Prakasam? Why Rajiv Gandhi? Likewise, when the new sea link in Mumbai had to be given a name, why couldn’t the Congress consider Gokhale, Tilak, Chavan, or some other great Maharashtrian Congressman? Why Rajiv Gandhi again?

    Many, indeed most, of the icons of the national movement belonged to the Congress party. But the Congress has abandoned and thrown them away because it is only Nehru, Indira, Rajiv, Sonia, and now Rahul that matter to them. (The only Congressman outside the family they are willing to acknowledge is Mahatma Gandhi, because even they can’t obliterate him from their party’s history.) If someone like Hugo Chavez is adored so much by Indian leftists, then obviously this helps the jingoists — and likewise, when Congress governments named all major schemes and sites after a single family, ignoring even the great Congress patriots of the past, that gave a handle to the jingoists, too. The corrupt and sycophantic culture of the Congress Party is a disgrace. When I made a sarcastic remark on Twitter about Rahul Gandhi becoming Congress president, someone put up a chart listing the presidents of the BJP since 1998 — Bangaru Laxman, Jana Krishnamurthi, L.K. Advani, Rajnath Singh, and so on, the last name on the list being Amit Shah, followed by “party worker,” whereas the presidents of the Congress in the same period were “Sonia Gandhi, Sonia Gandhi, Sonia Gandhi…Rahul Gandhi….”

    A third reason for India’s jingoist fate is, of course, that jingoism is a global phenomenon, manifest in the rise of Trump, Brexit, Le Pen, Erdogan, Putin, Bolsonaro, Orban, and the rest, all of whom pursue a xenophobic, paranoid, often hateful form of nationalism. The rise of such narrow-minded nationalism elsewhere encourages the rise of jingoism in India to match or rival it, and friendships between the authoritarians are naturally formed. And finally we must note the rise of Islamic fundamentalism in our own backyard. Over the decades, the state and society of Pakistan have become dangerously and outrageously Islamist. Once they persecuted Hindus and Christians; now they persecute Ahmadiyyas and Shias, too. And Bangladesh is also witnessing a rising tide of violence against religious minorities. Since religious fundamentalisms are rivalrous and competitive, every act of violence against a Hindu in Bangladesh motivates and emboldens those who want to persecute Muslims in India.

    The Bharatiya Janata Party, Modi’s party, and its mother organization, the RSS, claim to be authentically Indian, and damn the rest of us as foreigners. Intellectuals such as myself are dismissed as bastard children of Macaulay, Marx, and Mill. As an historian, however, I would say that it is the ideologues of the RSS who are the true foreigners. Their model of nationalism — one religion, one language, one enemy — is foreign to the Indian nationalist tradition, to the Gandhian model of nationalism which was an innovative indigenous response to Indian conditions, designed to take account of cultural diversity and to tackle caste and gender inequality.

    If the RSS model of nationalism is inspired by Europe, their model of statecraft is Middle Eastern in origin. From about the eleventh to the sixteenth century, there were states where monarchs were Muslims and the majority of the population was Muslim, but a substantial minority was non-Muslim, composed mainly of Jews and Christians. In these medieval Islamic states, there were three categories of citizens. The first-class citizens were Muslims, who prayed five times a day and went to mosque every Friday, and who believed that the Quran was the word of God. The second-class citizens were Jews and Christians whose prophets were admired by Muslims, as preceding Mohammed, the last and the greatest prophet. Third-class citizens were those who were neither Jews nor Christians nor Muslims. These were the unbelievers, the Kafirs.

    In medieval Muslim states, Jews and Christians, the ‘People of the Book’, were defined as ‘Dhimmi’, which in Arabic means ‘protected person’. As a protected person, they had certain rights. They could go to the synagogue or church; they could own a shop; they could raise a family. But other rights were denied them. They could not enroll in the military, serve in the government, be a minister or prime minister. Nor, unlike Muslims, could they convert other citizens to their faith. Such was the second-class status of Jews and Christians in medieval Islam. This model was applied in Medina and Andalusia, and in Ottoman Turkey. While Kafirs (including Hindus) had to be suppressed and subdued, Jews and Christians could practice their profession and raise their family, so long as they did not ask for the same rights as Muslims. 

    This is precisely how the Hindu Right wants to run politics in the Republic of India today. Muslims in modern India must be like the Jews and Christians of the medieval Middle East. If Muslims accept the theological, political and social superiority of Hindus they shall not be persecuted or killed. But if they demand equal rights they might be.

    The new jingoism in India is a curious mixture of outdated ideas of nationalism and profoundly anti-democratic ideas of citizenship. And yet it finds wide acceptance. But its popularity does not mean that we should surrender to it, or that it is legitimate, or that it is genuinely Indian. For the Republic of India is an idea as well as a physical and demographic entity. Those of us who are constitutional patriots must continue to stand up for the values on which our nation was nurtured, built and sustained. If the BJP and the RSS continue unchecked and unchallenged, they will destroy India, culturally as well as economically.

    The political and ideological battle in India today is between patriotism and jingoism. The battle is currently asymmetrical, because the jingoists are in power, and because they have a party articulating and imposing their views. The constitutional patriotism of Gandhi, Tagore, and Ambedkar has no such party active today. The Communists followed Lenin and Stalin rather than Gandhi and Tagore, and the Congress has turned its back on its own founders. But while Indian patriots may not currently have a credible party to represent them, they are — as the protests in December 2019 and January 2020 showed — willing to carry on the good fight for constitutional values even in its absence. Those protests admirably demonstrated that citizenship is an everyday affair. It is not just about casting your vote once every five years. It is about affirming the values of pluralism, democracy, decency, and non-violence every day of our lives.

    It was ordinary citizens, not opposition parties, who presented the Modi government with the first major challenge since it came to power in 2014. The challenge was political, it was moral, it was constitutional. But then came the pandemic, and the balance shifted once more, back in favor of the ruler and the regime.

    In the beginning of this essay I spoke of how Narendra Modi’s was the second great personality cult in the history of the Indian republic. The first, that of Indira Gandhi, had led to the imposition of a draconian Emergency. When Modi became Prime Minister, I myself had no illusions about his centralizing instincts, yet the historian in me was alert to how the India of 1975 differed from the India of 2014. When the Emergency was imposed by Indira Gandhi, her Congress Party ruled the Central Government in New Delhi, and also enjoyed power — on its own or in coalition — in all major states of the Union except Tamil Nadu. On the other hand, when Narendra Modi became Prime Minister, many states of the Union were outside the control of his Bharatiya Janata Party.

    My hope therefore was that our federal system would serve as a bulwark against full-blown authoritarianism. In Narendra Modi’s first term as Prime Minister, the BJP won elections in some major states while losing elections in other major states. Even after Modi and the BJP emphatically won re-election at the national level in 2019, they could not so easily win power in the state Assembly elections that followed. The anti-CAA protests further strengthened one’s faith in the democratizing possibilities of Indian federalism. Large sections of the citizenry rose up in opposition to a discriminatory act that seemed grossly violative of the Constitution. The Chief Ministers of several large states were also opposed to the new legislation. This seemed like further confirmation that the present was not the past. Indira Gandhi could do what she did only because her party controlled both the Centre and all the states in India (Tamil Nadu’s DMK Government having been dismissed a few months after the Emergency was promulgated). But this was not the case with Modi and his BJP.

    The covid19 pandemic has changed this calculus. It has given Narendra Modi and his government the opportunity to weaken the federal structure and radically strengthen the powers of the Centre vis-à-vis the States. They have used a variety of instruments to further this aim. They have invoked a “National Disaster Management Act” to suspend the rights of States to decide on the movement of peoples and goods, the opening and closing of schools, colleges, factories, public transport, and so on, and to centralize all these powers in the Central Government, effectively in the person of the Prime Minister. They have further postponed the disbursal of funds already due to the States as their share of national tax collections — substantial revenues, amounting to more than Rs 30,000 crores ($4 billion), which, if released, could greatly alleviate popular distress. They have created a new fund at the Centre, the so-called PM-CARES, which discriminates against the States in that it gives special exemptions (to write off donations as “Corporate Social Responsibility”) that are denied to those who wish to donate instead to the Chief Minister’s Fund of their own states. This fund gives the Prime Minister enormous discretionary power in disposing of thousands of crores of rupees as he pleases. The functioning of the fund is shrouded in secrecy, with even the Comptroller and Auditor General not allowed to audit it.

    This heartless exploitation of the covid19 pandemic to weaken federalism has been accompanied by a systematic attempt to further build up the personality cult of the Prime Minister. State-run television, senior Cabinet Ministers, and the ruling party’s IT Cell have all been working overtime to proclaim that only Modi can save India. Even as lives are lost and livelihoods are destroyed by the pestilence, the Prime Minister is going ahead with an expensive plan to redesign India’s capital, New Delhi. This will destroy the historic centre of one of the most beautiful cities in the world, and replace it with a series of concrete and glass blocks. The showpiece of this project is a grand new house for the Prime Minister himself. As one writer has remarked, “the biggest irony remains that a prime minister from the humblest of backgrounds should yearn for a house on Rajpath, no less, to endorse his vision of personal greatness and legacy. Would Emmanuel Macron demand and, more importantly, get a house on the Champs-Élysées? Can even Trump order himself a second home on the Mall?” The Prime Minister’s own justification of the project is that it was to mark not a personal but a national milestone — the seventy-fifth anniversary of Indian independence. This is disingenuous, because past anniversaries overseen by past Prime Ministers had not called for such a spectacular extravaganza. Apparently, what was good enough for Indira Gandhi and I. K. Gujral won’t quite do for the great Narendra Modi.

    The architecture of power reveals a lot about those who wield it, and Modi’s redesign of New Delhi brings to mind not so much living Communist autocrats as it does some dead African despots. It is the sort of vanity project, designed to perpetuate the ruler’s immortality, that Félix Houphouët-Boigny of the Ivory Coast and Jean-Bédel Bokassa of the Central African Republic once inflicted on their own countries. (I refer readers to V. S. Naipaul’s great essay “The Crocodiles of Yamoussoukro.”) And as this wasteful and pharaonic self-indulgence proceeds, an economy that was already flailing has been brought to the brink of collapse by the pandemic. The ill-planned lockdown has led to enormous human suffering. Working-class Indians, already living on the edge, are now faced with utter destitution. In his speeches to the nation since the pandemic broke, the Prime Minister has repeatedly asked Indians to sacrifice — sacrifice their time, their jobs, their lifestyles, their human and cultural tendency to be gregarious. Surely it is past time for citizens to ask the Prime Minister to sacrifice something for the nation as well. Anyway, he won’t.

    When he was first elected Prime Minister in 2014, Narendra Modi said that he wished to redeem India from the thousand years of slavery it had suffered before his election. My son, the novelist Keshava Guha, commented at the time that Modi saw himself as the first Hindu leader to have the entire country under his command. Nehru and Indira — the two prime ministers of comparable popularity before him — were to him fake Hindus, their faith corrupted by their English education and what he and his party saw as an unconscionable partiality towards Muslims. My son is right. Narendra Modi thinks of himself as doing what medieval chieftains such as Shivaji and Prithviraj Chauhan could not do — make the whole country a proud Hindu nation. His followers call him Hindu Hriday Samrat, the Emperor of Hindu Hearts, but it would be more precise to call him Hinduon ka Samrat, an Emperor for and of Hindus. He is, to himself and millions of others, Emperor Narendra the First. The history of personality cults tells us that they are always disastrous for the countries in which they flourished. Narendra Modi will one day no longer be Prime Minister, but when will India recover from the damage he has done to its economy, its institutions, its social life, and its moral fabric?

    The Human Infinity: Literature and Peace

    Writers often talk of the torments of writing, of “the fear of the blank page,” of nights waking in a cold sweat because suddenly they see the weaknesses, the vulnerabilities, of the story that they have been writing, sometimes for years. This distress is certainly real, but I insist also upon the pleasures of creation, of inventing an entire fictional world out of thousands of facts and details. There is a particular kind of wonder that I feel when a character I have invented begins to overtake me, to run ahead and pull me forward: suddenly this imagined character knows more than I do about its own fate, its own future, and also about other characters in the story, and I must learn to follow, to catch up. In a way that I do not fully understand, my invented person infuses me with the materials of life, with ideas, with plot twists, with understandings I never knew I possessed.

    A creative work represents, for me, the possibility of touching infinity. Not mathematical infinity or philosophical infinity, but human infinity. That is, the infinity of the human face. The infinite strings of a single heart, the infinity of an individual’s intellect and understanding, of her opinions, urges, illusions, of his smallness and greatness, her power to create, his power to destroy — the infinity of her configurations. Almost every idea that comes to my mind about the character I am writing opens me up to more and more human possibilities: to a lush garden of forking paths.

    “To be whole, it is enough to exist,” wrote the poet Fernando Pessoa. This wonderful observation pours salt on the wounds of every writer who knows how difficult it is to translate a character born in the imagination into a character that contains even a particle of the Pessoan “wholeness,” even a fraction of the fullness of life that exists in one single second of a living person. It is this wholeness — made up also of infinite flaws, with defects and deficiencies of both mind and body — to which a writer aspires. This is the writer’s wish, this is the writer’s compulsion: to reach that alchemical development at which suddenly, through the use of inanimate matter — symbols arranged on a page in a particular order — we have conjured into being a life. Writers who have written characters and dissolved into them and then come back into themselves; who have come back to find themselves now composed in part of their character; who know that if they had not written these characters they would not truly know themselves — these writers know the pleasures to be found in the sense of life’s fullness that lives inside each of us.

    It is almost banal to be moved by this, but I am: we, each and every one of us, are in fact a plenitude of life. We each contain an infinity of possibilities and ways of being inside life. Yet finally such an observation is not banal at all. It is a truth of which we regularly need to remind ourselves. After all, look how cautiously we avoid living all the abundance that we are, how we dodge so many of the possibilities that are broached by our souls, our bodies, our circumstances. Quickly, at an early age, we ossify, and diminish ourselves into a single thing, a “one,” a this or a that, a clearly delineated being. Perhaps it is our desire not to face this confusing and sometimes deceptive welter within us that makes us lose some part of ourselves.  

    Sometimes the unlived life, the life we could have lived but were unable to live, or did not dare to live, withers inside us and vanishes. At other moments we may feel it stirring within, we may see it before our eyes, and it stings us with regret, with sorrow, with a sensation of squandered chances, with humiliation, even with grief, because something, or someone, was abandoned or destroyed. It might be a passionate love that we renounced in favor of calm. Or a profession wrongly chosen, in which we molder for the rest of our lives. Or an entire life spent in the wrong gender. It could be a thousand and one choices that are not right for us, which we make because of pressures and expectations, because of our fears, our desire to please, our submission to the assumptions and the prejudices of our time.

    Writing is a movement of the soul directed against such a submission, against such an evasion of the abundance within us. It is a subversive movement of the writer made primarily against himself. We might imagine it as a tough massage that the writer keeps administering to the stale muscles of his cautious, rigid, inhibited consciousness. In my own case, writing is a free, supple, easy movement along the imaginary axes between the little boy I still am and the old man I already am, between the man in me and the woman in me, between my sanity and my madness, between my inner Jew-in-a-concentration-camp and my inner commander of that camp, between the Israeli I am and the Palestinian I might have been.

    I remember, for example, the difficulties I experienced when I wrote Ora, the main character in To the End of the Land. For two years I struggled with her, but I was unable to know her completely. There were so many words surrounding her, but they had no living focal point. I had not yet created in her the living pulse without which I cannot believe in — I cannot be — the character I am writing. Finally I had no choice but to do what any decent citizen in my situation would do: I sat down and wrote her a letter, in the old-fashioned way, with pen and paper. Ora, I asked, what’s going on? Why won’t you surrender?

    Even before I had finished the letter, I had my answer. I grasped that it was not Ora who had to surrender to me, but I who had to surrender to her. In other words, I had to stop resisting the possibility of Ora inside me. I had to pour myself into the mold of she who was waiting deep inside me, into the possibility of a woman within me — more, the possibility of this particular woman within me. I had to be capable of allowing the particles of my soul — and of my body too — to float free, uninhibited and incautious, without narrow-minded, practical, petty self-interest, toward the powerful magnet of Ora and the rich femininity that she radiates. And from that moment on she practically wrote herself.

    There are extra-literary implications to my discovery of another interiority, a human plenitude, within my writing self. A few years ago, I gave a speech on Mount Scopus in Jerusalem. It was late afternoon, the sun was preparing to set. The mountains of Moab behind me, at the edge of the horizon, would soon be painted red, and gradually turn paler until their outlines blurred and darkness finally descended. I spoke about my submission to Ora, and then I turned to the reality of our lives here in Israel — to what we Israelis somewhat grimly call hamatzav, or the Situation. It is a word that in Hebrew alludes to a certain stability, even stasis, but is in fact a euphemism for more than a century of bloodshed, war, terror, occupation, and deadly fear. And most importantly, fatalism and despair.

    Perhaps there is no more appropriate place to talk about the Situation than on Mount Scopus, because I find it difficult to gaze at that beautiful landscape in a way that is disconnected from reality, from the fact that we are looking at what is called, in conflict-speak, “Ma’aleh Adumim and Zone E-1.” That location is precisely the point at which many Israelis, including government officials, wish to begin the annexation of the West Bank. Others, myself included, believe that such an act would put an end to any chance of resolving the conflict and doom us all to a life of ceaseless war.

    On Mount Scopus our reality seems all the more densely present, containing not only the Hebrew University, with all the wisdom, knowledge, humanity, and spirit of freedom that it has amassed for almost a century, but also the three thousand Bedouins in the adjacent desert — men, women and children, members of a tribe that has lived there for generations, who are denied their rights and citizenship, and subjected to constant abuses, the purpose of which is to remove them from this place. They, too, are part of the Situation. They, too, are our situation: our writing on the wall.

    Fifty years ago, after the end of the Six-Day War, in the amphitheater on Mount Scopus, Lieutenant-General Yitzhak Rabin, the Chief of Staff who oversaw Israel’s victory, accepted an honorary degree, and his speech on that day reverberated throughout the country. Rabin’s address was an attempt — a successful attempt — to construct the collective consciousness and the collective memory of his contemporaries. I was thirteen at the time, and I still remember the chills it sent down my spine. Rabin articulated for us Israelis the sense that we had experienced a miracle, a salvation. He gave the war and its results the status of a morality tale that almost exceeded the limits of reality and reason.

    When we said “The finest to the Air Force,” Rabin said in his speech, referring to a famous recruitment slogan, “we did not mean only technical aspects or manual skills. We meant that in order for our pilots to be capable of defeating all the enemies’ forces, from four states, in a matter of hours, they must adhere to the values of moral virtue, of human virtue.” He continued: “the platoons that broke enemy lines and reached their targets… were borne by moral values and spiritual reserves — not by weapons and combat techniques.”

    It was a breathtaking speech. (It was written by Chief Education Officer Mordechai Bar-On.) It was impassioned but not over the top, although those were euphoric days. God is not mentioned even once. Nor is religious faith. Even the experience of finally touching the stones of the Western Wall is described not in a religious context, but rather in an historical one: “the soldiers touched right at the heart of Jewish history.” Just imagine the florid prominence that would be given to religion, to holiness, to God, in such a speech today.

    Rabin also declared that “the joy of victory seized the entire nation. But despite this, we have encountered… a peculiar phenomenon among the soldiers. They are unable to rejoice wholeheartedly. Their celebrations are marred by more than a measure of sadness and astonishment… Perhaps the Jewish people has not been brought up to feel, and is not accustomed to feeling, the joy of the occupier and the victor.” But as Rabin uttered those words, the embryonic occupation had already begun to grow. It already contained the primary cells of every occupation — chauvinism and racism, and in our case also a messianic zeal. And there also began to sprout among us, without a doubt, “the joy of the occupier” which Rabin believed we were incapable of feeling, and which ultimately led, through a long and torturous path, to his assassination twenty-eight years later.

    It appears that no nation is immune to the intoxication of power. Nations stronger and more steadfast than ours have not been able to withstand its seductions, much less the small state of a nation such as ours, which for most of its history was weak and persecuted, and lacked the weapons, the army, the physical force with which to defend itself. A nation that in those early days of June 1967 believed it was facing a real threat of annihilation, and six days later had become almost a small empire.

    Many years have passed since that victory. Israel has evolved unrecognizably. The country’s accomplishments in almost every field are enormous and should not be taken for granted. And neither should the larger saga: the Jewish people’s return to its homeland from seventy diasporas, and the great things it has created in the land, are among humanity’s most incredible and heroic stories. Without denying the tragedy that this historical process has inflicted upon the Palestinians, the natives of this land, the Jewish people’s transition from a people of refugees and displaced persons, survivors of a vast catastrophe, into a flourishing, vibrant, powerful state — it is almost incomprehensible.

    In order to preserve all the precious and good things that we have created here, we must constantly remind ourselves of what threatens our future. I am not referring only to the external dangers that we face. I have in mind, first and foremost, the distortion that damages the core of Israel’s being — the undeniable fact that it is a democracy that is no longer a democracy in the fullest sense of the word. It is a democracy with anti-democratic illusions, and very soon it may become an illusion of democracy.

    Israel is a democracy because it has freedom of speech, a free press, the right to vote and to be elected to parliament, the rule of law and the Supreme Court. But can a country that has occupied another people for fifty years, denying its freedom, truly claim to be a democracy? Can there be such an oxymoronic thing as an occupier democracy?

    A hundred years of conflict. Fifty years of occupation. Beyond the details of the political debate, we must ask: what do those fifty years do to a person’s soul, and to the soul of a nation? To both the victim and the victimizer? I return here to the process of artistic creation that I described earlier — the axiomatic sense of a person’s infinity, whoever that person may be. In the context of our present historical circumstances, I summon back the writer’s understanding that beneath every human story there is another human story. I insist again upon the archeological nature of human life, which is composed of layers upon layers of stories, each of which is true in its own way. The imagination of all these layers and truths, upon which the writer relies for the richness of his creation, has another name: empathy.

    But a life lived in constant war, when there is no genuine intent to end the war — a life of fear and suspicion and violence — does not recognize or encourage or tolerate this abundance of human realities. It is by definition a morally unimaginative life, a life of restriction. It narrows the soul and contracts the mind. It is a life of crude stereotypical perceptions, which in denying another people’s humanity promotes a more general denial of all otherness and difference. This is the sort of climate that finally gives rise to fanaticism, to authoritarianism, to fascist tendencies. This is the climate that transforms us from human beings into a mob, into a hermetic people. These are the conditions under which a civil, democratic, and pluralistic society, one that draws its strength from the rule of law and an insistence on equality and human rights, begins to wither and fray.

    Can we say with confidence that Israeli society today is sufficiently aware of the magnitude of these dangers? Is it fully capable of confronting them and contending with them? Are we sure that those who lead us even want to contend with them?

    I began with the literary and I end with the real — with the reality of our lives. In my view they are inseparable. We do not know, of course, who will stand here fifty years from now. We cannot predict the problems that will consume them and the hopes that will animate them. To what extent, for example, will technology have changed people’s souls, and even their bodies? Which dimensions and dialects will have been added to the Hebrew language that they will speak, and which will have disappeared? Will they utter in their daily speech the word shalom? Will they do so happily, or with the pain of disappointment and squandered opportunities? Will shalom be spoken naturally, with the ease of the commonplace — routinely, as if peace had become a way of life?

    I do not know what sort of country the Israel of the future will be. I can only hope with all my heart that the man or woman who will stand in my place will be able to say, with their head held high and with genuine resolve: I am a free person, in my country, in my home, in my soul.

    Transgression, An Elegy

    Sade does not give us the work of a free man.
    He makes us participate in his efforts of liberation.
    But it is precisely for this reason that he holds our attention.

    SIMONE DE BEAUVOIR, “MUST WE BURN SADE?”

    Vito Acconci, later to be known as the art world’s “godfather of transgression,” is crouched under a low wooden ramp constructed over the floor of the otherwise empty Sonnabend Gallery in New York. Apparently he’s masturbating to sexual fantasies about the visitors walking above him, the soundtrack of which is projected through loudspeakers installed in the corners of the gallery. “You’re on my left… you’re moving away but I’m pushing my body against you, into the corner… you’re bending your head down, over me… you’re pushing your cunt down on my mouth… you’re pressing your tits down on my cock… you’re ramming your cock down into my ass…” Now and then gallery-goers can hear him come. The piece is titled Seedbed.

    It was 1971, Nixon was in the White House, and artists were shooting, abrading, exposing, and abjecting themselves, deploying their bodies to violate whatever proprieties had survived the 1960s, and shatter the boundaries between art and life. This would, in turn, rattle and eventually remake sclerotic social structures and dismantle ruling class hegemony, or so I learned later that decade from my Modern Art History instructor, a charismatic Marxist-Freudian bodybuilder who fulminated about Eros and Thanatos and seems never to have published a word, but greatly influenced my thinking on these matters.

    Transgression had been so long implanted into the curriculum that it had become a tradition — a required introductory course at the art school I attended as an undergraduate. Transgression was the source of all cultural vitality, or so it seemed. We learned that aesthetic assault was the founding gesture of the avant-garde, which had been insulting the bourgeoisie for over a century, dating back in the visual arts to 1863 and the Salon des Refusés in Paris. The classic on exhibit was Manet’s Le Déjeuner sur l’herbe, previously rejected by the jury of the annual state-sponsored Salon de Paris. Manet was his day’s godfather of transgression, though the real scandal of the painting wasn’t that a nude woman was casually picnicking with two clothed men and gazing directly at the viewer. No, according to my instructor, it was that Manet let his brushstrokes show, an aesthetic offense so great that visitors had to be physically restrained from destroying the painting. It seemed like an enviable time to have been an artist.

    In this lineage, we took our places. I felt it was my natural home, a mental organizing principle. It augured freedom, self-sovereignty — I was angry at the world’s timid rule-followers and counted myself among the anti-prissy, though my personal disgust threshold has always been pretty low. Acconci I found both disgusting and intriguing. The heroic transgressor mythology, I eventually came to see, definitely had its little vanities, its preferred occlusions. Even the origin story was dodgy; in fact the Salon des Refusés was itself officially sponsored, something I don’t recall my instructor mentioning. Hearing of complaints by the painters who were rejected by the Salon de Paris, Emperor Napoleon III had given his blessing to a counter-exhibition, cannily containing the backlash by accommodating the transgressors. Possibly there’s always a certain complicity between the transgressive and the covertly permitted — shrewd transgressors, like court jesters, knew which lines not to cross.

    A few years before Seedbed, Acconci had performed his equally notorious Following Piece, which involved randomly selecting and then stalking a different unwitting person through the streets of New York City until they entered a locale — an office, a car — where they could not be trailed. He did this every day for a month. The duration of the artwork was effectively controlled by the individual being pursued, though their participation was not, which gave the piece its edge of creepiness. The documentation now resides in The Museum of Modern Art’s permanent collection — count Acconci among the shrewd transgressors.

    Of course, terms like “consent” were heard infrequently in arty-leftish circles in those days and the idea that it could be unambiguously established had yet to be invented. Eros itself seemed less containable, which was among the things people mostly liked about it in the years after the sexual revolution and before HIV. Even sexual creepiness seemed less malign: sex was polymorphous and leaky, aggression was inseparable from sex and its attendant idiocies, this was largely understood as the human condition, also a big wellspring of artistic inspiration. Anyway, Seedbed’s audience would have presumably been wise to the content of the piece before entering Sonnabend and being enlisted for roles in Acconci’s onanistic scenarios, though from today’s vantage “implied” consent is no sort of consent at all. About Seedbed, Acconci was prone to explanations such as “my goal of producing seed led to my interaction with visitors and their interaction, like it or not, with me.” The extended middle finger of that “like it or not” (and the unapologetic prickishness of “producing seed”) now seems — to borrow my students’ current terminology — a little “rapey.” But from the new vantage, the entire history of the avant-garde can seem a little rapey.

    What was the turning point? When did transgression go south? Even by 2013 damage control was required. When Following Piece was displayed at a MOMA exhibition that year, a nervously disingenuous caption was posted to mitigate potential umbrage: “Though this stalking was aggressive, by allowing a stranger to determine his route the artist gave up a certain degree of agency.” As if getting to determine the route neutralized the piece’s aggression, like carbon offsets for polluters are meant to do for the environment? The artist gave up nothing that I can see, but that was the basic job description for artists from the Romantic era on: give up nothing.

    The wrestling match between the caption and the photos now seems emblematic. If “like it or not” was the master trope of the Manet-to-Acconci years, today’s would have to be encroachment. Transgression has been replaced by trauma as the cultural concept of the hour: making rules rather than breaking them has become the signature aesthetic move, that’s just how it is, there’s no going back. New historical actors have taken up places on the social stage and made their bids for cultural hegemony, having sent the old ones to re-education camp. These days it’s the transgressed-upon who are the protagonists of the moment: the offended, people who are very upset by things, their interventions a drumbeat on social media, their tremulous voices ascendant. (Online cultural commissar is now a promising career path.) And the mainstream cultural institutions are, on the whole, deferring, offering solace and apologias, posting warning signs and caveats to what might cause aesthetic injury. Aesthetic injuries flourish nonetheless.

    Sure, there have always been offended people, but those people used to be conservatives. Who cared if they were offended, that was the point. What has changed is the social composition of the offended groups. At some point offendability moved its offices to the hip side of town. The offended people say they’re progressives! Which requires some rethinking for those of us shaped by the politics of the previous ethos.

    After a century and a half of cultural immunity, transgression has started smelling a little rancid, like a bloated roué in last decade’s tight leather pants. But okay, change happens, the world is in flux, life is a river, nothing stays the same. Let’s try not to get defensive about it. Okay yes, I’m talking to myself, it’s me who feels defensive. But what’s the point of clinging to superseded radicalisms in a different world and time? Please be patient as I attempt to wrestle myself out of a long-term romance with a dethroned idea. I’m doing my best. I’m a bit conflicted.

    It was never precisely said that I recall, but it seems evident in retrospect that there was a particular idea of the self that was embedded in the aesthetics of transgression: a self too buffered against the blows of the world, too stolid. It was an artistic duty to shatter this securely integrated self. The role of the authoritarian personality in the rise of European fascism, as analyzed by Wilhelm Reich and his Frankfurt School counterparts, was still in the air at the time of my inculcation into the cult of transgression, its tentacles still wrapped around the counterculture and the antiwar movement. Character rigidity was the signature feature of the political right, we learned, who were despicable moral cops with sticks up their asses. In the version of twentieth-century art history that I was taught, art audiences and upright citizens generally were all deeply in need of psychical jolts and emetics. These benighted people needed to have their complacencies rattled; as an artist, you were meant to take up that task, defy the censors, search out and assault social norms and conventions, especially the ones embedded deepest within our (or their) sensibilities.

    Art had already abandoned objecthood by then; now the mission was plumbing your depths and darkest instincts, then assaulting the audience with the ickiest stuff. Art was supposed to be perilous and messy. Psychoanalysis had long ago told us that the modern personality structure was a hardened carapace formed around traumatic memories or fantasies that had become bottled up and fetid, and had to be manumitted. Sure this was aggressive, but sublimating aggression into art was what made art feel alive, a collective therapeutics, maybe not unlike love: potentially transcendent. It was a world peopled by depressives and jerks who doubled as therapists, putting culture on the couch and then joining it there; we diagnosed its pathologies and our own, we invented curatives. Sometimes those were painful: success was measured in outrage generated.

    People understandably howled when their carapaces were under assault, but that wasn’t bad. Violation was an ethical project. Censorship was a tool of the death drive and the authoritarians, but luckily there was no such thing as successful repression anyway — lectured my instructor. The festering stuff was always leaking out, which the Surrealists understood, along with other leaky heroes such as Jackson Pollock, who started flinging paint at a canvas on the floor, liberating it once and for all from the falsehoods of representation and the prison of the picture plane. It was the wild men and (occasional) women who changed the world — by breaking rules, not following them! As with Pollock, who upended painting entirely, but it was his psyche that had to get released first, thanks to Jungian analysis. We pored over Jung looking for backdoors to the collective unconscious, we memorized Reich, another wild man always making another comeback for whom character was itself a kind of defense.

    The point is that there was an ethics to transgression. As for us aspiring artists, our own defenses needed to be punctured too, our own inflexibilities shattered. Boundaries made us ill. Humans were armored: not only superegos but also bodies needed to be broken down and realigned. Being permeable was good for you. Another of Acconci’s performances from 1970 was Rubbing Piece. This one involved him rubbing his left forearm with his right hand for an hour until he got a horrible sore, his skin angry and abraded. We all needed to shed our skins, give up our self-protections.

    To be sure, these skins were by default white — race wasn’t yet part of the curriculum, though another of my teachers was Robert Colescott, who was at the time painting massive and funnily bitter canvases substituting African-Americans for whites in reprises of iconic history paintings (George Washington Carver Crossing the Delaware). In quest of whatever permeability was available I underwent Rolfing, a sadistic form of therapeutic massage designed to dislodge and release the emotional injuries stored in your connective tissues; this entailed paying to have someone grind the heel of his hand and occasionally an elbow into the soft parts of your corpus until you cried. It really hurt. But how was anything going to get transformed socially and politically if our rigidities remained intact, bolstered by aesthetic politesse and safety-mongering?

    The possibility of smashing everything, your own boundaries included, made for a wonderful political optimism. Aesthetic vanguards and political vanguards seemed like natural allies — the revolutions to come would be left-wing ones, or so we assumed. What innocent times those now seem, when “right-wing radical” was still an oxymoron. Aesthetic conservatives were political conservatives, that was the assumption. The disrupters were on the left; disruption was a left-wing idiom. It was very heady: signing on to the avant-garde linked you to a revolutionary past and future, from the barricades to Duchamp’s urinals to Mai 68. Everywhere the mandate was to dismantle the art-life distinction, and to embrace whatever followed.

    Yes, I do now see there were some convenient fictions embedded in the romance with transgression. For one thing, as much as we hawked dismantling the art-life boundary, we also covertly relied on it: artistic transgressions were allowed to flourish because the aesthetic frame was itself a sort of protective shield. In 1992, in an aptly titled essay “The Aesthetic Alibi,” Martin Jay, while naming no names, gently mocked the whole genre of performance art, invented, he says, to permit behaviors that would put artists in jail or mental wards if art and life were not distinct realms of experience. In other words the transgressions of Acconci and his ilk coasted on the inviolability of art while getting acclaim for appearing to militate against it.

    As a nineteen-year-old aspiring artist I worshipped Vito Acconci, I wanted to be Acconci, though in pictures he looked hairy and unkempt. I thought Seedbed was artistically brilliant. I looked up his address in the New York phone book and thought about dropping by (he lived on Christie Street, I even now recall), or maybe stalking him through the streets of New York and then documenting it — transgressing the transgressor! — to what I imagined would be art world acclaim. It wouldn’t have occurred to me to try to pull off public masturbation, even concealed under a platform; there were limits to the transgressions I could imagine.

    The gender politics of transgression was not initially much on my horizon. Not that there weren’t some stellar female transgressors on the scene: there was Lynda Benglis, for example, who ran a mocking ad in Artforum of herself nude except for white-framed sunglasses, wielding an extra-long dildo like a phallus. (It was a commentary on the art world.) But you didn’t need to appropriate the phallus to be transgressive, you could daintily repudiate it in the manner of the feminist artist Judy Chicago and others, who were reclaiming maligned “feminine” crafts such as china-painting and needlepoint to contest the macho grandiosities of minimalism.

    In some ways of telling this story, feminism and transgression were always on a collision course. For one thing, and needless to say, women’s bodies were pretty often transgression’s raw material, in art and in life, on canvas and in the bars. I recall reading the painter Audrey Flack on her first meeting with Jackson Pollock at the Cedar Tavern decades before — he pulled her toward him as if to kiss her, then burped in her face. Flack, twenty at the time, wasn’t particularly offended, she just saw him as desperate. De Kooning chopped women up on canvas, charged early feminist art historians. The artist Ana Mendieta either fell off her 33rd floor balcony or was pushed by minimalist superstar Carl Andre, who was tried for it and found not guilty.

    By the time #MeToo hit, transgression’s sheen was already feeling pretty tarnished. #MeToo was about a lot of things and among them was a cultural referendum on the myth of male genius, which as thousands of first-person accounts have elaborated over the decades, is pretty frequently accompanied by sexual grabbiness and bad breath. Sexual transgressiveness has always been the perquisite of gross men in power, but there is also an added perk, which is that treating the boundaries of less powerful people as minor annoyances makes insecure men feel like creative geniuses, like artists and rock stars. Post #MeToo, the emblematic transgressor was starting to look less like Vito Acconci at Sonnabend and more like Dominique Strauss-Kahn at the Sofitel.

    Apropos my young reverence for Acconci and his idioms, I didn’t at the time ponder my own real-life experiences with real-life masturbators and stalkers. A committed truant and somewhat feral adolescent loner, I could often be found weekday afternoons in one or another of Chicago’s seedy downtown movie palaces, where I would park myself in a mostly deserted theater to enjoy a double feature, or the DIY version, sitting through the same movie twice. The raincoat brigade had their plans, meaning solo men not infrequently scurrying into seats within my eyeline once the movie had started and commencing frantic activity in their laps. It took me a while to figure out what was going on — such things weren’t covered in my junior high sex-ed classes. I would gather my belongings and move seats or sometimes flee to the ladies room.

    Once, feeling aggrieved at having to move seats yet again, I deliberately dumped a large icy soda into the lap of a man I had taken for one of the miscreants. He yelped in outrage, which was thrilling and terrifying, though I wondered for long after whether I had possibly made a mistake. Maybe those teenage experiences of male performance art were buried somewhere in my psyche when I put together my undergraduate thesis show, a semiotic analysis of an obscene phone call I had received, accompanied by deliberately ugly staged photographs of what the caller said he wanted to do. Structuralism and semiotics were then conquering the art world and I liked the intellectual distance they provided, the tools to be cool about a hot subject. I liked the idea of transgressing the transgressor. On to grad school, triumphantly.

    In the following years much of my work, even after decamping the art world, was ambivalently fascinated with transgression, sometimes the aesthetic version, sometimes the true-life exemplars. Critical theories that read real life as a “text” helped to blur the distinction, but so did everything else in the culture. I wrote about Hustler magazine, I wrote books devoted to adulterers, scandalizers, male miscreants, and the professor-student romance crackdown. Though I think of myself as a generally decorous person — only ever arrested once (teenager, charges expunged) — something drew me to indiscretion and imprudence. Envy, sublimated rage, desire, male impersonation? Let me get back to you on it.

    The cultural genres that have flourished in the last few decades have likewise been the ones most dedicated to muddying the art-life distinction: the memoir explosion, autofiction, the psychobiographical/pathographical doggedness in criticism, confessional standup and the heirs of Spalding Gray, along with the relentless first-person imperatives of social media, where everyone’s now a “culture worker,” everyone “curates” everyday life into pleasing tableaux for public display. Which means what for the fate of transgression, whose métier, as Martin Jay intimated, covertly relied on keeping the distinction intact?

    The concurrent notable trend has been the outperformance of the offense and umbrage sector, now overtaking pretty much everything in the cultural economy. To be sure, umbrage can be a creative force in its own right, as when in 2014 at Wellesley, a women’s college, students protested a painted bronze statue of a sleepwalking man in his underpants located outside the art museum, because it was regarded as potentially harmful to viewers. The man was balding, eyes closed, arms outstretched — not an especially imposing or threatening figure, in fact he appears quite vulnerable. A petition to move the statue inside the museum got over a thousand signatures.

    Creative umbrage flourished more flamboyantly in 2013, when the Metropolitan Museum staged an exhibit of the painter Balthus’ work and included Thérèse Dreaming, with its notorious flash of the pubescent Thérèse’s white panties smack in the center of the canvas. As to be expected, the Met attempted to accommodate offended sensibilities by posting a safety warning at the entrance to the exhibit advising that “some of the paintings in this exhibition may be disturbing to some visitors.” Though the image of Thérèse is quite stylized, a petition called for the painting’s removal because of “the current news headlines highlighting a macro issue about the safety and wellbeing of women of all ages.” You’d have thought there was a living, breathing pubescent girl splay-legged in the museum (over eleven thousand signatures to date have concurred).

    Speaking of artistic choices, I noted that the anti-Balthus petition was written in the first person, an aesthetic decision that every creative writer faces — whether or not to deploy that all-powerful “I.” “When I went to the Metropolitan Museum of Art this past weekend, I was shocked to see a painting that depicts a young girl in a sexually suggestive pose,” it read, in bold type and melodramatic prose as aesthetically stylized as Balthus’ rendering of Thérèse, the degree of effrontery so precisely calibrated. If the painting was not going to be removed, the petition-writer offered another option: the museum should provide signage indicating that “some viewers find this piece offensive or disturbing, given Balthus’ artistic infatuation with young girls.”

    The demand was that the painting be repackaged as a cautionary tale. And since we live in culturally democratizing times, Thérèse Dreaming now comes swathed in lengthy explanations. From the Met’s website: “Many early twentieth-century avant-garde artists, from Paul Gauguin to Edvard Munch to Pablo Picasso, also viewed adolescent sexuality as a potent site of psychological vulnerability as well as lack of inhibition, and they projected these subjective interpretations into their work. While it may be unsettling to our eyes today, Thérèse Dreaming draws on this history.” No longer will a viewer’s eye be drawn to that glimpse of white panties and be unsettled, and wonder what to make of it. Goal to the offended, who have seized the license to be outrageous and impose their stories and desires on the polis, much as the transgressor classes once did. But let’s not imagine there is any less cultural aggression or cruelty being unleashed here than before.

    Trying to construct a timeline for this art-life blur, I recalled an earlier similar remonstrance, one that startled me at the time, given the source — but it now reads like a bellwether. This was Martin Amis, in his literary critic guise, grappling with what he named a “problem from hell” upon the publication in 2009 of his literary hero Nabokov’s unfinished novel The Original of Laura. The problem wasn’t precisely that the subject was the desire to sexually despoil very young girls, a preoccupation it shared with the canonical Lolita and four of Nabokov’s other books, six in all. It was that as the aging Nabokov’s talents drastically waned those “unforgivable activities” — the sexual despoiling stuff — were no longer absolved or wrestled with by the usual stylistic firepower, and what remained on the page was dismal squalor. Worse, Laura’s stylistic failures, along with Ada before it — another late-career nymphet-obsessed ponderous mess — taint the other books. Even the great ones start feeling squalid by proximity, don’t they?

    Though Amis insists that he is making an aesthetic case and not a moral one — “in fiction, of course, nobody ever gets hurt” — as you watch him valiantly trying to pry the two apart, the critical performance is palpably anxious. He feints, he deflects, he finally states outright that it comes down to the truism that writers like to write about the things they like to think about, and without sufficient stylistic perfume to offset the foulness of the subject matter, what Nabokov was thinking about just smells bad. But admitting this means, effectively, retracting the license to transgress that Amis (and most of the literary world) once so appreciatively granted Nabokov, leaving the critic (and the rest of us) wallowing in “a horrible brew of piety, literal-mindedness, vulgarity and philistinism.”

    My own question is, what in the cultural ether pushed this anxiety to the forefront? Had the protective blockades once erected around the aesthetic become that much more porous since Nabokov’s heyday? Literary criticism has always had the sociological move up its sleeve, available to whip out and flay transgressors as necessary — Irving Howe indicting Philip Roth as bad for the Jews, and so on. But when such a prominent writer decides, so late in the day, that Nabokov is bad for pre-teens, it does seem like some major sands have shifted. Reading Amis reread Nabokov’s oeuvre through the lens of Laura, you notice the transgression jumping from the art to the artist, like a case of metaphysical fleas. We have left literature behind and been plunged into the sphere of moral contagion. The anxiety isn’t just that our glimpses of the violated bodies of pubescent girls have arrived too stylistically unadorned. I wonder if it is also that whatever’s corrupt and ignoble in there will seep out and taint the reader.

    If I understand him correctly Amis’ problem from hell is something like this: What if there resides at the center of this deeply transgressive oeuvre not the “miraculously fertile instability” he reveres about Nabokovian language but, rather, the rigidity of a repetition compulsion?
    Is this a general condition? I’m not sure, but other such “problems from hell” certainly seem to dot the recent social landscape, especially at the art-life checkpoints. When the comedian-genius Louis C.K. was exposed as a compulsive masturbator and encroacher on women in the wake of #MeToo, it naturally brought back my long-ago teenage movie theater experiences. I was fascinated by his fellow comedian Sarah Silverman’s insouciant response. When asked by Louis if he could do it in front of her, Silverman would sometimes respond — at least so she reported — “Fuck yeah, I want to see that!” As she told it, it was a weird, interesting aesthetic experience, and she was Louis’ equal in weirdness, no one’s victim. Silverman had to quickly apologize to all the women who had not felt similarly — for one thing, it wasn’t clear that everyone upon whom this lovely sight was bestowed had been asked for permission or felt able to refuse. Pathetic C.K. may have been, but he was still a comedy gatekeeper.

    Of course he’d also been telling the world for decades exactly who he was, namely a self-loathing guy who was obsessed with masturbation. He did innumerable comedy routines and episodes of various shows devoted to masturbation. Apparently many of his fans — let’s call them the aesthetic-autonomy diehards — thought this was “art,” just a “bit,” and were deeply disappointed in C.K. He was supposed to have been a feminist ally! He was supposed to be fucked up about women, but self-aware! He did comedy routines about how terrible men were at sex, and how grossly they behaved to women — and then he turned around and was gross!

    The world is becoming a tough place for anyone who still wants to separate the artist from the art — then again, pretty few people any longer do. Creative writing students across the country now refuse to read Whitman, a man of the nineteenth century who, they believe, said some racist things in addition to the great poetry. I guess reading him now feels disgusting, as though a cockroach had crawled in your ear and deposited a bunch of racism that you are helpless to expunge.

    Things were much less confusing when the purists were right-wingers, when the “moral majoritarians” railed against cultural permissiveness while concealing their private transgressions behind facades of public rectitude. I loved the last few decades of the twentieth century, when one after another fundamentalist minister was exposed as a scummy lying adulterer and the world made sense. The right was still at it throughout the 1990s, waging their losing culture wars — it was almost too easy to get them to huff and puff. When none other than the reptilian Rudolph Giuliani, then mayor of New York, threatened to shut down the Brooklyn Museum in retribution for an art exhibit he deemed offensive, the museum produced a yellow stamp announcing that the work in the exhibit “may cause shock, vomiting, confusion, panic, euphoria and anxiety.” Note that as of 1999 it was still possible to be ironic about offending people, because offended people were generally regarded as morons.

    The rise of identity politics, it is widely agreed, introduced a far more granular vocabulary of umbrage. Now it is the social justice left wielding the aesthetic sledgehammers and “weaponizing” offense. (Note, for the record, that the socialist left, young and old, those for whom class remains the primary category and who think identity politics is just corporate liberalism, are not particularly on board with the new umbrage.) There was already a general consensus that pernicious racial and ethnic stereotypes have been among the factors impeding social equality for marginalized groups. The last few decades have introduced a new vocabulary of cultural must-nots: cultural appropriation, microaggression, insensitivity. New prohibitions keep being invented, and political coherence is not required. An obviously antiracist artwork like Dana Schutz’s painting Open Casket, which depicted Emmett Till’s mutilated face and body and was included in the Whitney Biennial in 2017, could be accused by its critics of attempting to transmute “black suffering into profit and fun,” because in the new configuration the feeling of being offended licenses pretty much anything. (Schutz had made it clear that the painting would not be sold.) Protesters blocked the painting from view and petitions demanded that it be destroyed. Offended feelings are like a warrant for the summary arrest of the perps, and prior restraint is expected: the offending thing should never have been said or seen. Culture is no longer where you go to imagine freedom, it’s where you go for scenes of crime and punishment.

    Speaking of political incoherence, the irony of the charges against Schutz was the degree to which they echoed the old miscegenation codes, as if Emmett Till’s murder wasn’t itself spurred by fears and prohibitions about racial mixing. It was the “one-drop rule” in reverse, except now a white woman was being accused of crossing the color line, of positioning herself too intimately to a black male body. The extremity of the accusations made the identity politics of the left seem stylistically indistinguishable from the identity politics of the right, both spawned from the same post-truth bubble — as with Swiftboating, Pizzagating, and “Lock Her Up.” Throw some dirt around and see what sticks.

    Meanwhile more terrible things have been happening. “Transgression” has become the signature style of the alt-right and “alt-light” (those are the slightly less anti-Semitic and white supremacist ones). Now they are the rebellious, anti-establishment ones, gleefully offending everyone. Some even lay the blame for the stylistics of online troll culture — the alt-truth shitposting adopted so successfully by the current president and his basket of deplorables (to borrow Hillary Clinton’s supremely self-annihilating phrase) — at the doorstep of the avant-garde. In Kill All Normies, Angela Nagle traces their antecedents to Sade, the Romantics, Nietzsche, the Surrealists, the Situationists, the counterculture and punk — culminating with far-right culture hero Milo Yiannopoulos, who also extolled the virtues of disrupting the status quo and upsetting the liberals, whom he saw as hegemonic. All was going well for Milo, the self-proclaimed “dangerous faggot,” until he got a smidgen too dangerous by commending pedophilia, or so said his former patrons who quickly smote him into oblivion. Haha, their transgressive spirit is about an inch deep.

    Yet the longstanding association of transgression with the left was always superficial and historically accidental. In Nagle’s version, the alt-right crowd have simply veered toward nihilism in lieu of revolution. She even intimates that it was the virtue-signaling and trigger warnings of the touchy-feely left that gave us Donald Trump and the rest of the destructive right-wing ids; and this has made her persona non grata in certain leftish circles. However you draw your causality arrows, there’s no doubt that the more fun the right started having, the more earnestly humorless the social justice types became, and the more aesthetically conservative. Especially problematic for the younger crowd are jokes: every comedy routine is now examined for transgressions, like a team of school nurses checking kindergarteners for head lice. Comedy is no longer any sort of protected zone, it’s the front lines, with id-pol detectives on house-to-house searches to uncover humor offenses from decades past. Old jokes are not grandfathered in, obviously; old jokes are going to be judged by current standards. Irony has stopped being legible — it puts you on “the wrong side of history,” a phrase you suddenly hear all the time, as though history always goes in the right direction.

    In sum, transgressors are the cultural ancien régime who have reaped the spoils for far too long, and now had better watch their steps. Even France, proud home to Sade and Genet, is dethroning its transgressors and putting them on trial. This includes that most literary of pedophiles, the award-festooned novelist Gabriel Matzneff, currently in hiding in Italy, who used to have a lot of friends in high places despite (because of?) habitually foisting his sexual desires on teenage girls and underage boys, then writing detailed accounts of his predilections. One of his former conquests, fourteen at the time of their affair, recently wrote her own bestselling book, titled Consent. Another, fifteen when they were involved and whose letters Matzneff appropriated and published (even putting her face on the cover of one of his novels — no, he didn’t ask permission or even inform her), has also gone public. She attempted to do so previously, in 2004, but no one then cared or would publish her account.

    But it’s a new era: the transgressed-upon of the world are speaking, and the world is listening. This changes many things, profoundly. It’s been a long time coming. As to whether injury will prove a wellspring of cultural vitality or a wellspring of platitudes and kitsch, that is what’s being negotiated at the moment. At the very least, trauma is more of an equal-opportunity creative force than inspiration or talent, which were handed out far more selectively. Trauma is a bigger tent. The injury and the wound — and importantly, the socially imposed injuries of race, ethnicity, gender, queerness — have long been paths to finding a voice, an intellectual “in.” This is hardly new: wounds have long been sublimated into style or form — so argued Edmund Wilson, and before him Freud. It seems like injuries more frequently enter the cultural sphere minus the aesthetic trappings these days — perhaps there is more patience or attention for unembellished pain. The question we’re left with is how much of the world can be understood from the standpoint of a personal injury: does it constrict or enlarge the cultural possibilities?

    Reading about Matzneff, I’d been wondering what the French plan to do about Sade in the post-#MeToo era and was happy to stumble on an essay by Mitchell Abidor pondering the same question. An American who has translated many French avant-gardists and anarchists into English, Abidor rereads Sade through the lens of Jeffrey Epstein, concluding that it is impossible not to see Sade as Epstein’s blueprint. His point is that Sade did not just fantasize on the page, he acted out what he wrote, kidnapping, sexually abusing, and torturing young girls, also numerous prostitutes, and a beggar named Rose Keller — women who supposedly didn’t count, and don’t count to Sade’s legions of readers. Epstein’s victims were, likewise, financially needy teenagers. Two sexually predatory rich guys separated by a few centuries, both monsters of privilege: Sade had his chateau, Epstein his townhouse and his island. Both were arrested and tried; both got out or escaped prison and did more of the same.

    What is inexplicable for Abidor is how many of his fellow intellectuals fell under Sade’s spell and became his great defenders, despite what a verbose and repetitive writer he is. They see him as an emissary of freedom — or as in Simone de Beauvoir’s reading, at least it’s on the itinerary. Abidor says that Sade’s freedom is the freedom of a guard in a concentration camp who does what he likes to his victims because they cannot escape. It’s not just the liberties of surrealism that Sade heralds, but also the death trap of fascism.

    I arranged a coffee date with Abidor not long ago, wanting to meet this assassin of the avant-garde; he suggested a spot where old Brooklyn socialists congregate. He had become a despised figure on the Francophone left, he told me, glancing around nervously and spotting a few former compatriots. The old guard was furious at him for putting their revered transgressive lineage — Apollinaire, Bataille, Barthes, the heirs of Sade, to which they still cling — in such an ugly light. It is the question of our moment: who gets to play transgressor, and who is cast in the role of the transgressed upon. When transgressions — in art, in life, at the borders — repeat the same predictable power arrangements and themes, what’s so experimental about that?

    Yet putting it that way gives me a yucky tingle of sanctimony, a bit of the excess amour-propre that attends taking the “correct” position. What’s left out of the anti-transgression story are the rewards of feeling affronted — how takedowns, shaming, “cancelling,” the toolkit of the new moral majoritarians, invent new forms of cultural sadism rather than rectifying the old ones. All in a good cause, of course: inclusiveness, equality, cultural respect — so many admirable reasons!

    The truant in me resents how much cultural real estate the anti-transgressors now command, while positioning themselves as the underdogs. Witness the new gatekeepers and moral entrepreneurs, wielding not insignificant amounts of social power while decrying their own powerlessness. And thus a new variety of hypocrite is born, though certainly no more hypocritical than the old hypocrites.

    We used to know what transgression was, but that’s not plausible anymore. Maybe violating boundaries was a more meaningful enterprise when bourgeois norms reigned, when liberal democracy seemed like something that would always endure. The ethos of transgression presumed a stable moral order, the disruption of which would prove beneficial. But why bother trying to disrupt things when disruption is the new norm, and permanence ever more of a receding illusion?