America in the World: Sheltering in Place

    I

    In the third week of America’s quarantine against the pandemic, a new think tank in Washington had a message for the Pentagon. “The national security state, created to keep us safe and guard our freedoms, has failed,” Andrew Bacevich, the president of the Quincy Institute for Responsible Statecraft, told viewers on a Skype video from home, interspersed with the sounds of sirens and images of emergency rooms. While microbes from China were mutating and coming to kill us, he preached, we were wasting our time hunting terrorists and projecting military power abroad. It was a non sequitur in search of a point — as if America ever faces only one danger at a time. When the Black Death struck Europe and Asia in the fourteenth century, it did not mean that Mongol hordes would no longer threaten their cities. Nor does the coronavirus mean that jihadists are not plotting terror or that Russia is not threatening its neighbors or that China is not devouring Hong Kong.

    His casuistry aside, Bacevich was playing to the resentments of Americans who sincerely believe that American foreign policy is driven by an addiction to war. For the first two decades of post-cold war politics, this argument was relegated to the hallucinations of the fringe. But no more. A new national consensus had started to form before the plague of 2020: that there are almost no legitimate uses for American military power abroad, that our wars have been “endless wars,” and that our “endless wars” must promptly be ended. On the subject of American interventionism, there is no polarization in this notoriously polarized country. There is a broad consensus, and it is that we should stay out and far away.

    The concept of “endless wars” has its roots in the middle of the twentieth century. Most famously, in Nineteen Eighty-Four, George Orwell depicted a totalitarian state that invents its own history to justify perpetual war between the superpowers to keep its citizens in a state of nationalist fervor. In American political discourse, the concept of a war without end was baked into the influential notion of “the manufacture of consent,” a notion manufactured by Noam Chomsky out of a phrase of Walter Lippmann’s, according to which the media teaches the American people to support or acquiesce in the nefarious activities of the military-industrial complex. But the “endless wars” that so many Americans wish to end today are not like the ones that Orwell imagined. Today Americans seek to end the war on terror, which in practice means beating back insurgencies and killing terrorist leaders in large swaths of the Islamic world. Orwell’s wars were endless because none of the world’s states possessed the power to win them. The war on terror, by contrast, endures because of a persistent threat to Western security and because weaker states would collapse if American forces left. The war on terror pits the American Gulliver against fanatical bands of Lilliputians. But the asymmetry of military power does not change the magnitude — or the reality — of the carnage that “stateless actors” can wreak.

    To get a feel for the new consensus on American quietism, consider some of the pre-pandemic politics surrounding the war in Afghanistan. In a debate during the presidential primaries, Elizabeth Warren insisted that “the problems in Afghanistan are not problems that can be solved by a military.” Her Democratic rivals on the stage agreed, including Joe Biden. This is also Donald Trump’s position. As Warren was proclaiming the futility of fighting for Afghanistan’s elected government, the Trump administration was negotiating that government’s betrayal with the Taliban. (And the Taliban was ramping up its violence while we were negotiating with it.) Before the coronavirus crisis, the Trump administration was spending a lot of its political capital on trying to convince skeptical Republican hawks that the planned American withdrawal would not turn Afghanistan into a haven for terrorists again, which of course is nonsense.

    The emerging unanimity about an escape from Afghanistan reflects a wider strategic and historical exhaustion. Despite the many profound differences between Trump and Obama, both presidents have tried to pivot away from the Middle East to focus on competition with China. (Obama never quite made the pivot.) Both presidents have also mused publicly about how NATO allies are “free riders” on America’s strength. And both presidents have shown no patience with the use of American military force. In 2012, even as the world was once again becoming a ferociously Hobbesian place, the Obama administration’s national defense strategy dropped the longstanding post-cold war goal of being able to win two wars in different geographical regions at once. (The Obama Pentagon seemed to think that land wars are a thing of the past and that we can henceforth make do with drones and SEALs.) Trump’s first defense strategy in 2018 affirmed the Obama formulation.

    Moreover, a majority of Americans agreed with their political leaders. A Pew Research poll in 2019 found that around sixty percent of all Americans did not believe it was worth fighting in Iraq, Syria, or Afghanistan. That percentage is even higher among military veterans. Indeed, Pew Research polling since 2013 has found that more Americans than not believe that their country should stay out of world affairs. Hal Brands and Charles Edel, in their fine book The Lessons of Tragedy, point out that majorities of Americans still agreed in the late 2010s that America should possess the world’s most powerful military, and supported alliances, and favored free trade, but they conclude that many Americans are now resistant to the “sacrifices and trade-offs necessary to preserve the country’s post-war achievements.”

    All of that was before COVID-19 forced most of the country to “shelter in place.” In truth, sheltering in place has been the goal of our foreign and national security policy for most of a decade. And it will be much harder to justify a continued American presence in the Middle East, West Asia, Africa, and even the Pacific after Congress borrowed trillions of necessary dollars for paycheck protection and emergency small business loans. In addition to all of the older muddled arguments for retreat, there will now be a strong economic case that the republic can no longer afford its overseas commitments, as if foreign policy and national security are ultimately about money. In other words, there are strong indications that the republic is undergoing a profound revision of its role in leading and anchoring the international order that it erected after World War II. The days of value-driven foreign policy, of military intervention on humanitarian grounds, and even of grand strategy, may be over. Should every terror haven, every failed state, every aggression against weak states, and every genocide be America’s responsibility to prevent? Of course not. But should none of them be? America increasingly seems to think so. We are witnessing the birth of American unexceptionalism, otherwise known as “responsible statecraft.”

     

    II

    At the end of the cold war, the spread of liberal democracy seemed inevitable. The Soviet Union had collapsed, and with it the communist governments of the Eastern European countries it dominated. China had momentously made room for a market in its communist system, a strange state-sponsored capitalism that brought hundreds of millions of people out of subsistence poverty. In the West, juntas and strongmen fell, and elected governments replaced them. In every region except for the Middle East and much of Africa, the open society was on the march.

    One of the first attempts to describe the thrilling new moment was a famous, and now infamous, essay by Francis Fukuyama. In 1989, in “The End of History?,” he surveyed a generation that saw the collapse of pro-American strongmen from Spain to Chile along with the convulsions behind the Iron Curtain and concluded that the triumph of liberalism was inevitable. (He has since revised his view, which is just as well.) His ideas provided the intellectual motifs for a new era of American hegemony. “The triumph of the West, and the Western idea, is evident first of all in the total exhaustion of viable systematic alternatives to western liberalism,” Fukuyama wrote. What he meant, in his arch Hegelian way, was that the age of ideological conflict between states was over. History was teleological and it had attained its telos. Fukuyama envisioned a new era in which great power wars would be obsolete. He did not predict the end to all war, but he did predict that big wars over competing ideologies would be replaced by a more mundane and halcyon kind of competition. The principled struggles of history, he taught, “will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”

    Fukuyama’s predictions were exhilarating in 1989 because the consensus among most intellectuals during the Cold War had been that the Soviet Union was here to stay. Early theorists of totalitarianism such as Hannah Arendt and Carl Friedrich had portrayed the Soviet state as an unprecedented and impermeable juggernaut that was terrifyingly strong and durable. In Orwell’s dystopia, the dissident Emmanuel Goldstein resisted Big Brother but was never a real threat to the state. In the Brezhnev era, analysts of the Soviet Union began to notice that the juggernaut was crumbling from within and had lost the ideological allegiance of its citizens, even as its military and diplomatic adventures beyond its borders continued. Building on this increasingly realistic understanding of the failures of the communist state, Fukuyama observed that totalitarian systems were overstretched and brittle. The West could exhale.

    Not everyone agreed. Samuel Huntington argued that conflict between great powers would remain because identity, not ideology, is what drives states to make war. While it was true that communism was weakening after the collapse of the Soviet Union, other illiberal forces such as religious fundamentalism and nationalism remained a threat to the American-led liberal world order. The hope that China or Iran could be persuaded to open their societies by appealing to prosperity and peace ignored that most nations were motivated not by ideals, but by a shared sense of history and culture. Leon Wieseltier similarly objected that the end of the Soviet Union and its empire would release ethnic and religious and tribal savageries, old animosities that were falsely regarded as antiquated. He also observed that the concept of an “end of history” was borrowed from the very sort of totalitarian mentality whose days Fukuyama believed were over. The worst fiends of the twentieth century justified their atrocities through appeals to history’s final phase; the zeal required for their enormous barbarities relied in part on a faith that these crimes were advancing the inevitable march of history. For Wieseltier, there is no final phase and no inevitable march, and the liberal struggle is endless. “To believe in the end of history,” he wrote, “you must believe in the end of human nature, or at least of its gift for evil.”

    As international relations theories go, “The End of History” was like a medical study that found that ice cream reduced the risk of cancer. Fukuyama’s optimistic historicism instructed that the easiest choice for Western leaders was also the wisest. Why devise a strategy to contain or confront Russia if it was on a glide path to democratic reform? Why resist American industrial flight to China if that investment would ultimately tame the communist regime and tempt it to embrace liberalism?

    Every president until Trump believed that it was possible to lure China and Russia into the liberal international order and attempted to do so. Instead of preparing for a great power rivalry, American foreign policy sought to integrate China and Russia into global institutions that would restrain them. Bill Clinton and George W. Bush expanded NATO, but they also invited Russia into the Group of Seven industrialized nations. Clinton, Bush, and Obama — the latter liked to invoke “the rules of the road” — encouraged Chinese-American economic interdependence. Until Obama’s second term, the United States did next to nothing to stop China’s massive theft of intellectual property. Until June 2020, Chinese corporations could trade freely on U.S. stock exchanges without submitting to the basic accounting rules required of American companies. The assumption behind these Panglossian views of China and Russia was that democratic capitalism was irresistible and the end of communism marked the beginning of a new era of good feelings. (Communism never ended in China, of course.) And it was certainly true that trade with China benefited both economies: Chinese and American corporations prospered and American consumers enjoyed cheaper consumer goods.

    This is not to say that there were no bouts of dissent. In his presidential campaign in 1992, Bill Clinton attacked George H. W. Bush for his capitulation to China after the uprising at Tiananmen Square. And even though Clinton did not alter the elder Bush’s approach to China during his presidency, there was a lively debate about China’s human rights abuses in the 1990s. Clinton expanded NATO, something the elder Bush opposed, but he and later George W. Bush and Barack Obama did little to push back against Russia’s own regional adventures and aggressive behavior. Consider that no serious U.S. war plan for Europe was developed between the end of the Cold War and 2014, the same year that Russia invaded Ukraine and eventually annexed Crimea, and six years after Russia invaded and occupied the Georgian provinces of South Ossetia and Abkhazia. We preferred to look away from Russia’s forward movements — with his cravenness about Syria, Obama actually opened the vacuum that Russia was happy to fill — just as we preferred to look away from the growing evidence of China’s strategic ambitions and human-rights outrages. We were reluctant to lose those good feelings so soon after we acquired them.

    None of this meant that American presidents would not use force or wage war after the collapse of the Soviet Union. They did. But they did not engage in great power wars. The first Bush saved Kuwait from Saddam Hussein and saved Panama from the lesser threat of Manuel Noriega. Clinton intervened in the Balkans to stop a genocide and launched limited air strikes in the Middle East and Afghanistan. In the aftermath of September 11, George W. Bush waged a war on terror and toppled the tyrannies that held Iraq and Afghanistan captive. Obama intervened reluctantly and modestly and ineffectively in Libya; he withdrew troops from Iraq only to send some of them back; and he presided over a “surge” in Afghanistan, even though its announcement was accompanied by a timetable for withdrawal. Trump has launched no new wars, but he has killed Iran’s most important general and the architect of its campaign for regional hegemony, and he has launched strikes on Syrian regime targets in response to its use of chemical weapons, though his strikes have not added up to a consistent policy. But even as optimism about world order has become less easy to maintain, even as the world grows more perilous in old and new ways, the American mood of retirement, the inclination to withdrawal, has persisted. Fukuyama, who acknowledged that the threat of terrorism would have to be met with force, has remarked that our task is not “to answer exhaustively the challenges to liberalism promoted by every crackpot messiah around the world.” But what about the genocides perpetrated by a crackpot messiah (or a rational autocrat)? And what about answering great power rivals? At the time, to be sure, we had no great power rivals. We were living in the fool’s paradise of a “unipolar” world.

    *

    Bill Clinton came to the presidency from Little Rock without a clear disposition on the use of military force. He was at times wary of it. He pulled American forces out of Somalia after a militia downed two American helicopters. In his first term he dithered on the Balkan wars and their atrocities, favoring a negotiation with Serbia’s strongman Slobodan Milosevic. He did nothing to stop Rwanda’s Hutu majority from slaughtering nearly a million Tutsis over three months in the spring and summer of 1994. He was more focused than any of his predecessors or successors on brokering a peace between Israelis and Palestinians. Over time, of course, he evolved, but how the world suffers for the learning curve of American presidents! Clinton punished Saddam Hussein’s defiance of U.N. weapons inspectors. He bombed suspected al Qaeda targets in Sudan and Afghanistan after the bombings of American embassies in Africa in 1998. He helped push Serb forces back from Bosnia and prevented Milosevic from cleansing Kosovo of Albanians.

    Clinton was a reluctant sheriff, to borrow Richard Haass’s phrase. In his first term he was unsure about using American force abroad. By the end of his second term, he had come to terms with the responsibilities of American power. “The question we must ask is, what are the consequences to our security of letting conflicts fester and spread?,” Clinton asked in a speech in 1999. “We cannot, indeed, we should not, do everything or be everywhere. But where our values and our interests are at stake, and where we can make a difference, we must be prepared to do so.” He was talking about transnational threats and rogue states. In practice, this meant using military power to prevent the proliferation of weapons of mass destruction and to deter terrorists. In his second term, Clinton also took a keen interest in biological weapons and pandemics. As Madeleine Albright, Clinton’s second secretary of state, memorably put it, America was the world’s “indispensable nation.”

    Yet Clinton’s activism did not extend to Russia or China. He helped to expand the NATO alliance, but also secured debt forgiveness for the Russian Federation and used his personal relationship with Russian president Boris Yeltsin to reassure him that NATO’s expansion was no threat to Moscow. Clinton also reversed his campaign promise on China and granted it most favored nation status as a trading partner, paving the way for the economic interdependence that Trump may be in the process of unraveling today. At the time, Clinton explained that “this decision offers us the best opportunity to lay the basis for long-term sustainable progress on human rights and for the advancement of our other interests with China.” This reflected the optimism of 1989-1991. What other model did China have to emulate but our own? Allow it to prosper, the thinking went, and over time it would reform.

    When Clinton left office, the consensus among his party’s elites was that his foreign policy mistakes were errors of inaction and restraint. Clinton did nothing to prevent the genocide in Rwanda. He waited too long to intervene in the Balkans. It seemed that Americans had gotten over their inordinate fear of interventions. Why had it taken Clinton so long? There was an activist mood in Washington before the attacks of September 11. And after hijacked commercial planes were turned into precision missiles and the towers fell, the sense that America needed to do more with its power intensified.

    *

    In the Bush years, American foreign policy fell first into the hands of neoconservatives. For their critics, they were a cabal of swaggering interlopers who twisted intelligence products and deceived a dim president into launching a disastrous war. In fact they were a group of liberals who migrated to the right and brought with them an intellectual framework and appreciation for social science that was absent from the modern conservative movement. In foreign policy they dreaded signs of American weakness or retreat, and in 1972 supported Scoop Jackson against George McGovern in the Democratic primaries. As that decade progressed, the wary and disenchanted liberals migrated to the former Democrat Ronald Reagan. In Reagan, they found a president who despised Soviet communism as much as they did.

    In the 1990s, a new generation of neocons wanted to seize the opportunity of American primacy in the world after the Soviet Union’s collapse. As Irving Kristol observed, “With power come responsibilities, whether sought or not, whether welcome or not. And it is a fact that if you have the kind of power we now have, either you will find opportunities to use it, or the world will discover them for you.” In that spirit, the neoconservatives of the 1990s advocated an activist foreign policy. They argued that the United States should help to destabilize tyrannies and support democratic opposition movements. They were not content with letting history take its course; they wanted to push it along in the direction of freedom. Their enthusiasm for an American policy of democratization was based on both moral arguments and strategic arguments.

    The focus in this period was Iraq. Neoconservatives had rallied around legislation known as the Iraq Liberation Act that would commit the American government to train and to equip a coalition of Iraqi opposition groups represented in the United States by Ahmad Chalabi, a wealthy Iraqi political figure who was trained as a mathematician in the United States. For the first half of the 1990s, the CIA funded Chalabi’s Iraqi National Congress, but he had a falling out with the agency. The Iraq Liberation Act was a way to save the opposition group by replacing a once covert intelligence program with one debated openly in Congress. It should be noted that Chalabi’s initial plan was not to convince America to invade Iraq, but to secure American training and equipment to build a rebel army composed of Iraqis to topple Saddam Hussein. Clinton allowed the legislation to pass in 1998, but his government never fully implemented it.

    George W. Bush, ironically, ran his campaign in 2000 on the promise of a humble foreign policy. Condoleezza Rice memorably declared at the Republican convention that America cannot be the world’s 911. Not long afterward, 9/11 was the event that forced Bush to renege on his promise. Three days after that attack, Congress voted to authorize what we know today as the war on terror: the “endless wars” had begun. Over the last nineteen years, that authorization has justified a global war against a wide range of targets. Bush used it as the legal basis for strikes on terrorists in South Asia. Obama used it to justify his military campaign against the Islamic State, even though by then it was a battlefield enemy of al Qaeda’s Syrian branch. And while every few years some members of Congress have proposed changes to the authorization, these efforts have yet to succeed. Today many progressives believe the war on terror deformed America into an evil empire, patrolling the skies of the Muslim world with deadly drones, blowing up wedding parties in Afghanistan, torturing suspected terrorists, and aligning with brutal thugs. Even Obama has not escaped this judgment. Some of these are fair criticisms. The war on terror was indeed a war. Innocent people died. At the same time, the other side of the ledger must be counted. Since 9/11, there have been no mass-casualty attacks by foreign terrorists inside our borders. On its own terms, from the rather significant standpoint of American security, this “endless war” has produced results.

    In the first years of the war on terror, the pacifist left had little influence over the national debate. A better barometer of the country’s mood was a column, published a month before the Iraq War, by Charles Krauthammer. He denounced what he called Clinton’s “vacation from history,” and asked whether “the civilized part of humanity [will] disarm the barbarians who would use the ultimate knowledge for the ultimate destruction.” Those words, and many others like them, helped to frame the rationale for the American invasion of Iraq. Note that Krauthammer did not write that Clinton’s vacation from history was his failure to prepare for China’s rise and Russia’s decline. It was his failure to prevent the arming of smaller rogue states and terrorist groups. Krauthammer was still living in Fukuyama’s world. And so was Bush. In his first term, Bush not only failed to challenge Russia or China, he sought to make them partners in his new global war. Bush famously remarked that he had looked into the eyes of Vladimir Putin and found a man he could trust. (“I was able to get a sense of his soul.”) Bush’s administration would also designate a Uighur separatist organization as a terrorist group, giving cover to the persecution of that minority. The world learned in 2018 that China had erected a new Gulag in western China that now imprisons at least a million Uighurs.

    China and Russia did not support Bush’s Iraq war. Many Democrats did. In 2002, a slim majority of Democrats in the House opposed a resolution to authorize it, but in the Senate, 29 out of 50 Democrats voted for it. Most significant, every Democrat with presidential aspirations — from Hillary Clinton to Joe Biden — voted for the war, a vote for which they would later apologize. At the time of that vote, the ambitious Democrats who supported it did not know that opposition to that war would define their party for years to come. Neither did the establishment Democrats who opposed it. Al Gore, speaking at the Commonwealth Club of San Francisco, explained his opposition to the war: “If we go in there and dismantle them — and they deserve to be dismantled — but then we wash our hands of it and walk away and leave it in a situation of chaos, and say, ‘That’s for y’all to decide how to put things back together now,’ that hurts us.” Gore was not concerned that America might break Iraq; he was acknowledging that it was already broken. Nor was he worried about an “exit strategy.” He worried that if America went to war in Iraq under a Republican president, the war might not be endless enough. America might leave too soon.

    The Iraq war was also opposed by a group of international relations theorists who advocated for what is known as foreign policy realism. Unlike Fukuyama, the realists do not think it matters how a state chooses to organize itself. All states, according to the realists, pursue their own survival, or their national interest. Thirty-three prominent realists purchased an advertisement in the New York Times in 2002 urging Bush not to invade Iraq. They argued that the coming war would distract America from the campaign against al Qaeda and leave it in charge of a failed state with no good options to leave. It is worth noting that neither the pacifist left nor the foreign policy realists argued before the war that Saddam Hussein had no weapons of mass destruction, the liquidation of which was Bush’s justification for the war. Both camps warned instead that an American invasion of Iraq could prompt the tyrant to use the chemical and biological weapons that everyone agreed he was concealing. As the professors wrote in their open letter, “The United States would win a war against Iraq, but Iraq has military options — chemical and biological weapons, urban combat — that might impose significant costs on the invading forces and neighboring states.” The argument was that removing Saddam Hussein would further destabilize the Middle East.

    Over the course of 2003, it became clear that the casus belli for Operation Iraqi Freedom — Saddam’s refusal to come clean on his regime’s weapons of mass destruction — was wrong. The teams of American weapons inspectors sent into the country could not find the stockpiles of chemical weapons or the mobile bio-weapons labs. The Bush administration sought to portray this error as an intelligence failure, which was largely correct. And so the war’s unanticipated consequences, some of them the result of American error, eclipsed the fact that Iraqis had drafted a constitution and were voting for their leaders. In America, a great popular anger began to form, not only against the Iraq war but more generally against American interventionism. The Democrats became increasingly eager to take political advantage of it. Talk of American hubris proliferated. Progressives were growing wary of the institutions of national security, particularly the intelligence agencies.

    Republicans under Bush were also divided between the president’s own idealistic ambition to make Iraq a democracy and the unsentimental realism of his vice president, who darkly warned after 9/11 that the war against terror would have to be fought in the shadows. Bush’s own policies were inconsistent. Sometimes he pressured dictator allies to make democratic reforms, but he also empowered those same dictators to wage war against jihadists with no mercy. In the Palestinian territories, Bush supported legislative elections that resulted in empowering Hamas in Gaza. (That was in 2006, the last time Palestinians voted for their leaders.) By the end of Bush’s second term, however, great power competition had re-emerged. While America was preoccupied with the Muslim world, Russia invaded the former Soviet republic of Georgia. Bush did what he could. He sent humanitarian supplies to Tbilisi packed on U.S. military aircraft. He tried to rally allies to support a partial ban on weapons sales to Moscow. But Russia had the good fortune of timing its aggression just as the world’s financial markets collapsed. It was also lucky that the next American president would be Barack Obama.

    *

    Barack Obama had been a state senator in Illinois during the run-up to the Iraq War, when his future primary rival, Hillary Clinton, was a U.S. senator. She voted for the war. He gave a speech opposing it. By the time of the 2008 election, in a political party incensed by the Iraq war, Obama’s speech in Chicago in 2002 functioned as a shield: he may have lacked Clinton’s experience, but at least he did not support Bush’s war. Back in 2002, though, Obama’s speech was barely noticed. The Chicago Tribune news story led with Jesse Jackson’s speech and made no mention of the ambitious state senator. At the lectern, Obama struck two distinct themes. First, he wanted the protestors to know that he, too, understood the evil of neoconservatism. “What I am opposed to is the cynical attempt by Richard Perle and Paul Wolfowitz and other armchair, weekend warriors in this administration to shove their own ideological agendas down our throats,” he said. At the same time, Obama rejected the apologies for tyrants common on the hard left. Of Saddam, he said, “He is a brutal man. A ruthless man. A man who butchers his own people to secure his own power.” But the young Obama did not think that Saddam threatened American interests. Echoing Fukuyama’s optimism, he declared that “in concert with the international community he can be contained until, in the way of all petty dictators, he falls away into the dustbin of history.”

    Obama’s patience with history, with its dustbins and its arcs, turned out to be, well, endless. His Chicago speech should have been a warning for the left wing of the Democratic Party that over time it would be disappointed by his presidency. As Obama said, he was not against war. (The tough-minded Niebuhrian speech that he delivered in Oslo when he accepted his ridiculous Nobel Prize underscored his awareness of evil in the world.) He was merely against dumb wars — or as he later put it, “stupid shit.” He had come into office when the world was growing more dangerous, and he chose to respond to these dangers with careful and scholarly vacillations. He wanted the American people to know that he was thoughtful. The most salient characteristics of his foreign policy were timidity and incoherence, and a preference for language over action.

    Thus, Obama withdrew American forces from Iraq in 2011, only to send special operators back to Iraq in 2014, after the Islamic State captured the country’s second-largest city. He “surged” forces in Afghanistan in his first term, but fired the general he chose to lead them, and spent most of his administration trying, and failing, to withdraw them. He spoke eloquently about the disgrace of Guantanamo, but never closed it. He declassified a series of Justice Department memos that made specious legal arguments to allow the CIA to torture detainees, but his Justice Department never prosecuted the officials responsible, as many in his base wanted. He sided with peaceful protestors in Egypt in 2011 at the dawn of the Arab Spring and urged Hosni Mubarak to step down, but when the military toppled Egypt’s elected Islamist president in a coup thirteen months into his term, Obama declined to impose sanctions. He did manage to reach a narrow deal with Iran to diminish, but not demolish, its nuclear weapons program. By this time Iran was on a rampage in the Middle East, and the windfall that its economy received from the nuclear bargain would be reinvested in its own proxy wars in Syria, Iraq, and Yemen. The deal alienated America’s traditional allies in the Middle East and brought Israel closer to its Arab rivals.

    The most spectacular failure of Obama’s foreign policy, of course, was Syria. After the Arab Spring, Syrians demanded the same democratic freedoms that they saw blooming in Tunisia and briefly in Egypt. Obama supported them, at first. But the tyrant was watching: Bashar al-Assad had learned from what he considered the mistakes of Mubarak and Ben Ali. Assad was also fortunate that his patrons were Russia and Iran, who also lived in fear of popular uprisings. So began the Syrian civil war that to this day rages on. That war has flooded Europe and Turkey with refugees, with dire political consequences, and threatened for a few years in the middle of the 2010s to erase the borders established after World War I for the Middle East.

    It is not the case that Obama did absolutely nothing to support the Syrian opposition. In 2012, he approved a covert program known as Timber Sycamore, in which the CIA endeavored to build up an army of “moderate rebels” against Assad. The plan was always flawed. Obama did not want American forces to fight inside Syria and risk an open clash with Iranian and Russian forces who were on the side of the Assad regime. (Obama was reluctant to offend the Russians and he was actively seeking détente with the Iranians.) America clung to its passivity as Syria’s civil war and Iraq’s embrace of Shiite majoritarian rule created the conditions for the emergence of the Islamic State. A few years later, Obama authorized a Pentagon program to arm and support a largely Kurdish army fighting the Islamic State. With the help of American air power, the Kurds and U.S. special forces eventually smashed the “caliphate” during Trump’s first term in office.

    Artlessly and in accord with his principles, Obama painted himself into a corner. He called on Assad to leave, but he never used American power to assist with that mission. Obama also warned of consequences if Assad used chemical weapons, which he called a “red line.” In 2013, when Assad crossed this line, Obama threatened air strikes against Assad’s regime. The moment of truth — about Syria, about American interventionism — had arrived. Obama punted. He gave a bizarre speech in which he asserted that he had the constitutional prerogative to strike Syria without a resolution from Congress but was asking Congress to authorize the attack anyway. In his swooning memoir of the Obama White House, Ben Rhodes recalls that the president told him, “The thing is, if we lose this vote it will drive a stake through the heart of neoconservatism — everyone will see they have no votes.” Never mind the heart of Bashar al Assad! Rhodes continues: “I realized then that he was comfortable with either outcome. If we won authorization, he’d be in a strong position to act in Syria. If we didn’t, then we would potentially end the cycle of American wars of regime change in the Middle East.”

    The episode broaches the early roots of the bipartisan consensus against “endless war.” When the resolution came up for a vote, it barely got out of the Senate Foreign Relations Committee. As the Senate debated, Republican hardliners began to wobble. “Military action, taken simply to save face, is not a wise use of force,” said Senator Marco Rubio. “My advice is to either lay out a comprehensive plan using all of the tools at our disposal that stands a reasonable chance of allowing the moderate opposition to remove Assad and replace him with a stable secular government. Or, at this point, simply focus our resources on helping our allies in the region protect themselves from the threat they and we will increasingly face from an unstable Syria.” In other words, Rubio would not support a modest air strike to impose some costs on a breach of an important international norm because it did not go far enough. The result of this twisted reasoning, and of the failure of the resolution, was the emboldening of Assad. Finally, at the last minute, Obama was saved by Assad’s most important patron. Russian Foreign Minister Sergei Lavrov and Secretary of State John Kerry quickly patched together a plan whereby Syria, for the first time, would declare its chemical weapons stockpiles and allow international inspectors to get them out of the country. Over time, the deal proved worthless. Assad would gas his people again and again, eroding what was once a powerful prohibition on the use of chemical weapons in the twenty-first century. But if the deal did nothing to end the misery of Syria, it did a lot to end the misery of Obama. In 2013, Obama portrayed the bargain as a triumph of diplomacy, which it was — for Putin.

    One of the first foreign policy priorities for Obama after his election was to mend relations with Moscow. This was called the “reset.” Obama was most exercised by transnational threats: climate change, arms control, fighting terrorism, Ebola. He wanted Russia to be a partner. And Russia wanted recognition that it was still a great power.

    After Obama folded on his “red line” in Syria, Putin made his move. Russian forces invaded Ukraine in 2014 to stop a democratic revolution and eventually annexed Crimea. Obama imposed a series of economic sanctions on Russian industries and senior officials, but he declined to arm Ukraine’s government or consider any kind of military response. (He worried more about escalation than injustice.) His administration’s advice to Kiev was to avoid escalation. The following year Obama did not challenge Russia when it established airbases inside Syria. He still needed the Russians for the Iran nuclear deal. By 2016, when the U.S. intelligence community was gathering evidence that Russians were hacking the Democratic National Committee and Hillary Clinton’s campaign, Obama’s White House waited until after the election to punish Moscow. Three weeks before the next president would take the oath of office, Obama announced the expulsion of thirty-five spies and modest sanctions on Russia’s intelligence services. It was a fine example of “responsible statecraft.”

    *

    The thoughtful incoherence of Barack Obama was succeeded by the guttural anarchy of Donald Trump. It was nearly impossible to discern from Trump’s campaign what his actual foreign policy would be if he won. His ignorance of international affairs was near total. He simultaneously pledged to pull America out of the Middle East and to bomb ISIS indiscriminately. He could sound like Michael Moore one minute, thundering that George W. Bush lied America into the Iraq War, and in the next minute like a Stephen Colbert imitation of a right-wing neanderthal, claiming that Mexico was deliberately sending its rapists into our country. And yet there was a theme in Trump’s hectoring confusion. He hearkened back to a very old strain of American politics. One could see it in his slogan “America First,” a throwback to the isolationism of Charles Lindbergh in the 1930s. When Trump asked mockingly what America was getting from its interventions in the Middle East or the protection its troops provided Europe through the NATO alliance, he was unknowingly channeling Senator Robert Taft and his opposition to the Marshall Plan. Past presidents, Republicans and Democrats, understood that the small upfront cost of stationing troops overseas in places such as Korea or Bahrain paid much greater dividends by deterring rivals and maintaining stability. Military and economic aid was a small price to pay for trade routes and open markets. But Trump rejected all of this.

    As president, Trump’s foreign policy has not been altogether catastrophic. (That is faint praise, I know.) He has used force in constructive flashes, such as the drone strike that killed Qassem Suleimani or the air strikes against Syrian landing strips after the regime gassed civilians. He never pulled America out of NATO as he said he would, though he declined to say publicly that America would honor the mutual defense commitments in the treaty’s charter. He pulled out of Obama’s nuclear deal with Iran, a deal whose merits were always a matter of controversy. He began to reverse the spending caps imposed during Obama’s presidency on the Pentagon’s budget. On China, the Trump administration has begun aggressively to target Beijing’s thievery and espionage and takeover of international institutions.

    Most consistently, Trump’s foreign policy has been marked by an amoral transactionalism. Modern presidents of both parties have made bargains with tyrants, but they did so sheepishly, and often they appended talk of human rights to their strategic accommodations. Trump was different. He went out of his way to pay rhetorical tribute to despots and authoritarians who flattered him — Kim Jong Un, Vladimir Putin, Xi Jinping, Viktor Orban, Jair Bolsonaro. When Trump’s presidency began, senior advisers such as General James Mattis and General H.R. McMaster tried to soften, and at times to undermine, his appetite to renounce American leadership in the world. McMaster made the president sit through a PowerPoint presentation about life in Afghanistan before the Taliban to persuade him of the need for a small military surge there. After Trump abruptly announced the withdrawal of the small number of American forces in Syria, his advisers persuaded him that some should stay in order to protect the oil fields. And so it went until most of the first cabinet was pushed out in 2018 and 2019. The new team was more amenable to Trump’s instincts. Trump’s new secretary of state, Mike Pompeo, empowered an envoy to negotiate an American withdrawal from Afghanistan with the Taliban, without including the Afghan government, our ally, in the talks. Instead of undermining Trump’s push to leave the Iran nuclear deal, as James Mattis and Rex Tillerson had done, the president’s new team kept escalating sanctions.

    Trump was erratic. Never has foreign policy been so confusing to anyone outside (and to some inside) the White House. Trump would impetuously agree with heads of state to major policy changes before the rest of his government could advise him of his options. Since Trump shares his internal monologue with the world on Twitter, these lunges became policies, until he would later reverse them just as fitfully. To take one example: the sequence of tweets that announced Trump’s deal in 2019 with Turkey to pull American support for its Kurdish allies in Syria had real consequences, even though Trump would later reverse himself. As the Turkish military prepared to enter an autonomous Kurdish region of Syria, the Kurdish fighters who had bled to defeat ISIS were forced to seek protection from Russia, Iran, and Bashar al Assad.

    During that crisis, Trump tweeted about one of his favorite themes: “The endless wars must end.” For the first fifteen years of the post-9/11 era, that kind of talk would have been heresy for Republicans. A few outliers such as Ron Paul and Rand Paul aside, the party of Bush and Reagan supported what it called a “long war,” a multi-generational campaign to build up allies so they could defeat terrorists without American support. Until very recently, Republicans understood that as frustrating as training local police in Afghanistan and counter-terrorism commandos in Iraq often could be, the alternative was far worse, both strategically and morally. The same was true of American deployments during the Cold War. To this day there are American troops in South Korea and Germany, in part because their very presence deterred adversaries from acting on their own aggressive or mischievous impulses. But Trump disagreed. And he echoed a growing consensus. “No more endless wars” is the new conventional wisdom.


    III

    The Quincy Institute for Responsible Statecraft was founded in 2019 as a convergence of opposites, with money from George Soros’ Open Society Foundations and the Koch brothers. There was one thing about which the opposites agreed, and that was the end of American primacy, and of its consequent activism, in the world. The new think tank hopes to mold the wide but inchoate opposition to “endless wars” into a coherent national strategy.

    On the surface, the Quincy Institute presents itself in fairly platitudinous terms. “The United States should respect established international laws and norms, discourage irresponsible and destabilizing actions by others, and seek to coexist with competitors,” its website says. “The United States need not seek military supremacy in all places, at all costs, for all time.” That boilerplate sounds like the kind of thing one would hear in the 2000s from what were then known as the netroots: wars of choice are bad, international law is good. But there is an important distinction. The progressives who obsessed over the neoconservatives in the Bush years argued that the ship of state had been hijacked. The Quincy Institute argues that the institutions the progressives once sought to protect from those ideological interlopers were themselves in on the heist. The problem is not the hijacking of our foreign policy; the problem is the system that makes our foreign policy in the first place.

    Consider this passage by Daniel Bessner on Quincy’s website: “While there are national security think tanks that lean right and lean left, almost all of them share a bipartisan commitment to U.S. ‘primacy’ — the notion that world peace (or at least the fulfillment of the ‘national interest’) depends on the United States asserting preponderant military, political, economic, and cultural power. Think tanks, in other words, have historically served as the handmaidens of empire.” Bessner is echoing an idea from Stephen Walt, the Harvard professor who is also a fellow at the institute. At the end of The Hell of Good Intentions, which appeared in 2018, Walt called for a “fairer fight within the system,” and recommended the establishment of a broader political movement and the creation of new institutions — a think tank? — to challenge what he perceives as the consensus among foreign policy elites in favor of a strategy of liberal hegemony. American primacy in the world he deemed to be bad for America and bad for the world.

    The Quincy Institute hired the perfect president for such a program. A retired Army colonel and military historian who lost his son in the Iraq War, Andrew Bacevich has emerged as a more literate and less sinister version of Smedley Butler. That name is largely forgotten today, but Butler was a prominent figure in the 1930s: a retired Major General who, after his service to the country, declared that “war is a racket” and that his career as a Marine amounted to being a “gangster for capitalism.” Butler later admitted that he was approached by a cabal to lead a military coup against President Roosevelt, but he remains to this day a hero of the anti-war movement. In 2013, in Breach of Trust, Bacevich presented Butler as a kind of dissident: “He commits a kind of treason in the second degree, not by betraying his country but calling into question officially sanctioned truths.” In this respect, Butler is the model for other retired military officers who dare to challenge official lies. Not surprisingly, Breach of Trust reads like the military history that Howard Zinn never wrote. It is a chronicle of atrocities, corruption, and government lies. Like Bacevich’s other writings, it is a masterpiece of tendentiousness.

    More recently, Bacevich has sought to recast the history of the movement to prevent Roosevelt from entering World War II, known as America First. He has acknowledged that America was correct to go to war against the Nazis, but still he believes that the America Firsters have gotten a bad rap. Until Donald Trump, the America First movement was seen as a cautionary tale and a third rail. When Pat Buchanan tried to revive the term in the 1980s and 1990s, there was bipartisan outrage. After all, America First was led by Charles Lindbergh, an anti-Semite and an admirer of the Third Reich. Bacevich acknowledges this ugly provenance. And yet he chafes at Roosevelt’s judgment that Lindbergh’s movement was promoting fascism. “Roosevelt painted anti-interventionism as anti-American, and the smear stuck,” Bacevich wrote in 2017 in an essay in Foreign Affairs charmingly called “Saving America First.”

    There is a grain of truth in Bacevich’s revisionism. The America First movement was largely a response to the unprecedented horrors of World War I, in which armies stupidly slaughtered each other and chemical weapons were used on a mass scale. And the war was sparked by miscalculations and secret alliances between empires and smaller states in Europe: it lacked the moral and strategic purpose of defeating the Nazis and the Japanese fascists. It is quite understandable that two decades after World War I ended, many Americans would be reluctant to fight its sequel. But Bacevich goes a bit further. In his Foreign Affairs essay, he instructed that “the America First Movement did not oppose Jews; it opposed wars that its members deemed needless, costly, and counterproductive. That was its purpose, which was an honorable one.” But was it honorable? While it is true that in the 1930s major newspapers did a terrible job in covering the Third Reich’s campaign against Jews and other minorities, those persecutions were hardly a secret. Nazi propaganda in the United States was openly anti-Semitic. The war weariness of post-World War I America does not confer nobility on America First’s cause. In a recent interview, Bacevich became testy when asked about that remark. “Come on now,” he said. “I think that the anti-interventionist case was understandable given the outcome of the First World War. They had reason to oppose U.S. intervention. And, again, let me emphasize, their calculation was wrong. It’s good that they lost their argument. I do not wish to be put into a position where I’m going to make myself some kind of a defender for the people who didn’t want to intervene against Nazi Germany.” Good for him.

    That exchange tells us a lot about the Quincy Institute. The think tank’s foreign policy agenda and arguments echo the anti-interventionism of the 1930s. Most of its scholars are more worried about the exaggeration of threats posed by America’s adversaries than about the actual regimes doing the actual threatening. In May, for example, Rachel Esplin Odell, a Quincy fellow, complained that Senator Mitt Romney was overstating the threat of China’s military expansion and unfairly blaming the state for the outbreak of the coronavirus: “The great irony of China’s military modernization is that it was in large part a response to America’s own grand strategy of military domination after the Cold War.” In this, of course, China’s buildup resembles almost everything else: in the Quincy worldview, every menace traces back to American power.

    The institute has hired staff who come out of the anti-neoconservative movement of the 2000s. Here we come to a delicate matter. The anti-neoconservatives of that era flirted with and at times embraced an IR sort of anti-Semitism: the obsession with Israel and its influence on American statecraft. Like the America Firsters, the anti-neoconservatives worried about the power of a special interest — the Jewish one — dragging the country into another war. A few examples will suffice. In 2018, Eli Clifton, the director of Quincy’s “democratizing foreign policy” program, wrote a post for the blog of Jim Lobe, the editor of the institute’s journal Responsible Statecraft, arguing that three Jewish billionaires — Sheldon Adelson, Bernard Marcus, and Paul Singer — “paved the way” for Trump’s decision to withdraw from Obama’s Iran nuclear deal through their generous political donations. It is certainly fair to report on the influence of money in politics, but given Trump’s well-known contempt for the Iran deal, Clifton’s formulation had an odor of something darker.

    Then there is Trita Parsi, the institute’s Swedish-Iranian vice president, who is best known as the founder of the National Iranian American Council, which purports to be a non-partisan advocacy group for Iranian-Americans but has largely focused on softening American policy towards Iran. In 2015, as the Obama administration was rushing to finish the nuclear deal with Iran, his organization took out an ad in the New York Times that asked, “Will Congress side with our president or a foreign leader?” — a reference to an upcoming speech before Congress by the Israeli prime minister Benjamin Netanyahu. The National Iranian American Council’s foray into the dual loyalty canard is ironic considering that Parsi himself has been a go-between for journalists and members of Congress who seek access to Mohammad Javad Zarif, Iran’s foreign minister.

    This obsession with Israeli influence in American foreign policy is a long-standing concern for a segment of foreign policy realists, who believe that states get into trouble when the national interest is distorted by domestic politics — an affliction that is particularly acute in democratic societies, which respect the rights of citizens to make their arguments to the public and to petition the government and to form lobbies. The most controversial instance of the realists’ scapegoating of the domestic determinants of foreign policy was an essay by Stephen Walt and John J. Mearsheimer (both Quincy fellows) that appeared in the London Review of Books in 2006. It argued that American foreign policy in the Middle East has been essentially captured by groups that seek to advance Israel’s national interest at the expense of America’s. “The thrust of US policy in the region derives almost entirely from domestic politics, and especially the activities of the ‘Israel Lobby,’” they wrote. “Other special-interest groups have managed to skew foreign policy, but no lobby has managed to divert it as far from what the national interest would suggest, while simultaneously convincing Americans that US interests and those of the other country — in this case, Israel — are essentially identical.”

    Walt and Mearsheimer backed away from the most toxic elements of their essay in a subsequent book. The essay sought to explain the Iraq War as an outgrowth of the Israel lobby’s distortion of American foreign policy. The book made a more modest claim about the role the lobby plays in increasing the annual military subsidy to Israel and stoking American bellicosity toward Israel’s rivals such as Iran. They also took pains to denounce anti-Semitism and acknowledge how Jewish Americans are particularly sensitive to arguments that present their organized political activity as undermining the national interest. Good for them. But the really important point is that events have discredited their claims. The all-powerful “Israel Lobby” was unable to wield its political influence to win the fight against Obama’s Iran deal. It was not able to stop Obama’s public pressuring of Israel to accept a settlement freeze. Decades earlier, it had not been able to thwart Reagan’s sale of AWACS aircraft to the Saudis. Anyone who believes in an omnipotent AIPAC is looking for conspiracies.

    *

    Walt himself, and the Quincy Institute, now have a much more ambitious target: the entire foreign policy establishment. This is the central thesis of The Hell of Good Intentions — that the machinery of American foreign policy is rigged. It will always favor a more activist foreign policy, a more dominant military, and liberal hegemony. All the pundits, generals, diplomats, and think tank scholars in Washington are just too chummy with one another. A kind of groupthink sets in. (This never happens at the Quincy Institute.) The terms of foreign policy debate are narrowed. And analysts who seek an American retrenchment from the world are shunted aside.

    To prove this point, Walt spends several pages observing how former government officials land jobs at prestigious think tanks and get invited to speak at fancy dinners. The result is that no one is ever held to account for their mistakes, while the courageous truth-tellers are ignored and isolated. (At times the book reads like a very long letter by a spurned friend asking why he never got an invitation to last month’s retreat at Aspen.)

    To illustrate this desperate problem, Walt turns to the annual conference for the World Affairs Councils of America. He ticks off speakers from past years — Susan Glasser, Vali Nasr, Paula Dobriansky — and observes, “These (and other) speakers are all dedicated internationalists, which is why they were invited.” So whom does Walt want the World Affairs Councils of America to invite? “Experts with a more critical view of U.S. foreign policy, such as Andrew Bacevich, Peter Van Buren, Medea Benjamin, Glenn Greenwald, Jeremy Scahill, Patrick Buchanan, John Mueller, Jesselyn Radack, or anyone remotely like them.”

    There is so much to be said about all of these figures. Patrick Buchanan’s ugly isolationist record is well known. But consider, at the other end of the ideological spectrum, Medea Benjamin. She is the founder of an organization called Code Pink, known mostly for disrupting public meetings, which last year briefly took control of the Venezuelan embassy in Georgetown to prevent representatives of the country’s internationally recognized interim anti-Maduro government from taking over. A group of American anti-imperialists were defending the prerogatives of a dictator who had sold off his country’s resources to China and Russia while his people starved. People like Benjamin are not dissidents. They are stooges.

    In this way the hard-nosed centrist post-Iraq realists converge with the radicals of the left even as they converge with the radicals of the right. This is realism in the style not of Henry Kissinger but of Noam Chomsky. As in Chomsky, the aggression of America’s adversaries is explained away as responses to American power. And as in Chomsky, the explanation often veers into apologies for monsters. Consider “Why the Ukraine Crisis Is the West’s Fault,” an essay by Mearsheimer in Foreign Affairs in 2014. There he argues that the expansion of NATO and the European Union, along with American democracy-promotion, created the conditions under which the Kremlin correctly assessed that its strategic interests were threatened in Ukraine. And after street demonstrations in Kiev resulted in the flight of the Ukrainian president, Viktor Yanukovych, to Russia, Putin had little choice but to snatch Crimea from his neighbor. “For Putin,” the realist writes, “the illegal overthrow of Ukraine’s democratically elected and pro-Russian president — which he rightly labeled a ‘coup’ — was the final straw.” Of course the heroic agitation of the Maidan was about as much of a coup as the Paris Commune of 1871. But like Putin, Mearsheimer argues that this “coup” in Ukraine was supported by Washington. His evidence here is that the late Senator John McCain and former assistant secretary of state Victoria Nuland “participated in antigovernment demonstrations,” and that an intercepted phone call broadcast by Russia’s propaganda network RT revealed that Nuland supported Arseniy Yatsenyuk for prime minister and was positive about regime change. “No Russian leader would tolerate a military alliance that was Moscow’s mortal enemy until recently moving into Ukraine,” Mearsheimer writes. “Nor would any Russian leader stand idly by while the West helped install a government there that was determined to integrate Ukraine into the West.”

    What Mearsheimer leaves out of his essay is that Yanukovych campaigned for the presidency of Ukraine on a promise to integrate his country into the European Union, an entirely worthy goal. But under Russian pressure he violated his pledge with no warning, and his citizens became enraged. Nor does Mearsheimer tell his readers about the profound corruption discovered after Yanukovych fled. Ukrainians did not rise up because of the imperialist adventures of Victoria Nuland or the National Endowment for Democracy. They rose up because their elected president tried to bamboozle them by promising to join Europe only to join Russia. Mearsheimer also makes no mention of the Budapest Memorandum of 1994, in which Russia, America, and the United Kingdom gave security assurances to Ukraine to protect its territorial integrity in exchange for Ukraine relinquishing its Soviet-era nuclear weapons. The fact that Putin would so casually violate Russia’s prior commitments should give fair-minded observers reason to fear what else he has planned. But Mearsheimer is not bothered by Putin’s predations. Putin, Mearsheimer writes, knows that “trying to subdue Ukraine would be like swallowing a porcupine. His response to events there has been defensive, not offensive.”

    Mearsheimer’s excuses for Putin and his failure to grasp the meaning of Ukraine’s democratic uprising in 2014 illuminate a weakness in his broader theory of international relations. In Mearsheimer’s telling, the only meaningful distinction between states is the amount of power they wield. States, he writes in his book The Great Delusion, “are like balls on a billiard table, though of varying size.” He goes on to say that “realists maintain that international politics is a dangerous business and that states compete for power because the more power a state has, the more likely it is to survive. Sometimes that competition becomes so intense that war breaks out. The driving force behind this aggression is the structure of the international system, which gives states little choice but to pursue power at each other’s expense.” This is not a novel idea. Thucydides relates what the Athenians told the Melians: “the strong do what they can and the weak suffer what they must.” For Mearsheimer, it does not matter that twenty years before its invasions of Crimea and Ukraine Russia had pledged to respect and protect Ukraine’s territorial integrity. Russia was strong and Ukraine was weak. Russia’s perception of the threat of an enlarged European Union mattered, whereas the democratic choice of Ukrainians did not. Realists are not moved by democratic aspirations, which are usually domestic annoyances to high strategy. Nor are they bothered by the amorality of their analysis of history.

    As for American behavior around the world, the Thucydidean framework describes it, but — unlike Russian behavior — does not extenuate it. For the Quincy intellectuals, there is no significant difference between America and other empires. America is not exceptional. It is only a larger billiard ball. It stands, and has stood, for nothing more than its own interests. But this equivalence is nonsense. Important distinctions must be made. When France booted NATO’s headquarters out of Paris in the middle of the Cold War, Lyndon Johnson did not order an army division to march on Paris. Trump’s occasional outbursts aside, America does not ask countries that host military bases to pay tribute. After toppling Saddam Hussein, America did not seize Iraq’s oil. Compare this to the Soviet Union’s response to a dockworkers’ strike in Poland, or for that matter to the Dutch East India Company. These realists do not acknowledge the value of preserving the system of alliances and world institutions that constitute the American-led world order, or the fact that they have often enriched and secured America’s allies, and at times even its adversaries. In this respect they are not only anti-interventionists, they are also isolationists, in that they believe that the United States, like all other states, naturally and in its own best interest stands alone.

    All of this is emphatically not to say that the American superpower has always acted with prudence, morality, and benevolence. There have been crimes, mistakes, and failures. There have also been national reckonings with those crimes, mistakes, and failures. No nation state has ever not abused its power. But behind these reckonings lies a larger historical question. Has America largely used its power for good? A great deal depends on the answer to that question. And the answers must be given not only by Americans but also by peoples around the world with whom we have (or have not) engaged. The valiant people on the streets of Tehran in 2009 who risked their lives to protest theocratic fascist rule shouted Obama’s name — were they wrong? About Obama they were certainly wrong: while they were imploring him for help he was brooding about American guilt toward Mossadegh. But were they wrong about America? And the Ukrainians in the Maidan, and the Egyptians in Tahrir Square, and the Kurds, and the women of Afghanistan, and the masses in Hong Kong, and the Guaido movement in Venezuela, and the Uighurs in their lagers — why have they all sought American assistance and intervention? Perhaps it is because they know that the American republic was founded on a sincere belief that the freedom enjoyed by its citizens is owed to all men and women. Perhaps it is because they have heard that the United States created, and stood at the helm of, a world order that has brought prosperity to its allies and its rivals, and even sometimes came to the rescue of the oppressed and the helpless. The case can certainly be made that America in its interventions damaged the world — the anti-interventionists make it all the time — but the contrary case is the stronger one. And contrary to the anti-interventionists, there are many ways to use American power wisely and decisively: the choice is not between quietism and shock and awe. No, the people around the world who look to us are not deluded about our history. They are deluded only about our present.

    American exceptionalism was not hubris. It was a statement of values and a willingness to take on historical responsibility. Nor was it in contradiction to our interests, though there have been circumstances when we acted out of moral considerations alone. It goes against the mood of the day to say so, but we must recover the grand tradition of our modern foreign policy. It is not remotely obsolete. Reflecting on the pandemic last spring, Ben Rhodes declared in The Atlantic, very much in the spirit of his boss, that the crisis created an opportunity to reorient America’s grand strategy: “This is not simply a matter of winding down the remaining 9/11 wars — we need a transformation of what has been our whole way of looking at the world since 9/11.” Rhodes said that he still wants America to remain a superpower. He proposed new national projects to fight right-wing nationalism, climate change, and future pandemics — all excellent objectives. He also questioned why America’s military budget is several times larger than its budget for pandemic preparedness or international aid. But what if the world has not entirely changed, pandemic and all? What if the world that awaits us will be characterized by great power rivalry and persistent atrocities? What if corona does not retire Westphalia?

    If you seek to know what the world would look like in the absence of American primacy, look at the world now. Hal Brands and Charles Edel make this point well in The Lessons of Tragedy: “It is alluring to think that progress can be self-sustaining, and that liberal principles can triumph even if liberal actors are no longer preeminent. To do so, however, is to fall prey to the same ahistorical mindset that so predictably precedes the fall.” And so the first task of those seeking to counter American unexceptionalism is to resist the urge to believe that the past is entirely over, the urge to reject wholesale the old ends and the old means and therefore to scale back America’s commitments to allies and decrease the military budget. Even when we are isolationist we are not isolated. There are threats and there are evils, and whatever should be done about them it cannot be that we should do little or nothing about them. We need to become strategically serious.

    It was as recently as 2014 that Obama dismissed ISIS as a junior varsity team, and even he was forced to reconsider his narrative that the killing of Osama bin Laden was the epitaph for the 9/11 wars, when a more virulent strain of Islamic fascism emerged in the Levant. In the summer of 2014, he sent special operations forces back to Iraq and began the air power campaign against ISIS that continued through 2019. Would ISIS have come into being if America had kept a small force inside Iraq after 2011 and continued to work quietly with Iraq’s government to temper its sectarian instincts against the Sunni minority? It is impossible to know. What is known, though, is that in 2011 American officers and diplomats on the ground who had worked with Iraq’s security forces warned that without some American presence in the country, there was a risk that the army would collapse; and it did. This same cautionary lesson also applies to Afghanistan. No serious person should trust the Taliban’s promise that it will fight against al Qaeda if it were to take back power. And while it is true that the Afghan government is corrupt and often hapless, foreign policy consists in weighing bad and worse options. The worse option for Afghanistan is a withdrawal that leaves al Qaeda’s longstanding ally a fighting chance to consolidate power, to turn the country again into a safe haven for international terrorism, and again to oppress its people. This is not idle speculation.

    The continuing battle against terrorism, which is a continuing threat, must not blind us, as it did George W. Bush, to the new era of great power rivalry. Americans must surrender the pleasant delusion that China and Russia will mature into responsible global stakeholders, or that outreach to Iran will temper its regional ambitions. In this respect Fukuyama was wrong and Huntington and Wieseltier were right. The pandemic has shown how China hollows out the institutions of the world order that so many had hoped would constrain and tame it. After prior pandemics, the United States invested more in its partnership with China and the World Health Organization, reasoning that as China industrialized it needed assistance to track new diseases before they were unleashed on the rest of the world. That system failed in late 2019 and 2020 not because China lacked the public health infrastructure to surveil the coronavirus. It failed because China is a corrupt authoritarian state that lied about the threat and punished the journalists, doctors, and nurses who tried to warn the world about it. This suppression of the truth cost the rest of the world precious time to prepare for what was coming. It turns out that states are not just billiard balls of varying sizes. If China were an open society, it would not have been able to conceal the early warnings. The nature of its regime is an important reason why Covid-19 was able to mutate into a global pandemic.

    As former Soviet dissidents or Serbian student activists can attest, tyrannies appear invincible right up to the moment they topple. This does not mean that America should always use its power to speed this process along. Nor does this mean that America should lead more regime change wars like Iraq. The best outcome for countries such as Iran, China, and Russia is for their own citizens to reclaim their historical agency and take back their societies and their governments from their oppressors. But when moments arise that reveal fissures and weaknesses in the tyrant’s regime, when there are indigenous democratic forces that are gaining ground, America must encourage and assist them. This is a matter of both strategy — the friendship of peoples is always better than the friendship of regimes — and morality. When opportunities for democratic change emerge in the world, the wiser strategy is to support the transition and not save the dictator. Again, this is not a license to invade countries or foment military coups. It is rather a recognition that any arrangements America makes with despots will at best be temporary. America’s true friends are the states that share its values. But the triumph of the open society is not at all preordained. It requires historical action, a rejection of narcissistic passivity, in an enduring struggle. This historical action can take many forms, and it is not imperialism. It is the core of the republic’s historical identity. It is responsible statecraft.

    Ancient Family Lexicon, or Words and Loneliness

    “Whoever knows the nature of the name… knows the nature of the thing itself,” Plato observed in his Cratylus. To know is a complex verb, difficult but rich. According to the dictionary, it means “to have news of a thing,” “to know that it exists or what it is.” In classical languages, the concept of knowing was linked with being born. Thus by coming into the world others have “news” about us: their recognition of us is part of our birth.

    Knowing the roots of the words at the basis of human relationships permits us to revive a world in which individuals existed as men and women or boys and girls with no middle ground. I will explain what that means. The ancestors of these appellations (woman, girl, man, boy) denoted a particular way of being that subsequent cultures have lost. As the meaning of the words changed, the beings themselves changed. Back then, before these semantic developments, it was understood that the condition of boyhood was synonymous with immaturity, and the divide between childhood and adulthood had to be put to the test of life. Moreover, youth and old age were not personal categories but attitudes of soul and mind. What follows is a sort of Indo-European family lexicon, and a portrait of a lost world.

    Mother
    The word comes from the Indo-European mater, formed by the characteristically childish elementary root ma– and the suffix of kinship –ter. In Greek it is mētēr, in Latin mater, in Sanskrit mātar, in Armenian mayr, in Russian mat, in German Mutter, in English mother, in French mère, in Italian, Spanish and Portuguese madre, in Irish máthair, in Bosnian majka.

    Father
    The word comes from the Indo-European pater, formed by the elementary root pa- and the suffix of kinship –ter. In Greek it is patèr, in Latin pater, in Sanskrit pitar, in ancient Persian pita, in Spanish, Italian and Portuguese padre, in French père, in German Vater, in English father.

    These terms are so ancient, so primordial that they have survived the history of languages and the geography of peoples. Since they were first uttered, these words have consistently been among the first spoken by human beings. They are solid words, like a brick house, like a mountain. It is our fathers and our mothers who teach us first to name things. It is natural that a child should first articulate ma– or pa-. There is no child who does not seek to be loved and held, who is not in need of care and protection from a mother and father. And we never forget these words; we hold them inside ourselves all the way to the end. Studies on Alzheimer’s and senile dementia patients who have spoken a second language throughout their lives, a language different from that of their country of origin, show that they refer to dear ones using their original language. Native language. Mother-tongue.

    Human
    The classical etymology of the word man — meaning a human being — comes from the Latin homo, which dates back to the Indo-European root of humus, “earth,” a result of a primordial juxtaposition, perhaps even opposition, between mortal creatures and the gods of heaven. In the Bible, the Creator infuses earth with soul, creating the human compound. In French the term became homme, in Spanish hombre, a root that disappears in the Germanic languages, where we have man in English and Mann in German. The usage may now seem archaic, but it contains a universal idea.

    The Greek ànthrōpos has a disputed etymology. According to some, it is linked to the words anō, “up,” athréo, “look,” and òps, “eye,” a very fine combination of roots that indicates the puniness of men faced with the immensity of the divine and bound to raise their eyes to heaven from the ground. According to others, it is a descendant of the term anèr, “male,” “husband,” corresponding to the Latin vir. In both cases, the condition of “adult man” is colored by the concepts of strength, energy, ardor — of overcoming childhood through tests of courage, which reverberate in the Latin and Greek words vis and andreìa.

    Thus we have the universal concept of a human being who is small, humble, tied to the earth on which she has her feet firmly planted until the day of her death but not entirely material, puny but bent towards heaven – and also strong, therefore heroic, because she has succeeded in enlarging herself. In order to transition from girlhood to womanhood and from boyhood to manhood, one must pass a test. Through this test — or tests: the trials of a human life — girls and boys prove the measures of their strength, tenacity, and courage and in so doing become adults. Once the test is past, their nature itself is forever altered as their name is changed — no middle ground from girl to woman, from boy to man.

    Son, Daughter
    “Son” is connected with the Latin filius, “suckling,” linked to the root fe-, “sucking,” an affective and infantile term typical of the Indo-European –dhe, “to suckle,” which is found today in some Germanic languages as in the English word daughter or in the Bosnian one dijete, “child.”

    The further we move away from the linguistic essence, from the primeval universality of the Indo-European roots, the more complicated things become, and the more the words grow apart and differ from Romance languages to Germanic ones. The notion of “boy” or “girl” as adolescents still unprepared for adult life does not surface until the fourteenth century. This concept is a foreign loan that dates back to the late Middle Ages and derives from the Arabic raqqās, meaning “gallop,” or “courier,” or more specifically “boy who carries letters,” a term of Maghrebian origin probably spread from Sicily through port exchanges in the Mediterranean, which was so rich in Arabisms. (We may note that this etymology has been made irrelevant by the conditions of modern work, in which many adults are treated as boys who carry letters, that is, are employed in infantilizing jobs that do not make full use of their adult skills.)

    Young, Old
    “Young” is a very pure and powerful word, and an imprecise one, not tied to chronological age, in the same way that “old” is not. It clearly comes from the Indo-European root yeun-, from which the Sanskrit yuvā, the Avestan yavan-, the French jeune, the English young, the Latin iuvenis, the Spanish joven, the Portuguese jovem, the Romanian juve, the Russian junyj, the Lithuanian jaunas, the German jung. “Young” is the calf or foal tenaciously striving to balance on thin and trembling legs, trying and trying again, falling ruinously to the ground until it stands up, bleeding and covered with straw — but ready to go, to walk, to wander. Youth is strength, a drive, an arrow already fired.

    At the opposite extreme of the life cycle is the old, the elderly, which means worn out, weary, weak, too tired to move, to go further — like a car worn down by too many roads, a car that suddenly stops, the engine melted. Elderly is the worn sole of a shoe that has walked too far. It is the hands of the elderly, like cobwebs that have caught too much wind in life. This idea comes from the Latin vetulus, a diminutive of vetus, which means “used,” “worn out,” “old.” In French it is called vieil, in Spanish viejo, in Portuguese velho, in Romanian vechi. Old age is an attitude and not an age, it means stopping, even surrender. The string of the bow collapsed, the quiver empty. 

    Love
    Love is a pledge, as the etymology shows. The notion of betrothal, the ideas of bride and bridegroom, derive from the Latin sponsum and sponsam, from the past participle of the verb spondeo, which means “to promise,” corresponding to the Greek spèndō. In French it is called époux and épouse, in Spanish and Portuguese esposo, esposa. The original meaning of those words lay in the idea of the indissolubility of the promise of love. Once made, it cannot be revoked. The trust and the faith expressed in the promise were so sacred that they were celebrated by the couple with a libation to the gods.

    In the Romance languages, however, the meaning of that promise has slipped into the future, to the rite that has yet to happen, in the word fiancé, which derives from fides in Latin, which means “faith.” It is this faith in the promise of love, in its futurity, that gives strength to lovers such as Renzo and Lucia, made immortal by Alessandro Manzoni in I promessi sposi, who did everything possible to fulfill that promise of love contained, primordially, in the definition of “betrothed.”

    Mom.

    As I mentioned, the word comes from the Indo-European root ma-, a universal utterance of affection, which has its basis in the elementary sequence ma-ma. This childish word has identical counterparts in all Indo-European languages, a sound of affection that extends beyond borders in the welter of different languages around the world.

    Memory is often full of italicized passages, experiences that remain fresh despite the passage of time, but sometimes deletions overshadow the italics. For a long time I had forgotten the sound of the word mom. I could not say it anymore because I had not said it out loud for over fifteen years. I had even stopped thinking it.

    Stabat mater, “the mother stood” next to the son, reads a thirteenth-century religious poem attributed to Jacopone da Todi, which later became universal in the Christian liturgy to indicate the presence of the sad mother next to the suffering son. Once, beside me, the daughter, there stood my mother. We celebrated our birthday on the same day, she and I: born premature, I was, as long as we both lived, her birthday present. When I was a child we always had a double party for the “women,” as my father called us. Since she died, every birthday of mine has been cut in half. And since then I have never been sure of exactly how old I am.

    Every January I get closer and closer to the age my mother was when she died. Meanwhile, like the tortoise in Zeno’s paradox, I move further and further away from that lost, skinny, lonely girl who was between the third and the fourth year of high school when her mother died of a cancer as swift as a summer: she fell ill in June and passed in September, on the first day of school. For years I never told anyone of my early loss; it was one of my surgical choices. The silence gave me relief from the empty words of the others: poor girl, so young. I discovered a new space inside me, a sorrow that I did not know before and could now explore, unseen, unheard. I was an orphan.

    It seems impossible to admit it now, like all the admissions of the “imperfect present perfect” that we are, but there was a long period in which I practically stopped talking. I am fine was the only sentence in my stunted girlish vocabulary. Not until I was seventeen did I begin to understand the value that the ancients attributed to words — and I began to respect them in silence with an uncompromising loyalty, learning to say little and to keep almost everything quiet.

    After high school I moved to Milan, enrolled at the university, and started a new life, which I call my second one. For years I never said anything to the people I met, to my friends, to my boyfriends, about my mother’s death. As a daughter I was mute. Anyway, almost nobody ever asked me. My silence was unchallenged. And then, with the publication of my first book, in which I shared my passion for ancient Greek, my third life began — my linguistic life, the era of saying — the advent of the words that I use to make everything real, especially death.

    I remember the exact moment that my verbal mission, my reckoning with mortality through language, started. I was presenting my book to the students in a high school in Ostuni when, at question time, a sixteen-year-old boy asked me, with the frankness of those who believe that I must know the most intimate things in the world because I wrote a book on Greek grammar, “Why in Greek is a human being also called brotòs, or destined to die?” “Because death is part of life,” I said, almost without thinking about it. I was disconcerted by the rapidity of my response: I already knew the answer, even if I had not read it in any book or treatise. I reminded myself that I had no need of a book to know this. She had died; I had lived it. And so on that day I reclaimed the first word that I uttered in my life, like so many of the women and men who have come and will come into the world and have gone and will go out of it. They gave it back to me, those high school boys. I started to say mom again.

    My mother, mine, who went away a long time ago and whom I resemble so much, the one who taught me my first words.

    The ancients believed that there was a perfect alignment between the signifier and the signified, between word and meaning, between name and reality, owing to the power of naming, to the descriptive force of a word to denote a thing.

    The Greek adjective etymos means “true,” “real,” from which the word “etymology” was later derived. It was coined by the Stoic philosophers to define the practice of knowing the world through the origin of the words that we use — the words that make us what we are. I fell in love with the strange study of etymology in high school, and never gave up trying to understand the world according to it, to squeeze what surrounds me out of the language that surrounds me — notwithstanding my friends’ teasing that I cannot say anything without a reference to Greek or Latin.

    Many centuries later, taking up a thought of Justinian, Dante remarked in the Vita Nuova that nomina sunt consequentia rerum, “names are consequences of things” — that is, words follow things, they are upon them, they adhere to them, they reveal reality. Reality’s debt to language is very great. Words are the gates to what is. And to what is not: the opposite is also true, that if something has no name, or is not articulated in thought or speech, then it is not there. Silence about a thing does not mean that it is not real, but without a name and without words it is unrecognized and so, in a sense, not here, not present, now and now and now again, among us.

    Much that cannot now be said was once certainly said, about things that were once here but are gone, about a reality that has been lost. Dust.

    Two years ago I read an article in The New York Times that left me with such uneasiness that I was prompted to look more deeply inside myself and the people around me. The journalist declared that these first years of the new millennium are the “era of anxiety.” “The United States of Xanax,” he called the present era in his country’s history, after the most famous pill among the anxiolytics, whose rate of use in the population, children included, runs to double digits, and whose cost at the local pharmacy is slightly higher than the price of an ice cream and slightly less than a lunch at McDonald’s. Depression — that disease of the soul that until the twenties of the last century was considered as incurable, as inconsolable, as its name, melancholia — is today no longer fashionable, said the Times. It has been usurped. The years of bewilderment in the face of the abyss sung about by Nirvana — and which led to the suicide of Kurt Cobain — are over. Instead we suffer from a different kind of disease, an anxiety that makes us disperse ourselves busily, and scatter ourselves in the name of efficiency, so as not to waste time but instead to manage it frantically. And as we strive not to lose time, we lose ourselves.

    The author of the article cited the case of Sarah, a 37-year-old woman from Brooklyn working as a social media consultant who, after having informed a friend in Oregon that she was going to visit her over the weekend, was seized by worry and fear when her friend did not reply immediately to her email. A common experience, perhaps: how many times do we fear that we have hurt a loved one without knowing exactly how? Is such worry a sincere concern about the other, or is it a narcissistic, self-focused guilt? How often are we out of breath as if we were running when in fact we are standing still?

    But Sarah took her worry to an uncommon extreme. Waiting for the answer that was slow to arrive and that presaged her worst fear, she turned to Twitter and her 16,000 followers, tweeting, “I don’t hear from my friend for a day — my thought, they don’t want to be my friend anymore,” adding the hashtag “#ThisIsWhatAnxietyFeelsLike.” Within a few hours, thousands of people all over the world followed her example, tweeting what it meant for them to live in a state of perpetual anxiety, prisoners of a magma of indistinct, inarticulate emotions. At the end of the day, Sarah received a response from her friend: she had simply been away from her house and had not read the email. She would be more than happy to meet her, she had been hoping to see her for so long. A few days later Sarah remarked without embarrassment to journalists who were intrigued by the viral phenomenon: “If you are a human being who lives in 2017 and you are not anxious, there is something wrong with you.”

    Is that really so? Must we surrender to this plague of anxiety? Are we supposed to forget what we know — that friendship is measured in presence and memory, and not in the rate of digital response or the speed of reply? Are we required to infect our most significant relationships with the spirit of highly efficient customer service? Is it a personal affront if a loved one or a friend allows herself half a day to live her life before attending to us? Have we so lost the art of patience that we must be constantly reassured that we have not been abandoned? Are we living out of time, out of our time, if we do not agree to be prisoners of anxiety? Must we conform and surrender and live incompletely, making others around us similarly incomplete?

    I think not. It is perverse to regard anxiety as an integral and indispensable part of our life and our contemporaneity. It is difficult to admit, especially when we are unhappy, but we come into the world to try to be happy. And to try to make others happy. 

    Sarah may have suffered from an anxiety disorder, a serious illness that required appropriate treatment, or perhaps, as she later admitted, she simply felt guilty because, too busy with her work, she had not communicated with her friend for months and was now embarrassed about her absence, about suddenly making herself heard. When we abdicate the faculty of speech, we can only reconstruct the thoughts and feelings of others by means of clues. Often we interpret them incorrectly. Silence confuses us.

    I was once like that. There was a time when anyone could read the words senza parole — “speechlessness” — on my wrist. It was the expression that I got tattooed on my skin when I lost my mother: I can’t say a word, I don’t want to speak. It was my first tattoo, an indelible warning whenever someone held out his hand to help me. I pushed away from everyone after my mother died, especially from myself. I even dyed my hair black so as not to see in the mirror a reflection which resembled the mother I no longer had.

    But “speechlessness” is now the word I hate most, because I understood later, much later, that the words you need to say are always available to you, and you have to make the effort to find them. Just as Plato said, words have the power to create, to form reality — real words, which have equally real effects on our present. As Sarah’s sad story reveals, the absence of words is the absence of reality. Without words there is no life, only anxiety, only malaise.

    I covered up that tattoo in Sarajevo, a few days before my first book was published, because I had finally found my words. When people smile at the black ink stain that wraps my right wrist like a bracelet, I smile too, because only I know what is underneath, the error that was stamped on my flesh that I have now stamped out. How much life was born after the muzzle was destroyed!

    Whatever production of ourselves we stage, there will always be a little detail — a precarious gesture, a forced laugh, an uncertainty, an imbalance — that exposes the inconsistency between what we are doing and what we really want to do.

    We are not films, there is no post-production in life, and special effects lose their luster quickly. We are perpetually a first version, opera prima, drafts and sketches of the tragedy or comedy of ourselves, as in that moment at sunset in Syracuse or Taormina when the actors entered the scene to begin the show.

    Today we all live entangled in a bizarre situation. We have the most immense repository of media in human history and we no longer know what or how or with whom to communicate. I am convinced that we have never before felt so alone. The reason is not that we are silent. Quite the contrary. We talk and talk and talk, until talking exhausts us. But the perpetual cacophony allows us to ignore that we communicate little of substance. We tend to say the bare minimum, to speak quickly and efficiently, to abbreviate, to signal, to hide, to be always easy and never complex. We seem, simultaneously, afraid of being misunderstood and afraid of being understood. The human act of saying has become synthetic, a constant pitch, a transactional practice borrowed from business in which we must persuade our interlocutors in just a few minutes to commit everything they have. Our speech is an advertisement, a performance. Joy is a performance, pain is a performance — and a speedy one. If we do not translate our sentiments into slogans and clichés, graphics and “visualizations,” if we do not express ourselves in the equivalents of summaries, slides, and abstracts, if our presentation of our feelings or our ideas exceeds a commonly accepted time limit (reading time: three minutes), then we fear that nobody will have the patience to listen to us.

    We have swapped the infinity of our thoughts for the stupid finitude of 280 characters. We send notices of our ideas and notifications of our feelings, rather like smoke signals. Is there anything more like a smoke signal than Instagram stories, which are similarly designed to disappear? 

    Brevity is now the very condition of our communication. We behave like vulgar epigrammatists, electronically deforming the ancient art of Callimachus and Catullus. We condense what we have to say into each of the many chats on which we try desperately to make ourselves heard by emoticons and phrases and acronyms shot like rubber bullets that bounce here and there as in an amusement park. We refuse subordinate clauses, the complicated verbal arrangement — appropriate for the complexity of actual ideas and feelings — known as hypotaxis, fleeing from going hypò, or “below” the surface, and preferring instead to remain parà, or “beside,” on the edge of the parataxis, the list of the things and people we love.

    We refuse to know each other and in the meantime we all talk like oracles.

    It is a fragile paradox, which should be acknowledged without irony (that hollow armor) and which demands love rather than bitter laughter: the less we say about ourselves, the more we reveal about ourselves. Only we do it in a skewed, precarious way. And we do it deceptively, even treasonously.

    Our brevity is only a postponement of what sooner or later will be expressed, but in a twisted way. Surely others have observed the tiny breakdowns, the personal explosions that plague any person forced to live in a perpetual state of incompleteness. Have you never seen someone who, finding herself without words, ends up screaming and madly gesticulating? Everywhere we end up sabotaging the image of perfection that we impose on ourselves with small, miserable, inhuman actions. An unjustified fit of anger on a train: a wrong seat, a suitcase that doesn’t fit, a crying baby, a dog, an insult at the traffic light, and suddenly we are hurling unrepeatable shrieks out the window before running away like thieves. Or perhaps you have observed another symptom of this unhealthy condition: anxious indecision — an unnerving slowness to order at the restaurant, you choose, I don’t know, I’m not sure, maybe yes, of course not, in front of a bewildered waiter, while we collapse as if the course of our whole life depended on the choice of a pizza. 

    Once upon a time, revolutions were unleashed to obtain freedom from a master. Today the word “revolution” is thrown around in political discourse, but in our inner lives it makes us so afraid that we prefer to oppress ourselves, to renounce the treasures of language and the strengths they confer. And so silence has become our master, imprisoning us in loneliness. A noisy silence, a busy loneliness. The result is a generalized anxiety that, when it explodes, because it always explodes sooner or later, makes us ashamed of ourselves.

    When we give our worst to innocent strangers, we would like immediately to vanish, to erase the honest image of ourselves unfiltered. We tell ourselves that it was only what we did there — on the subway at rush hour when an old lady crowded us with her shopping bags, or in the line at the post office, annoyed because we lost our place while we were fiddling with the phone or with a post on Facebook in which we commented on something about which we do not care and about which we have nothing to say because there is nothing to say about it. That is not who we really are. It was a mistake. It was not representative — or so we tell ourselves.

    If we are ashamed, if we want to disappear after these common eruptions, it is for all that we have not done, for all that we have not said, to these strangers and to others we have encountered before. By remaining silent, or by speaking only efficiently, before the spectacle of life, without calling anything or anyone by name, without relishing descriptions, not only do we not know things, as Plato warned, but we do not even know ourselves.

    Who are we, thanks to our words?

    Futilitarianism or To the York Street Station

    Wednesday, April 8th…a date etched in black for socialists and progressives, marking the end of a beautiful fantasy. It was on that doleful day that Senator Bernie Sanders — acknowledging the inevitable, having depleted his pocketful of dreams — announced the suspension of his presidential campaign. It was the sagging anticlimax to an electoral saga that came in like a lion and went out with a wheeze. For months the pieces had been falling into place for Sanders to secure the Democratic nomination, only to fall apart in rapid slow motion on successive Super Tuesdays, a reversal of fortune that left political savants even more dumbstruck than usual. Taking to social media, some of Sanders’ most fervent and stalwart supporters in journalism, punditry, and podcasting responded to the news of his withdrawal with the stoical grace we’ve come to expect from these scarlet ninja. Shuja Haider, a high-profile leftist polemicist who’s appeared in the Guardian, The Believer, and the New York Times, tweeted: “Well the democratic party just officially lost the support and participation of an entire generation. Congratulations assholes.” (On Twitter, commas and capital letters are considered optional, even a trifle fussy.) Will Menaker, a fur-bearing alpha member of the ever-popular Chapo Trap House podcast (the audio clubhouse of the self-proclaimed “dirtbag left”), declared that with Bernie out of the race, Joe Biden “has his work cut out for him when it comes to winning the votes of a restive Left that distrusts and dislikes him. It’s not impossible if he starts now by sucking my dick.” Others were equally pithy.

    It fell upon Jacobin, the neo-Marxist quarterly and church of the one true faith, to lend a touch of class to the valedictory outpourings. Political admiration mingled with personal affection as it paid homage to the man who had taken them so far, but not far enough. On its website (the print edition is published quarterly) it uncorked a choral suite of tributes, elegies, and inspirational messages urging supporters to keep their chins up, their eyes on the horizon, their gunpowder dry, a song in their hearts: “Bernie Supporters, Don’t Give Up,” “We Lost the Battle, but We’ll Win the War,” “Bernie Lost. But His Legacy Will Only Grow.” In this spirit, the magazine’s editor and founder, Bhaskar Sunkara, author of The Socialist Manifesto: The Case for Radical Politics in an Era of Extreme Inequality, conducted a postmortem requiem on YouTube with his Jacobin comrades processing their grief and commiserating over their disappointment. Near the end of the ceremony, Sunkara declared that Bernie’s legacy would be as a moral hero akin to Martin Luther King, Mother Jones, and Eugene V. Debs. Which offered a measure of bittersweet consolation, but was not what Sunkara had originally, thirstily desired. “I wanted him to be fucking Lenin. I wanted him to take power and institute change.” But the Bernie train never reached the Finland Station, leaving the Jacobins cooling their heels on the platform and craning their necks in vain.

    Politically and emotionally they had banked everything on him. “Socialism is the name of our desire,” Irving Howe and Lewis Coser had famously written, and for long fallow seasons that desire lay slumbrous on the lips until awakened by Bernie Sanders, the son of Jewish immigrants from Poland, the former mayor of Burlington, Vermont, the junior senator of that state, and lifelong champion of the underdog. Where so many longtime Washington figures had been led astray by sinecures, Aspen conferences, and unlimited canapes, Sanders had been fighting the good fight for decades without being co-opted by Georgetown insiders and neoliberal think tanks, like a protest singer who had never gone electric. He might not be a profound thinker or a sonorously eloquent orator (on a tired day he can sound like a hoarse seagull), and his legislative achievement may be a bit scanty, but his tireless ability to keep pounding the same nails appealed to youthful activists who had come to distrust or even detest the lofty cadences of Barack Obama now that he was gone from office and appeared to halo into Oprah-hood. Eight years of beguilement and what had it materially gotten them? grumbled millennials slumped under student debt and toiling in unpaid internships. What Bernie lacked in movie-poster charisma could be furnished by Jacobin, which emblazoned him as a lion in winter.

    So confident was Jacobin that the next great moment in history was within its grasp that in the winter of 2019 it devoted a special issue to the presidency of Bernie Sanders, whose cover, adorned with an oval portrait of Sanders gazing skyward, proclaimed: “I, President of the United States and How I Ended Poverty: A True Story of The Future.” Subheads emphasized that this was not just an issue of a magazine, a mere collation of ink and paper, it was the beginning of a crusade — a twenty-year plan to remake America. Avengers, assemble! At the public launch of the “I, President” issue, Sunkara rhetorically asked, “Is there a point in spending all day trying to explain, like, the Marxist theory of exploitation to some 18-year-old? Yes! Because that kid might be the next Bernie Sanders.” 

    Alas, Jacobin made the mistake of counting their red berets before they were hatched, and now the issue is fated to become a collector’s item, a poignant keepsake of what might have been. Had Sanders remained in the race and won the presidency, Jacobin would have been as credited, identified, and intimately associated with the country’s first socialist administration as William F. Buckley, Jr.’s National Review was with Ronald Reagan’s. Jacobin could have functioned as its ad hoc brain trust, or at least its nagging conscience. From that carousel of possibilities the magazine instead finds itself reckoning with the divorce of its socialist platform from its standard bearer, facing the prospect of being just another journal of opinion jousting for attention. No longer ramped up as a Bernie launch vehicle, Jacobin must tend to the churning ardor for grand-scale structural change and keep its large flock of followers from straying off into the bushes, which is not easy to do after any loss, no matter how noble. “In America, politics, like everything else, tends to be all or nothing,” Irving Howe observed in Socialism and America. And after working so hard on Bernie’s behalf, it’s hard to walk away with bupkis. 

    Jacobin possesses a strong set of jaws, however. It will not be letting go of its hold in the marketplace of ideas anytime soon. For better or ill, it will continue to set the tone and tempo on the left even in the absence of its sainted gran’pop. Since initiating publication in 2010, Jacobin has established itself as an entrepreneurial success, a publishing sensation, and an ideological mothership. It has built up its own storehouse of intellectual capital, an identifiable brand. Taking its name and sabre’d bravado from the group founded by Maximilien Robespierre that conducted the French Revolution’s Reign of Terror (an early issue featured an IKEA-like guillotine on the cover, presumably for those fancying to stage their own backyard beheadings — “assembly required,” the caption read), Jacobin located a large slumbering discontent in the post-Occupy Wall Street/Great Recession stagnancy among the educated underemployed and gave it a drumbeat rhythm and direction.

    From the outset the magazine exuded undefeatable confidence, the impression that history with a capital H was at its back. Its confidence in itself proved not misplaced. Where even before the coronavirus most print magazines were on IV drips, barely sustainable and in the throes of a personality crisis, Jacobin’s circulation has grown to 40,000 plus (more than three times that of Partisan Review in its imperious prime); it has sired and inspired a rebirth of socialist polemic (Why Women Have Better Sex Under Socialism, The ABCs of Socialism, Why You Should Be a Socialist, and the forthcoming In Defense of Looting), and helped recruit a young army of activists to bring throbbing life to Democratic Socialists of America, whose membership rolls as of late 2019 topped 56,000, with local chapters popping up like fever blisters. 

    The editorial innovation of Sunkara’s Jacobin was that it tapped into animal spirits to promote its indictments and remedies, animal spirits normally being the province of sports fans, day traders, and bachelorette parties but not of redistributionists, egalitarians, and social upheavers. Even its subscription form is cheeky: “The more years you select, the better we can construct our master plan to seize state power.” Although the ground game of socialism was traditionally understood as a conscientious slog — meetings upon meetings, caucusing until the cows come home, microscopic hair-splitting of doctrinal points — Jacobin lit up the scoreboard with rhetoric and visuals that evoked the heroic romanticism of revolution, history aflush with a red-rose ardor. The articles can be dense and hoarse with exhortations (“we must build…,” “we must insist…,” we must, we must), the writing unspiced by wit, irony, and allusion (anything that smacks of mandarin refinement), and the infographics more finicky than instructive, but the overall package has a jack-in-the-box boing!, a kinetic aesthetic that can be credited to its creative director, Remeike Forbes. Not since the radical Ramparts of the 1960s, designed by Dugald Stermer, has any leftist magazine captured lightning in a bottle with such flair.

    Effervescence is what sets Jacobin apart from senior enterprises on the left such as The Nation, Dissent, New Left Review, and that perennial underdog Monthly Review, its closest cousin being Teen Vogue, Conde Nast’s revolutionary student council fan mag — the Tiger Beat of glossy wokeness. When not extolling celebrity styling (“Kylie Jenner’s New Rainbow Manicure Is Perfect for Spring”), Teen Vogue posts junior Jacobin tutorials on Rosa Luxemburg and Karl Marx, whose “writings have inspired social movements in Soviet Russia, China, Cuba, Argentina, Ghana, Burkina Faso, and more…” (most of those movements didn’t pan out so well, but they left no impact on Kylie’s manicure). 

    Jacobin recognized that hedonics are vital for the morale and engagement of the troops, who can’t be expected to keep chipping away forever at the fundament of the late-capitalist, post-industrial, Eye of Sauron hegemon. No longer would socialists be associated with aging lefties in leaky basements cranking the mimeograph machine and handing out leaflets on the Upper West Side — socialism now had a hip new home in Brooklyn where the hormones were hopping and bopping pre-corona. “‘Everybody looks fuckin’ sexy as hell,’ shouted [Bianca] Cunningham, NYC-DSA’s co-chair. ‘This is amazing to have everybody here looking beautiful in the same room, spreading the message of socialism.’” So recorded Simon van Zuylen-Wood in “Pinkos Have More Fun,” his urban safari into the dating-mating, party-hearty socialist scene for New York magazine.

    In the middle of the dance floor I ran into Nicole Carty, a DSA-curious professional organizer I also hadn’t seen since college, who made a name for herself doing tenant work after Occupy Wall Street. (DSA can feel like a never-ending Brown University reunion.) “Movements are, yeah, about causes and about progress and beliefs and feelings, but the strength of movements comes from social ties and peer pressure and relationships,” Carty said. “People are craving this. Your social world intersecting with your politics. A world of our own.”

    Jacobin’s closest companion and competitor in the romancing of the young and the restless is The Baffler, founded in 1988, at the height of the Reagan imperium, allowed to lapse in 2006, revived from cryogenic slumber in 2010, and going strong ever since. Both quarterlies publish extensive and densely granulated reporting and analytical pieces on corporate greed, treadmill education, factory farming, and America’s prison archipelago, though The Baffler slants more essayistic and art-conscious, a Weimar journal for our time. The chief difference, however, is one of temperament and morale. Where Jacobin, surveying the wreckage and pillage, holds out the promise that the cavalry is assembling, preparing to ride, The Baffler often affects a weary-sneery, everything-sucks, post-grad-school vape lounge cynicism, as if the battle for a better future is a futile quest — the game is rigged, the outcome preordained. “Forget it, Jake, it’s Chinatown.” 

    The Baffler’s bullpen of highly evolved futilitarians leans hard on the words “hell” and “shit” to register their scorn and disgust at the degradation of politics and culture in our benighted age by rapacious capital with the complicity of champagne-flute elitists and the good old dumb-ox American booboisie. It’s Menckenesque misanthropy (minus Mencken’s thunder rolls of genius) meets Blade Runner dystopia with a dab of Terry Southern nihilism, and it’s not entirely a warped perspective — the world is being gouged on all sides by kleptocratic plunder. But The Baffler offers mostly confirmation of the system’s machinations, the latest horrors executed in fine needlepoint, no exit from the miasma. Each issue arrives as an invitation to brittle despair.

    Jacobin, by contrast, acts as more of an agent of transmutation, a mojo enhancer for the socialist mission. This is from “Are You Reading Propaganda Right Now?” by Liza Featherstone, which appeared in its winter 2020 issue:

    One of the legacies of the Cold War is that Americans assume propaganda is bad. While the term “propaganda” has often implied that creators were taking a manipulative or deceptive approach to their message — or glossing over something horrific, like World War I, the Third Reich, or Stalin’s purges — the word hasn’t always carried that baggage. Lenin viewed propaganda as critical to building the socialist movement. In his 1902 pamphlet What Is to Be Done?, it’s clear that his ideal propaganda is an informative, well-reasoned argument, drawing on expertise and information that the working class might not already have. That’s what we try to do at Jacobin.

    It is worth asking how much these excitable Leninists actually know about their Bolshie role model. Did they notice Bernie’s response to Michael Bloomberg’s use of the word “communist” to describe him at one of the debates? He called it “a cheap shot.” Say what you will about Sanders, but he recoiled at the charge. He, at least, is familiar with Lenin’s work.

    Jacobin’s mistake was to think it could play kingmaker too. In It Didn’t Happen Here: Why Socialism Failed in the United States, Seymour Martin Lipset and Gary Marks delineated the unpatchable differences between “building a social movement and establishing a political party,” or, in this case, taking over an existing one. (As Irving Howe cautioned, “You cannot opt for the rhythms of a democratic politics and still expect it to yield the pathos and excitement of revolutionary movements.”) Political parties represent varied coalitions and competing interests, requiring expediency, horse trading, and tedious, exhausting staff work to achieve legislative ends. Lipset and Marks: “Social movements, by contrast, invoke moralistic passions that differentiate them sharply from other contenders. Emphasis on the intrinsic justice of a cause often leads to a rigid us-them, friend-foe orientation.” 

    The friend-foe antipathy becomes heightened and sharpened all the more in the Fight Club of social media, where the battle of ideas is waged with head butts and low blows. In print and online, Jacobin wasn’t just Sanders’ heraldic evangelist, message machine, and ringside announcer (“After Bernie’s Win in Iowa, the Democratic Party Is Shitting Its Pants” — actual headline), it doubled as the campaign’s primary enforcer, methodically maligning and elbowing aside any false messiah obstructing the road to the White House, ably assisted by the bully brigade of “Bernie Bros” and other nogoodniks who left their cleat marks all across Twitter. Excoriation was lavished upon pretenders who had entered the race out of relative obscurity and momentarily snagged the media’s besotted attention, such as Texas’ lean and toothy Beto O’Rourke, whose campaign peaked when he appeared as Vanity Fair’s cover boy and petered out from there (“Beto’s Fifteen Minutes Are Over. And Not a Moment Too Soon,” wrote Jacobin’s Luke Savage, signing the campaign’s death certificate).

    Pete Buttigieg received a more brutal hazing, ad hominemized from every angle. Jacobin despised him from the moment his Eddie Haskell head peeped over the parapet — that this Rhodes scholar, military veteran who served in Afghanistan, and current mayor of South Bend, Indiana, had written a tribute to Bernie Sanders when he was in high school only made him seem more fishily Machiavellian in their minds. A sympathetic, personally informed profile by James T. Kloppenberg in the Catholic monthly Commonweal portrayed Buttigieg as a serious, driven omnivore of self-improvement, but in Jacobin he barely registered as a human being, derided as “an objectively creepy figure” by Connor Kilpatrick (“That he is so disliked by the American public while Sanders is so beloved…should hearten us all”), and roasted by Liza Featherstone for being so conceited about his smarts, an inveterate showoff unlike you-know-who: “Bernie Sanders, instead of showing off his University of Chicago education, touts the power of the masses: ‘Not Me, Us.’ The cult of the Smart Dude leads us into just the opposite place, which is probably why some liberals like it so much.”

    There was no accomplishment of Buttigieg’s that Jacobin couldn’t deride. Buttigieg’s learning Norwegian (he speaks eight languages) to read the novelist Erlend Loe would impress most civilians, but to Jacobin it was more feather-preening, and un-self-aware besides: “Pete Buttigieg’s Favorite Author Despises People Like Him,” asserted Ellen Engelstad with serene assurance in one of the magazine’s few stabs at lit crit. Even Buttigieg’s father — the renowned Joseph Buttigieg, a professor of literature at Notre Dame who translated Antonio Gramsci and founded The International Gramsci Society — might have washed his hands of this upstart twerp, according to Jacobin. By embracing mainstream Democratic politics, “Pete Buttigieg Just Dealt a Blow to His Father’s Legacy,” Joshua Manson editorialized. The American people, Norwegian novelists, the other kids in the cafeteria, Hamlet’s ghost — the message was clear: nobody likes you, Pete! Take your salad fork and go home!

    Buttigieg may have betrayed his Gramscian legacy but it was small beans compared to the treachery of which another Sanders rival was capable. In “How the Cool Kids of the Left Turned on Elizabeth Warren,” Politico reporter Ruairi Arrieta-Kenna chronicled Jacobin’s spiky pivot against Elizabeth Warren, that conniving vixen. Arrieta-Kenna: “It wasn’t so long ago that you could read an article in Jacobin that argued, ‘If Bernie Sanders weren’t running, an Elizabeth Warren presidency would probably be the best-case scenario.’ In April, another Jacobin article conceded that Warren is ‘no socialist’ but added that ‘she’s a tough-minded liberal who makes the right kind of enemies,’ and her policy proposals ‘would make this country a better place.’” Her platform and Sanders’ shared many of the same planks, after all.

    Planks, schmanks, the dame was becoming a problem to the Jacobin project, cutting into Bernie’s constituency and being annoyingly indefatigable, waving her arms around like a baton twirler. Warren needed to be sandbagged to open a clear lane for Bernie. Hence, “in the pages of Jacobin,” Arrieta-Kenna wrote, “Warren has gone from seeming like a close second to Sanders to being a member of the neoliberal opposition, perhaps made even worse by her desire to claim the mantle of the party’s left.” The J-squad proceeded to work her over with a battery of negative stories headlined “Elizabeth Warren’s Head Tax Is Indefensible,” “Elizabeth Warren’s Plan to Finance Medicare for All Is a Disaster,” and “Elizabeth Warren Is Jeopardizing Our Fight for Medicare for All,” and warned, quoting Arrieta-Kenna again, “that a vote for Warren would be ‘an unconditional surrender to class dealignment.’” When Warren claimed that Sanders had told her privately that a woman couldn’t defeat Donald Trump and declined to shake Bernie’s hand after the January 14 Democratic debate, she completed the arc from valorous ally to squishy opportunist to Hillary-ish villainess. Little green snake emojis slithered from every cranny of Twitter at the mention of Warren’s name, often accompanied by the hashtag #WarrenIsASnake, just in case the emojis were too subtle. Compounding her trespasses, Warren declined to endorse Sanders after she withdrew from the race, blowing her one shot at semi-redemption and a remission of sins. Near the end of Jacobin’s YouTube postmortem, Sunkara expressed sentiments that seemed to be universal in his cenacle: “Fuck Elizabeth Warren,” he explained, “and her whole crew.”

    Once Buttigieg and Warren dropped out of serious contention, the sole remaining obstacle was Joe Biden, whom Jacobin considered a papier-mâché relic in a dark suit loaned out from the prop department and seemingly incapable of formulating a complete sentence, much less a coherent set of policies — an entirely plausible caricature, as caricatures go. Occasionally goofy and even surreal in his off-the-cuff remarks, Biden doesn’t suggest deep reserves of fortitude and gravitas. In February 2020, Verso published Yesterday’s Man: The Case Against Joe Biden by Jacobin staff writer Branko Marcetic, its cover photograph showing an ashen Biden looking downcast and abject, as if bowing his weary head to the chopping block of posterity. But on the first Super Tuesday, the Biden candidacy, buoyed by the endorsement by the formidable James Clyburn and the resultant victory in South Carolina, rose from the dusty hollows and knocked Sanders sideways. It was the revenge of the mummy, palpable proof that socialism may have been in vogue with the media and the millennials but rank and file Democrats, especially those of color, weren’t interested in lacing up their marching boots. For them, the overriding imperative was not Medicare for All or the Green New Deal but denying Donald Trump a second term and the opportunity to reap four more years of havoc and disfigurement. In lieu of Eliot Ness, Joe Biden was deemed the guy who had the best shot of taking down Trump and his carious crew.

    For a publication so enthralled to the Will of the People and the workers in their hard-won wisdom, it’s remarkable how badly Jacobin misread the mood of Democratic voters and projected its own revolutionary ferment on to it — a misreading rooted in a basic lack of respect for the Democratic Party, its values, its history, its heroes (apart from FDR, since Sanders often cited him), its institutional culture, its coalitional permutations — all this intensified with an ingrained loathing for liberalism itself. From its inception, Jacobin, like so many of its brethren on the Left, has displayed far more contempt and loathing for liberals, liberalism, and the useless cogs it labels “centrists” than for the conservatives and reactionaries and neo-fascists intent on turning the country into a garrison state with ample parking. It has a softer spot for hucksters, too. It greeted libertarian blowhard podcaster Joe Rogan’s endorsement of Sanders as a positive augury — “It’s Good Joe Rogan Endorsed Bernie. Now We Organize” — and published a sympathetic profile of the odious Fox News host Tucker Carlson. This has been its modus operandi all along. In a plucky takedown of the magazine in 2017 called “Jacobin Is for Posers,” Christopher England noted, “It can claim two issues with titles like ‘Liberalism is Dead,’ and none, henceforth, that have shined such a harsh light on conservatism.” For Jacobin, liberalism may be dead or playing possum but it keeps having to be dug up and killed again, not only for the exercise but because, England writes, “conservatism, as its contributors consistently note, can only be defeated if liberalism is brought low.” Remove the flab and torpor of tired liberalism and let the taut sinews of the true change-maker spring into jaguar action. 

    Which might make for some jungle excitement, but certainly goes against historical precedent. “In the United States, socialist movements have usually thrived during times of liberal upswing,” Irving Howe wrote in Socialism and America, cautioning, “They have hastened their own destruction whenever they have pitted themselves head-on against liberalism.” Tell that to Jacobin, which either didn’t learn that lesson or considered it démodé, irrelevant in the current theater of conflict. With the Democratic Party so plodding and set in its ways, a rheumy dinosaur that wouldn’t do the dignified thing and flop dead, the next best thing was to occupy and replenish the host body with fresh recruits drawn from young voters, new voters, disaffected independents, blue-collar remnants, and pink-collar workers. Tap into this vast reservoir of idealism and frustration to unleash bottom-up change and topple the status quo, writing fini to politics as usual. Based on 2016 and how strongly Sanders ran above expectations, this wasn’t a reefer dream.

    The slogan for this campaign was “Not Me. Us,” and it turned out there were a lot fewer “us” this time around. “Mr. Sanders failed to deliver the voters he promised,” wrote John Hudak, a deputy director and senior fellow at the Brookings Institution, analyzing the 2020 shortfall. “Namely, he argued that liberal voters, new voters, and young voters would dominate the political landscape and propel him and his ideas to the nomination. However, in nearly every primary through early March, those voters composed significantly smaller percentages of the Democratic electorate than they did in 2016.” It wasn’t simply a matter of Sanders competing in a more crowded field this time, Hudak reported. In the nine primaries after Warren’s withdrawal, when it became a two-person race, “Mr. Sanders underperformed his 2016 totals by an average of 16.0%, including losing three states that he won in 2016 (Idaho, Michigan, and Washington).” How did Jacobin miss the Incredible Sanders Shrinkage of 2020? 

    It became encoiled in its own feedback loop, hopped up on its own hype. “Twitter — a medium that structurally encourages moral grandstanding, savage infighting, and collective action — is where young socialism lives,” van Zuylen-Wood had observed in “Pinkos Have More Fun,” and Twitter, to state the obvious, is not the real world, but a freakhouse simulacrum abounding with trolls, bots, shut-ins, and soreheads. Jacobin and its allies so dominated online discourse that they didn’t comprehend the limits of that dominance until it hit them between the mule ears. They fell victim to what has come to be known as Cuomo’s Law, which takes its name from the New York gubernatorial contest in 2018 between Andrew Cuomo and challenger Cynthia Nixon, a former cast member of Sex and the City and avowed democratic socialist. On Twitter, Nixon had appeared the overwhelming popular favorite, Cuomo the saturnine droner that no one had the slightest passion for. But Cuomo handily defeated Nixon, demonstrating the disconnect between online swarming and actual turnout: ergo, Cuomo’s Law. 

    Confirming Cuomo’s Law, Joe Biden probably had less Twitter presence and support than any of the other major candidates, barely registering on the radar compared to Sanders, and yet he coasted to the top of the delegate count until the coronavirus hit the pause button on the primary season. Sanders’ endorsement of Biden in a joint livestream video on April 13th not only conceded the inevitable but delivered a genuine moment of reconciliation that caught many off-guard, steeped in the residual rancor of 2016. Whatever his personal disappointment, Sanders seems to have made peace with defeat and with accepting a useful supporting role in 2020; he refuses to dwell in acrimony. The same can’t be said about many of the defiant dead-enders associated with Jacobin, who, when not rumor-mongering about Biden’s purported crumbling health, cognitive decline, incipient dementia, and basement mold, attempted to kite Tara Reade’s tenuous charges of sexual harassment and assault at the hands of Biden into a full-scale Harvey Weinstein horror show, hoping the resultant furor would dislodge Biden from the top of the ticket and rectify the wrong done by benighted primary voters. For so Jacobin had written and so it was said: “If Joe Biden Drops Out, Bernie Sanders Must Be the Democratic Nominee.”

    Like Norman Thomas, the longtime leader of the Socialist Party in America, Bernie Sanders bestowed a paternal beneficence upon the left that has given it a semblance of unity and personal identity. He is the rare politician one might picture holding a shepherd’s crook. The problem is that identification with a singular leader is an unsteady thing for a movement to lean on. Long before Thomas died in 1968, having run for the presidency six times, the socialist movement had receded into gray twilight, upstaged by the revolutionary tumult on campuses and in cities. Jacobin is determined to make sure history doesn’t reprise itself once Sanders enters his On Golden Pond years. Preparing the post-Bernie stage of the socialist movement, a pair of Jacobin authors, Meagan Day and Micah Uetricht, collaborated on Bigger Than Bernie: How We Go from the Sanders Campaign to Democratic Socialism (Verso), a combination instruction manual and inspirational hymnal.

    The duo doesn’t lack for reasons to optimize the upside for the ardent young socialists looking to Alexandria Ocasio-Cortez as their new scoutmaster. The coronavirus crisis has laid bare rickety infrastructure, the lack of preparedness, near-sociopathic incompetence, and widespread financial insecurity that turned a manageable crisis into a marauding catastrophe, making massive expansion of health coverage, universal basic income, and debt relief far more feasible propositions. The roiling convulsions following the death of George Floyd once again exposed the brutal racism and paramilitarization of our police forces. A better, more humane future has never cried out more for the taking. But there is a catch: it can be seized only in partnership with liberal and moderate Democrats, no matter how clammy the clasping hands might be, no matter how mushy the joint resolutions, and this will be galling for Jacobin’s pride and vocation, making it harder for them to roll out the tumbrils with the same gusto henceforth. The magazine, after conducting introspective postmortems (“Why the Left Keeps Losing — and How We Can Win”) and intraparty etiquette lessons (“How to Argue with Your Comrades”), finds itself feeling its way forward, with the occasional fumble. When Bhaskar Sunkara announced on Twitter that he intends to cast his presidential vote for Green Party candidate Howie Hawkins (who he?), one of those showy public gestures that leaves no trace, he received pushback from fellow comrades in The Nation (“WTF Is Jacobin’s Editor Thinking in Voting Green?”) and elsewhere. Clarifying his position in The New York Times, where clarifications learn to stand up tall and straight, Sunkara assured the quivering jellies who read the opinion pages that “contrary to stereotypes, we are not pushing a third candidate or eager to see Mr. Trump’s re-election. Instead we are campaigning for core demands like Medicare for All, saving the U.S. 
Postal Service from bipartisan destruction, organizing essential workers to fight for better pay and conditions throughout the coronavirus crisis and backing down-ballot candidates, mostly running on the Democratic ballot line… Far from unhinged sectarianism, this is a pragmatic strategy.”

    Jacobin pragmatism? This is a historical novelty. By November we will know if they are able to make it to the altar without killing each other. It’s hard to settle once you’ve had a taste of Lenin.

    Night Thoughts

    Long ago I was born.
    There is no one alive anymore
    who remembers me as a baby.
    Was I a good baby? A
    bad? Except in my head
    that debate is now
    silenced forever.
    What constitutes
    a bad baby, I wondered. Colic,
    my mother said, which meant
    it cried a lot.
    What harm could there be
    in that? How hard it was
    to be alive, no wonder
    they all died. And how small
    I must have been, suspended
    in my mother, being patted by her
    approvingly.
    What a shame I became
    verbal, with no connection
    to that memory. My mother’s love!
    All too soon I emerged
    my true self,
    robust but sour,
    like an alarm clock.

    Mahler’s Heaven and Mahler’s Earth

    Gustav Mahler: the face of a man wearing glasses. The face attracts the attention of the viewer: there is something very expressive about it. It is a strong and open face, we are willing to trust it right away. Nothing theatrical about it, nothing presumptuous. This man wears no silks. He is not someone who tells us: I am a genius, be careful with me. There is something energetic, vivid, and “modern” about the man. He gives an impression of alacrity: he could enter the room any second. Many portraits from the same period display men, Germanic and not only Germanic men, politicians, professors, and writers, whose faces disappear stodgily into the thicket of a huge voluptuous beard, as if hiding in it, disallowing any close inspection. But the composer’s visage is naked, transparent, immediate. It is there to speak to us, to sing, to tell us something.

    I bought my first recording of Gustav Mahler many decades ago. At the time his name was almost unknown to me. I only had a vague idea of what it represented. The recording I settled on was produced by a Soviet company called Melodiya — a large state-owned (of course) company which sometimes produced great recordings. There was no trade in the Soviet Union and yet the trademark Melodiya did exist. It was the Fifth Symphony, I think — I’ve lost the vinyl disc in my many voyages and moves — and the conductor was Yevgeny Svetlanov. For some reason the cover was displayed in the store window for a long time; it was a modest store in Gliwice, in Silesia. Why the display of Mahler’s name in this provincial city which generally cared little for music?

    It took me several days before I decided to buy the record. And then, very soon, when I heard the first movement, the trumpet and the march, which was at the same time immensely tragic and a bit joyful too, or at least potentially joyful, I knew from this unexpected conjunction of emotions that something very important had happened: a new chapter in my musical life had opened, and in my inner life as well. New sounds entered my imagination. At the same time I understood — or only intuited — that I would always have a problem distinguishing between “sad” and “joyful,” both in music and in poetry. Some sadnesses would be so delicious, and would make me so happy, that I would forget for a while the difference between the two realms. Perhaps there is no frontier between them, as in the Schengen sector of contemporary Europe.

    The Fifth Symphony was my gateway to Mahler’s music. Many years after my first acquaintance with it, a British conductor told me that this particular symphony was deemed by those deeply initiated in Mahler’s symphonies and Mahler’s songs as maybe a bit too popular, too accessible, too easy. “That trumpet, you know.” “And, you know, then came Visconti,” who did not exactly economize on the Adagietto from the same symphony in the slow, very slow shots in Death in Venice, where this music, torn away from its sisters and brothers, the other movements, came to serve a mass-mystical, mass-hysterical cultish enthusiasm, floating on the cushions of movie-theater chairs. Nothing for serious musicians, nothing for scholars and sages…. But I do not agree. For me the Fifth Symphony remains one of the living centers of Gustav Mahler’s music and no movie will demote it, no popularity will diminish it, no easily manipulated melancholy in a distended Adagietto will make me skeptical about its force, its freshness, its depth.

    As for that trumpet: the trumpet that I heard for the first time so many years ago had nothing to do with the noble and terrifying noises of the Apocalypse. It was nothing more than an echo of a military bugle — which, the biographers tell us, young Gustav must have heard almost every week in his small Moravian town of Jihlava, or Iglau in German, which was the language of the Habsburg empire, where local troops in their slightly comic blue uniforms would march in the not very tidy streets to the sounds of a brass orchestra. Yet there was nothing trivial or farcical about this almost-a-bugle trumpet. It told me right away that in Mahler’s music I would be exposed to a deep ambivalence, a new complication — that the provincial, the din of Habsburgian mass-culture, would forever pervade his symphonies. This vernacular, this down-to-earth (down to the cobblestones of Jihlava’s streets) brass racket, always shadows Mahler’s most sublime adagios.

    The biographical explanation is interesting and important, but it is not sufficient. An artist of Mahler’s stature does not automatically or reflexively rely on early experiences for his material. He uses them, and transposes them, only when they fit into a larger scheme having to do with his aesthetic convictions and longings. The strings in the adagios seem to come from a different world: the violins and the cellos in the adagios sound like they are being played by poets. But then in the rough scherzo-like movements we hear the impudent brass. From the clouds to the cobblestones: Mahler may be a mystical composer, but his mysticism is tinged with an acute awareness of the ordinary, often trite environment of all the higher aspirations.

    His aesthetic convictions and longings: what are they? Judging from the music, one thing seems to be certain: this composer is looking for the high, maybe for the highest that can be achieved, for the religious, for the metaphysical — and yet he cannot help hearing also the common laughter of the low streets, the unsophisticated noise of military brass instruments. His search for the sublime never takes place in the abstract void of an inspiration cleansed of the demotic world which is his habitat. Mahler confronts the predicament well known to many artists and writers living within the walls of modernity but not quite happy with it, because they have in their souls a deep yearning for a spiritual event, for revelation. They are like someone walking in the dusk toward a light, like a wanderer who does not know whether the sun is rising or setting. They have to decide how to relate to everything that is not light, to the vast continent of the half trivial, half necessary arrangements of which the quotidian consists. Should they ignore it, or attempt to secede from it? But then what they have to say will be rejected as nothing more than lofty rhetoric, as something artificial, as unworldly in the sense of unreal. They will be labeled “reactionary” or, even worse, boring. Anyway, aren’t they to some degree made from the same dross that they are trying to overcome, to transcend? 

    And yet if they attach too much importance to it, if they become mesmerized by what is given, by the empirical, then the sheer weight of the banality of existing conditions might crush them, flatten them to nothingness. The dross, right. But let us be fair about modernity: it has plenty of good things as well. It has given us, among other things, democracy and electricity (to paraphrase Lenin). Any honest attitude toward modernity must be extremely complex. Modernity, for better and worse, is the air we breathe. What is problematic for some artists and thinkers is modernity’s anti-metaphysical stance, its claim that we live in a post-religious world. Yet there are also artists and thinkers who applaud modernity precisely for its secularism and materialism, like the well-known French poet who visited Krakow and during a public discussion of the respective situations of French poetry and Polish poetry said this: “I admire many things in present-day Polish poetry, but there is one thing that makes me uneasy — you Polish poets still struggle with God, whereas we decided a long time ago that all that is totally childish.”

    To be sure, they — the anti-moderns, as Antoine Compagnon calls them — may also become too bitter and angry, so that their critique of the modern world can go too far and turn into an empty gesture of rejection. In his afterword to a collection of essays by Gerhard Nebel — the German conservative thinker, an outsider, once a social-democrat, always an anti-Nazi, after World War II a marginal figure in the intellectual landscape of the Bundesrepublik, a connoisseur of ancient Greek literature, someone who saw dealing with die Archaik as one of the remedies against the grayness of the modern world — Sebastian Kleinschmidt presents such a case. He admires the many merits of Nebel’s writing, his vivid emotions, his intolerance of any routine, of any Banausentum or life lived far away from the appeal of the Muses, his passionate search for the real as opposed to the merely actual — but he is skeptical of Nebel’s overall dismissal of modern civilization, since it is too sweeping to be persuasive, too lacking in nuances and distinctions. Perhaps we can put the problem this way: there is no negotiation involved, no exchange, no spiritual diplomacy.

    When coping with modernity, with those aspects of it which insist on curbing or denying our metaphysical hunger, we must be not only as brave as Hector but also as cunning as Ulysses. We have to negotiate. We need to borrow from modernity a lot: since we encounter it every day, how could we avoid being fed and even shaped by it? The very verb “to negotiate” is a good example of the complexity of the situation. It comes from negotium, from the negation of otium. Otium is the Latin word for leisure, but for contemplation too. Thus the verb to negotiate denotes a worldly activity that tacitly presupposes the primacy of unworldly activities (because the negation comes second, after the affirmation).

    In French, le négoce means commerce, business. We can add to it all the noise of the market and the parliament. When we negotiate, we have no otium. But it is also possible to negotiate in order to save some of the otium. We can negate otium for a while but only in order to return to it a bit later, once it has been saved from destruction. As I say, we must be cunning. 

    By the way, the notion of otium that gave birth to the verb “to negotiate” is not a marginal category, something that belongs only to the annals of academia, to books covered by dust. For the Ancients it was a central notion and a central activity, the beginning and the end of wisdom. And even now it plays an important role in a debate in which the values of modernity are being pondered: those who have problems with the new shape of our civilization accuse it of having killed otium, of having produced an infinity of new noises and activities which contribute to the end of leisure, to the extermination of contemplation. 

    But can we discuss Mahler’s music along with poetic texts by, say, Yeats and Eliot, along with the other manifestoes of modernism? Talking about music in a way that makes it seem like philosophy or a philosophical novel, a kind of Zauberberg for piano and violin, is certainly flawed. Questions are methodically articulated in philosophy and, though never fully answered, they wander from one generation to another, from the Greeks to our contemporaries. Does art need such questions? Does music need them? The first impulse is to say no, art has nothing to do with this sort of intellectual inquiry. Isn’t pure contemplation, separated from any rational discourse, the unique element of art, both painting and music, and perhaps poetry as well? 

    But maybe pure contemplation does not need to be so pure. We do not know exactly how it works (another question!), but we do know that art always takes on some coloring from its historic time, from the epoch in which it is created. Art obviously has a social history, and earthly circumstances. And yet impure contemplation is still contemplation. Let us listen for a minute to the words of a famous painter, an experienced practitioner — to Balthus in his conversations with Alain Vircondelet, which were conducted in the last years of the painter’s life:

    Modern painting hasn’t really understood that painting’s sublime, ultimate purpose — if it has one — is to be the tool or passageway to answering the world’s most daunting questions that haven’t been fathomed. The Great Book of the Universe remains impenetrable and painting is one of its possible keys. That’s why it is indubitably religious, and therefore spiritual. Through painting, I revisit the course of time and history, at an unknown time, original in the true sense of the word. That is, something newly born. Working allows me to be present on the first day, in an extreme, solitary adventure laden with all of past history.

    How fascinating: a great painter tells us that in his work he used not only his eye and his hand but also his reason, his philosophical mind; that when he painted he felt the presence of great questions. Even more: he tells us that the pressure of these questions was not inconsequential, that it led him to spirituality. We know that Mahler, in a letter to Bruno Walter, also mentioned the presence of great questions and described his state of mind while being in contact with the element of music in this way: “When I hear music, even when I am conducting, I often hear a clear answer to all my questions — I experience clarity and certainty.”

    Certainly, the questions that sit around a painter or a composer like pensive cats are very different from those which besiege a philosopher. Do they require a response? Here is one more authority: in a note serving as a preface to the publication of four of his letters about Nietzsche, Valéry remarked that “Nietzsche stirred up the combativeness of my mind and the intoxicating pleasure of quick answers which I have always savored a little too much.” The irony of it: “the intoxicating pleasure of quick answers” in a thinker who, as we know, was so proud of his philosophizing with a hammer. Of course, this one sentence comprises in a nutshell the entire judgment that the mature Valéry passed on Nietzsche — the early temptation and the later rejection of such a degree of “the combativeness of the spirit.” And it confirms our intuition: the questions that accompany art, painting, music, and poetry cannot be answered in a way similar to debates in philosophy seminars, and yet they are an invisible and inaudible part of every major artistic exertion.

    In a way, Mahler’s doubleness of approach seems completely obvious; the brass and the strings attend each other, and need each other, in the complex patterns of his symphonies. I have read that in his time he was accused by many critics of triviality in his music. They claimed that his symphonies lacked the dignity of Beethoven’s symphonies, the depth of great German music. What they ferociously attacked as trivial is probably the thing that I admire so much in Mahler’s music — the presence of the other side of our world, the inclusion of its commonness and its coarseness, of the urban peripheries, of village fairs, of the brass — the quotation of provincial life, of public parades and military marches, almost like in Nino Rota’s scores for Fellini. Very few among Mahler’s contemporaries were able to see the virtue of it.

    The charge of triviality also had anti-Semitic undertones and followed in the footsteps of Wagner’s accusation, in his “Judaism in Music,” that Jewish composers were not able to develop a deep connection with the soul of the people, and were limited to the world of the city only, gliding slickly on the surface. Jewish composers apparently could not hear the song of the earth, argued such critics. How wonderful, then, that Mahler triumphed in his own Song of the Earth! Jewish composers were accused — among the many sins of which they were accused — of introducing modern elements into their music. Never mind that one of the principal modernizers of Western music was Wagner himself. 

    I have yet to understand why Mahler has for so long, from the very beginning, been so overwhelmingly important for me, so utterly central to the evolution of my soul. Once, in speaking with some American friends, I asked them who “made” them, in the sense of a master, a teacher, un maître à penser, because I wanted to tell them that Gustav Mahler made me. It was an exaggeration, I know, and a bit precious. I had other masters as well. And yet my statement was not false. Did it have to do only with the sonorities of his symphonies, with the newness of his music, the unexpected contrasts and astonishing passages swinging between the lyric and the sardonic? Was it the formal side uniquely? For many years I resisted the temptation to translate my deep emotional bond to his music — the deep consonance between Mahler’s work and my own search in the domain of poetry — into intellectual terms, maybe fearing that too much light shed on it would diminish its grip on my imagination. I still hold this superstitious view, but I also suspect that there may be some larger intellectual benefit to be gained from an exploration of my obsession.

    For everyone who has a passionate interest in art and in ideas, sooner or later a problem arises. When we look for truth and try to be honest, when we try as a matter of principle to avoid dogmatism and any sort of petrification, any blind commitment to this or that worldview, we are, it seems, necessarily condemned to deal with shards, with fragments, with pieces that do not constitute any whole — even if, consciously or not, we strive for the impossible “whole.” But then if we also harbor a love for art — and it is not at all unusual to have these two passions combined in a single individual — a strange tension appears: in art we deal with forms which, by definition, cannot be totally fragmentary. To be sure, at least since the Romantic moment we have been exposed to fragments, and accustomed to fracture, in all kinds of artistic enterprises, from music and poetry to painting — but even these fragments tend to acquire a shape. If we juxtapose them with the “truth fragments,” with Wittgensteinian scraps of philosophical results, an integrated pattern is created by virtue of some little embellishment, by a sleight of hand; a magician is at work who tends to forget the search for truth because the possibility of a form, a more or less perfect form, suddenly attracts him more strongly than the shapelessness of a purely intellectual assessment. 

    These two dissimilar but related hunts, one for truth, one for form, are not unlike husky dogs pulling a sled in two slightly different directions: they are sometimes able to achieve an almost-harmony. The sled fitfully moves forward, but at other times the competing pressures threaten to endanger the entire expedition. So, too, are our mental hunts and journeys, forever hesitating between a form that will allow us to forget the rather uncomfortable sharpness of truth and a gesturing for truth that may make us forget the thrill of beauty and the urge to create, at least for the time being.

    This brings us back to Mahler. The doubleness in his music that I have described may be understood as reflecting the ambiguity of the double search for truth and form. Mahler was a God-seeker who recognized the ambivalence of such a quest in art. He was torn between the search for the voluptuousness of beauty and the search for the exactness of truth.

    Hartmut Lange, a German writer living in Berlin, a master of short prose, told me once that Mahler’s Song of the Earth, which he listens to all the time and adores in a radical way, “is God.” I was taken aback. The deification of this almost-symphony, which I also ardently admire, made me feel uneasy. But I find it more than interesting that this great music can be associated with, and even called, God. This suggests a quasi-religious aspect of the music, and even a sober secularist cannot escape at times placing the work within the circle nearing the sacred.

    Among the many approaches to the sacred we may distinguish two: one which consists in searching, in a quest, and is conducted in a climate of uncertainty and even doubt, and another which proclaims a kind of sureness, a positive certainty, a eureka-like feeling that what was sought has been found. In our tormented and skeptical time it is not easy to find examples of such a positive and even arrogant attitude, at least not within serious culture. Among the great modern poets and writers only a few were blessed with certainty. Even the great Pascal had his doubts, and so much earlier. Gustav Mahler belongs to the seekers, not the finders. The quest is his element, and doubt is always near.

    It is true for both poetry and music: whenever one approaches an important work, one is much more outspoken when it comes to discussing the elements within it that will yield to the intellectual or even dialectical categories that the reader or listener cherishes. The other ingredients, especially those that represent pure lyricism and thus are at the very heart of the work in question, are hardly graspable, at least in words. What can we say? It is beautiful, it pierces my soul, or some other platitude of the sort. Or we can just sigh to signal our delight. Sighing, though, is not enough; it is too inarticulate, and in print it evaporates altogether. This is the misery of writing about art: the very center of it remains almost totally ineffable, and what can be rationally described is rather a frame than the substance itself. 

    A frame that enters into dialogue with its period, with its cultural and historical environment, can be much better described than the substance of a symphony or a painting. The nucleus of a work, or of an artist’s output, is less historical, less marked by the sediments of time, and therefore mysterious. It is also more personal, more private. This is certainly the case with Mahler’s music, whose very core is constituted by those lyric movements, those endless ostinati that we find everywhere, first in his songs, in Lieder eines fahrenden Gesellen and the other lieder, then in his symphonies, and supremely in their adagios, and then finally in the unsurpassable Lied von der Erde. And the Ninth Symphony! I don’t have in mind only the final Adagio but also the first movement, the Andante comodo, which displays an incredible vivacity and, at the same time, creates an unprecedentedly rich musical idiom — a masterful musical portrayal of what it means to be alive, with all the quick changes and stubborn dramas, the resentments and the raptures, that constitute the exquisite and weary workshop of the mind and the heart.

    But let us not forget, when we celebrate the lyric sections, the sometimes simple melodies, and the long ostinati, let us not forget all the intoxicating marches, the half sardonic, half triumphant marches that originated in a small Moravian town but then crossed the equator and reached the antipodes. These marches give Mahler’s music its rhythm, its vigor, its muscle. There is nothing wan in Mahler’s compositions, nothing pale on the order of, say, Puvis de Chavannes; instead they display, even in their most tender and aching passages, an irreversible vitality. The marches propel the music and give it its movement, its strolls and dances and strides. The “vulgar” marches convey the mood of a constant progression, maybe even of a “pilgrim’s progress.” Nothing ever stagnates in Mahler’s compositions; they are on the move all the time.

    It’s unbecoming to disagree with someone who was a great Mahler connoisseur and also contributed enormously to the propagation of his work, but it is hard to accept Leonard Bernstein’s observation that the funeral marches in Mahler’s symphonies are a musical image of grief for the Jewish God whom the composer abandoned. The problem is not only that there is scant biographical evidence for such an interpretation. More importantly, the marches are more than Bernstein says they are. They represent no single emotion. Instead they oscillate between mourning and bliss and thus stand (or walk or dance) high above any firm monocausal meaning.

    In the Song of the Earth, it is the sixth and last movement, der Abschied, the Farewell, that crowns Mahler’s entire work. Musicologists tell us that its beauty consists mainly in the combination of a lyrical melodic line with the rich chromaticism of the orchestra. But obviously such an observation can barely render justice to the unforgettable charm of this sensual music which unwillingly bids farewell to the earth; we hear in this work the tired yet ecstatic voice of the composer who knew how little life was left to him. Perhaps only in Rilke’s Duino Elegies can we find an example of a similar seriousness in embracing our fate, an instance of a great artist finally abolishing any clear distinction between sadness and joy.

    There is a fine poem written in the early 1980s by the Swedish poet and novelist Lars Gustafsson. It is called “The Stillness of the World Before Bach” and it caught the attention of many readers. Here is part of it:

    There must have been a world before
    the Trio Sonata in D, a world before the A minor partita,
    but what kind of a world?
    A Europe of vast empty spaces, unresounding,
    everywhere unawakened instruments,
    where the Musical Offering, the Well-Tempered Clavier
    never passed across the keys.
    Isolated churches
    where the soprano line of the Passion
    never in helpless love twined round
    the gentler movements of the flute […]

    [translated into English by Philip Martin]

    Of course there were many voices and many composers before Bach, and not at all “a Europe of vast empty spaces.” What would Palestrina, Gabrieli, and Monteverdi say? What would the monks say who created and developed Gregorian chant? Still, in Gustafsson’s poem we immediately recognize some deeper truth. I imagine that in a similar poem in which Gustav Mahler would replace Johann Sebastian Bach, the poet would describe not “a Europe of vast empty spaces” but rather a Europe of cities, great and small ones, of empty Sunday streets, of empty parks, of waiting rooms.

    The Mahler gesture resembles in some respect the Bach achievement, but it is very different too. Bach was a genius of synthesis, who appeared after centuries of the development of Western art and on this fertile soil built a great edifice of music. There is less synthetic energy in Mahler’s creation; the significance of his work seems to reside in its spiritual implication. Mahler, more than any of his contemporaries, tries to graft onto this lay world of ours a religious striving, to convey a higher meaning to a largely meaningless environment without ever forgetting or concealing the obvious features of a secular age.

    The Sludge

    I was never more hated than when I tried to be honest….
    I’ve never been more loved and appreciated than when I tried
    to “justify” and affirm someone’s mistaken beliefs; or when
    I tried to give my friends the incorrect, absurd answers they
    wished to hear. In my presence they could talk and agree with
    themselves, the world was nailed down, and they loved it.
    They received a feeling of security.

    RALPH ELLISON, INVISIBLE MAN

    One Friday afternoon, in a carpeted alcove off the main sanctuary of my school, a Jewish school in the suburbs of Philadelphia, my class collected in a circle as we did every week. A young, liberally perfumed Israeli woman in a tight turtleneck sweater read to us from a textbook about the exodus from Egypt. I asked her why our ancestors had been enslaved to begin with, and then wondered aloud whether it was because only former slaves can appreciate freedom. I remember the feeling of the idea forming in my very young mind, and the struggle to articulate it. Clumsily, with a child’s vocabulary, I suggested to my teacher that Jewish political life began with emancipation, and that this origin ensured that gratitude to God would be the foundation of our national identity. Could that have been God’s motivation? I don’t remember her answer, only her mild bemusement, and my impression that she did not have the philosophical tools or the inclination to engage with the question. I was left to wonder on my own about the nature of slavery, the distant memories that undergird identity, and God’s will; without a teacher, without a framework. I was by myself with these questions. 

    Of course, we were not gathered in that schoolchildren’s circle to study philosophy. We were studying the Biblical tale not in order to theorize about the nature of slavery and freedom, or to acquire a larger sense of Jewish history, but because it was expected of us, and every other grade in the school, this and every week since the school’s founding, to study the weekly portion of the Torah, because that is what Jewish students in a Jewish school of that denomination do. I had mistaken a social activity for an intellectual one. The norms of a community demanded this conversation of us, because otherwise the community would be suspect. People would whisper that graduates of our school lacked the capacity for full belonging within their particular Jewish group, because we had failed to receive the proper training in membership. The overarching objective of our education was initiation. The prayers that we were taught to say before and after eating, and upon waking up in the morning, and going to the bathroom, and seeing a rainbow, and on myriad other quotidian occasions, served the same purpose. These were not theological practices; we were not taught to consider the might and creative power of the God whom we were thanking — the meanings of what we recited, the ideas that lay beneath the words. We uttered all those sanctifying words because it was what our school’s permutation of the Jewish tradition taught Jews to do. We were performing, not pondering. 

    Divine commandments were the sources and accoutrements of our liturgies and rituals. But we lingered much longer over the choreography than over the divinity. The substance of our identity was rules, which included the recitation of certain formulas for certain concepts and customs. And our knowledge of the rules, how or whether we obeyed them, would signal what sort of Jews we were. The primary purpose of this system was to provide talismans that we could use to signal membership. In the context of my religious education, the meaning of the symbols was less important than how I presented them. Badges were more central than beliefs. The content of the badges — the symbols and all the concomitant intellectual complications — was left alone. Marinating within that culture inculcated in me an almost mystical reverence for my religion and for its God because it placed them in a realm outside of reason. I could not interrogate them: holiness is incommensurate with reason. Without the indelible experience of that schooling in anti-intellectualism, the beauties and intoxicants of tradition would be inaccessible to me. Even now, when I witness expressions of fine religious faith, I am capable of recognizing and honoring them because of that early training.

    The anti-intellectualism had another unwitting effect: the indifference of my community to the cerebral and non-communal dimensions of the way we lived meant that I could develop my own relationship with them. Since they were unconcerned with the aspects of religious life to which I was most drawn, I was free to discover them independently. They didn't care what I thought, so I set out to think. In this manner I began to acquaint myself with fundamental human questions, to feel my way around and develop the rudiments of ideas about morality, slavery, love, and forgiveness. My academic syllabi were rife with references to these themes, but they were rarely discussed directly. They were like so many paintings on the wall: we would walk by them a hundred times a day and never stop and look. As children we became comfortable in their presence, but we did not exactly study them together, so I studied them alone, without the commentaries that would harden them into a catechism.

    In a certain ironic sense, I was lucky. When someone is taught to think about fundamental human questions within a group, her conception of those themes will be shaped by the group. The goal of that sort of group study, perhaps not overtly articulated but always at work, would be to initiate her into a particular system of particular people, to provide her with a ready-made attitude and a handy worldview, to train her to think and speak in the jargon of that worldview, and to signal membership within the company of those who espouse it.

    If language is a condition of our thoughts, it is also a source of their corruption. Thinking outside a language may be impossible, but thought may take place in a variety of vocabularies, and the unexamined vocabularies, the ones that we receive in tidy and dogmatic packages, pose a great danger to clear and critical thinking. My good fortune was that I was not socialized philosophically. My religious tradition was not presented to me as a philosophical tradition. I was not inducted into a full and finished vernacular that would dictate or manipulate how I would think. And I was young enough not to have become so sensitive to political or cultural etiquettes that they would inhibit or mitigate independent reflection and study. The space in my head into which I retreated to think was built and outfitted mainly by me, or so it felt; and there, in that detached and unassisted space, I became accustomed to the looming awareness that these themes were too complicated for me to really understand (an awareness which provoked an ineradicable distrust for communal ideological certainties). Yet this did not diminish my desire to spend time there. My relationship with my burgeoning ideas felt privileged, the way a child feels playing with plundered high heels or lipstick without the context to understand the social significations that those instruments may one day carry. If I misunderstood them, if they baffled me, there was no reason to be embarrassed. My sense of possibility was large and exciting, because it was unburdened by the adult awareness that convictions have social consequences by which they may then be judged. 

    My limited field of human experience — the people I knew, the fictional and historical figures to whom I had been introduced — comprised all the materials with which I could conduct my solitary musings. I studied the rhythms and tendencies of human interactions. I watched the way that other people responded to each other, the way they held themselves when they were alone or in society. This stock of knowledge informed how I thought people in general do and ought to behave. (My theory of slavery and emancipation was a product of this discipline: for example, I noted that I got anxious for recess when in school but bored by endless freedom on the weekend or vacation. We appreciate freedom when we are enslaved: is that what Scripture wanted me to understand? Well, that was consistent with my experience.) My inquiries were catalyzed and sustained by pure curiosity about human beings and in retrospect they seem to have been relatively untainted by my community's biases. Perhaps I am idealizing my beginnings, but I really do have the memory of an open mind and a pretty level playing field. Like the adolescent heroines in Rohmer's films, I genuinely wanted to know how people are so I could figure out how I should be.

    The effects of this solitary and informal mental life were permanent. Having developed the space in my head independent of a received blueprint, my intellectual methods would always be fundamentally unsocialized. Despite the external pressures, I have never successfully unlearned these attitudes. I don’t doubt that there were many influences from my surroundings, from my community and my culture, that I was absorbing without recognizing them, but still I felt significantly on my own and, as I say, lucky. But I was also quite lonely. The loneliness intensified as I got older and my family became more religious. The high school that I attended was much more traditional than my earlier schools had been. There were more rules, endless esoteric rituals and cultural habits that I had to learn in order to convince myself and others that I was one of them, that I belonged there. I failed often. There was so much that I didn’t know, and, more to the point, there was something about the weather around me that perpetually exposed my difference. No matter how hard I tried to remake myself into a member, to dismantle and rebuild the space in my head, everyone could sense that the indoctrination was not taking. I recited the script with a foreign accent. 

    In a flagrant, chronic, and no doubt annoying manifestation of otherness, I would badger my teachers and peers for reasons and explanations. Why were we — I was a “we” now – obeying all these rules? I was not in open revolt: I sensed that our tradition was rich and I was eager to plumb the treasures that I had been bequeathed. But it seemed a gross dereliction to obey the laws without considering their purpose. My intentions were innocent, perhaps even virtuous, but my questions were discomfiting anyway. Even now I often recall a particularly representative afternoon. A group of girls in my grade were discussing the practice called shmirat negiah, the strict observance of physical distance between the sexes, which prohibits men and women who are not related from touching one another. I wondered: Why had the rule been written to begin with? When did Jews begin to enforce it? What kind of male-female dynamic did it seek to cultivate? Did such emphatic chasteness in preparation for marriage help or harm that union? These were reasonable questions, except that in a context of orthodoxy they could be received as subversive. A girl I admired — a paragon of membership — complained that the practice made her awkward and scared of men, and that she could not understand why her mother enforced it. “Why don’t you just ask your mother why she thinks you ought to do it?” I finally asked. “Because,” she sighed, “she’ll just tell me that I have to because that is what Jews do.” My mind recoiled. Why on earth would a mother shirk the opportunity (and the responsibility) to help her child grapple with such an important question? Why wouldn’t she consider the law itself a catalyst for conversations about such primary themes? Yet even as I asked myself these questions, I knew the answer. Membership mattered more than meaning.

    But surely that attitude did not govern all human communities. This could not be all there was. Somewhere, I assumed, there were institutions in which people directly addressed the ideas I wondered about on my own. Somewhere there were groups in which the exploration of meaning was an essential feature of membership. In the secular world, which I naively called “the real world,” I imagined intellectual camaraderie would be easier to find. Surely secular people, when they talk about justice, sex, mercy, and virtue, must be interested in seriously engaging those themes. In the real world, surely, there would be no orthodoxies, and people would have no reason to incessantly analyze one another’s behaviors in order to grant or deny them legitimacy. They would not spread petty rumors about neighbors failing to uphold the code or refuse to eat at the tables of those who were not exactly like them, as the worst members of my origin bubble did. They would not, forgive me, cancel each other.

    Of course I was wrong. As it turns out, the secular world also has liturgies, dogmas, ostracisms, and bans. It, too, hallows conformity. It has heretics, and it even has gods: they just don't call them that. In college I discovered the temples of the progressives, the liberals, the conservatives, and more. Each has a vernacular of its own, composed of dialects and rituals which serve to establish membership, welcome members, and turn away outsiders. In this realm of proud secularity, my religious upbringing proved unexpectedly useful. It had prepared me to identify the mechanisms of group power, and the cruel drama of deviance and its consequences. (What is cancellation, if not excommunication?) It turned out that all too often in the real world, the open world, the democratic world, the enlightened world, when people talk about fundamental human questions they are far more interested in signaling membership and allegiance than in developing honest answers to them.

    It is true that many of these questions are hard to answer. The intensity with which people hold convictions belies the complexity of the questions themselves. Independent and critical reasoning is not for the faint of heart, and the length and difficulty of the search may eventually make skeptics or cynics of those who undertake it. It is much simpler to memorize a script, and to establish a quasi-mystical allegiance to one's politics. Holiness is incommensurate with reason, remember. Still, the demands of a nuanced politics are not, I think, why people are reluctant to wrestle with ideas on their own. There are advantages to wholesale worldviews and closed systems. They provide something even more alluring than conviction: solidarity. They are a cure not only for perplexity but also for loneliness. A group with which to rehearse shared dogmas, and to style oneself in accordance with the aesthetic that those dogmas evoke: this is not a small thing. Thus the answer to a philosophical or moral question becomes…community. We choose our philosophy on the basis of our sociology. This is a category mistake — and the rule by which we live.

    In a different world, most people would readily admit ignorance or doubt about political or cultural subjects, just as my young peer would have had no reason to refrain from hugging friends of the opposite gender had Jewish custom not forbidden it. If their group ignored the subject, so would they. Most would not be ashamed of their confusion, because intellectual confusion is not a common fear. But isolation is. We dread marginality more than we dread error. After all, the social costs of idiosyncrasy or independence are high. We fear finding ourselves at our screens, watching others retweet or like or share one another's posts without a cohort of our own in which to do the same. Who does not wish to be a part of a whole? (Identity politics is the current name for this cozy mode of discourse.) In my experience, when most people talk about politics, they are largely motivated by this concern, which compromises the integrity of these conversations. They disguise a social discourse as an intellectual discourse.

    I call this phony discourse the sludge. The sludge is intellectual and political kitsch. It is a shared mental condition in which all the work of thinking has already been done for us. It redirects attention away from fundamentals by converting them into accessories, into proofs of identity, into certificates of membership.

    In a sludge-infected world, in our world, if someone were to say, “that fascist presides over a hegemonic patriarchy,” her primary purpose would be to communicate to her interlocutor that she is woke, trustworthy, an insider, an adept, a spokesperson, an agent of a particular ideology, proficient in its jargon. She would also be indicating the denomination of progressivism to which she subscribes, thus erecting the ideological boundaries for the conversation. If someone else were to say, of the same person, that he is a “cosmopolitan” or a “globalist” or a “snowflake” she would be doing the same thing in a different vernacular. (They would both use the terms “liberal” and “neoliberal” as slurs, probably without a firm sense of what either one means.) In the context of these two conversations, whether or not the individual in question is a snowflake or a fascist is as good as irrelevant. The subject of the conversation is just an occasion for manifesting group solidarity. Righteousness is an accoutrement of the code. In fact, asking either person to justify the assumptions inherent in her statement would be as irregular as asking me to justify my faith in God after witnessing me thank Him for the apple I am about to eat. She would answer with her equivalent of “that’s just what Jews do.” In both these cases, belonging is prior to belief. 

    The effect of sludge-infected language is that quite often the focal point of debates about politics or philosophy is not at all the literal subject at hand. Members are conditioned to present as if they care about the substance of a particular ideology. Learning to present as if you care about something is very different from learning to actually care about something. Caring is difficult: it is a complicated and time-consuming capacity which requires discipline, openness, and analysis. This is not a trivial point. Imagine a sludge-infected romantic relationship (or just look around you) — if, instead of taking a close and patient interest in her lover's needs, a woman simply asked herself, "What are the kinds of things that people who are in love do?," and having done those things, considered herself well acquitted of these duties and therefore in love. She may tell him that she loves him, and she may be loving or supportive in a generic kind of way, but she will not really know him. Details about his inner life, about his insecurities and his demons, will not interest her. Romantic success, for her, would be to appear from the outside as if they have created a successful partnership. She will have treated love programmatically, in accordance with the expectations of her social context. Who her lover is when he is not playing the role she has assigned to him will remain mysterious. When tragedy strikes, they will be forced to recognize that they do not know or trust each other.

    Sludge-infected politics are similarly behavioral and unsettling. Practitioners exploit opportunities for genuine expressions of devotion as occasions to signal membership. Consider the effect of the sludge on antiracism. Suppose we were taught to present as antiracists rather than to seriously consider the imperatives of antiracism (or, again, just look around you). Antiracism (like feminism, like Zionism, like socialism, like isms generally) is difficult to cultivate and strengthen. It requires work and must be consciously developed. It is the result of many individual experiences and sacrifices, highs and lows, of sustained and thoughtful interest and introspection. If we consider ourselves acquitted of our responsibility to antiracism merely by posting #HandsUpDontShoot at regular intervals on social media, perhaps garnering a host of likes and followers, the duties of an honest and reflective antiracism will remain unacknowledged (and the sentiment to which that slogan refers will be cheapened). Our antiracism would be not internal but external, not philosophical but stylistic.

    If a person is a dedicated antiracist, over the years she will come to better appreciate the enormity of the battle against racism. She will develop the minute concerns and sensitivities of a veteran. She will realize that the world is not made up only of friends and enemies. She will know that sometimes, in order to do good, one must work alongside unlikely allies, and that purists are incapable of effecting sustainable change. The very language she uses to discuss her mission will be informed by this knowledge. Indeed, it would strike her as shabby and disloyal to regurgitate common slogans when speaking about the specific, discomfiting realities of which she has intimate knowledge and which she is serious about mitigating. She will choose more precise and shaded words, her own words, careful words. The novice will listen to her and think, “I would never have thought about it that way.” If, by contrast, a person is motivated by the pressure to appear as a loyal soldier, she will never gain this wisdom. Her concerns will be only about the rituals, the liturgies, and the catechisms of a particular politics, however just the cause. Outsiders will recognize her language from Twitter or Facebook or other digitized watering holes, and of course they will ratify it, but she will have gained all that she ever really sought: admiration and affirmation.

    In this manner, movements that purport to exist in service to certain values may perpetuate a status quo in which those values, demanding and taxing, are named but never seriously treated. We ignore them, and pretend — together, as a community — that we are not ignoring them. Every time a self-proclaimed “n-ist” presents as an “n-ist,” every time a tweet or a post racks up a hundred likes in service to that presentation, she can tell herself she has fulfilled the responsibilities of her “n-ism” and so she will not feel further pressure to do so. 

    Consider two examples. The first is a college student with two thousand followers on Instagram who attends every Black Lives Matter protest armed with placards, and who posts regularly about white privilege and the guilty conscience of white America. Suppose this woman's antiracism manifests itself primarily as a crippling guilt in the face of systemic inequity from which she benefits: her service to antiracism is not nonexistent, or merely "performative," since she does force her followers to think about uncomfortable subjects (though it is quite likely that her followers already agree with her, but never mind), and she does contribute to the increasing awareness that these injustices must be named and reckoned with now.

    It is good that our college student marched. But compare her to a white septuagenarian who has moved into an increasingly gentrifying neighborhood, who is well off and even a member of the American elite, who has the cell phone numbers of more than a few important people. She has never once felt guilty for having been born into power and privilege. She is not a marcher. Now imagine that this woman, out of mere decency, involves herself in the everyday lives of her black neighbors (something which most people like her fail to do). She is who they turn to when forced to confront a system which she can manipulate for them, which they cannot navigate without her. She is the one they call when, say, one of their sons is unjustly arrested (again), or when the school board threatens to cut the district’s budget (again), because they trust that she will work tirelessly on their behalf. She learns over time, through direct experience, about the blisters and lacerations of racism, and about how to preempt and treat them. Owing to her skin color and her tax bracket, she, like our college student, profits from systemic inequity, but, unlike our college student, she takes regular and concrete actions to help the disadvantaged. Her actions are moral but not ideological. She is not a tourist in the cause and the cause is not a flex of her identity. Yet she is regularly in the trenches and she is alleviating hardship. 

    Which of these women has more ardently and effectively fought against racism? I have no objection to activism, quite the contrary, but it must be constantly vigilant against declining into the sludge. (Of course neither the good neighbor nor the tweeting marcher is engaged, strictly speaking, in politics; at the very least they both must also vote.) Sludge-like discourse is not a new phenomenon, of course — prior to the mass revulsion at the murder of George Floyd there was the convulsion known as #MeToo, which exposed some terrible abuses and established some necessary adjustments but was mired in the sludge and the culture of holy rage. And there is another historical revolution to consider: in all the centuries of thought distorted by community, there has never been a greater ally and amplifier of this phenomenon than the new technology. It is uncannily ideal for such shallowness and such conformism, and the best place to go to prove your purity. Owing to it, the sludge has become unprecedentedly manic and unprecedentedly ubiquitous. For all its reputation as an engine for loneliness and isolation, the internet is in fact the perfect technology of the herd. Consider Twitter, the infinitely metastasizing home of the member-ships and the mobs. For demagogues and bigots and liars and inciters it has solved once and for all the old problem of the transmission and distribution of opinion. The echo-chambers of the righteous progressives and the righteous reactionaries exist side by side in splendid defiance of one another, drunk on themselves, on their likes, retweets, shares, and followers (the latter a disarmingly candid appellation). All these echo chambers — these braying threads — are structurally identical. Authority is granted to those with the highest numbers. The xenophobic "influencer" with the most followers is granted power for precisely the same reason, and according to the same authority, as the justice warrior with the most followers. And followers are won according to the same laws in all realms: those who are proficient in the vernacular, who can convince others that they are full members, that they understand the code and its implications best, they are the ones to whom the like-minded flock. The priests of one temple wrathfully say, "You are sexist" and those of another wrathfully say "You are un-American" in the same way members of my old community would wrathfully say, "You are a sinner." It all means the same thing: get out.

    The sludge does not govern all discourse in America, but a horrifying amount of our “national conversation” is canned. And instead of discussing actual injustices we have endless conversations about how to discuss such things. What can be said and what cannot be said? Why talk about slavery when you can talk about the 1619 Project? Why talk about the nuances and ambiguities endemic to any sexual encounter when you can talk about #MeToo? Why complicate the question for yourself when you can join the gang? Every time we choose one of these options over the other, we demonstrate what kind of knowledge matters to us most.

    And one of the most pernicious effects of this degradation of our discourse occurs in our private lives — in personal relationships. Increasingly in conversations with friends I recognize a thickening boundary, a forcefield that repels us from the highly charged subject of our discussion. We bump up against it and decide not to go there, where integrity and trust would take us. At the point of impact, when honesty collides with membership and shrinks away, I sometimes feel as if I am being pushed back not just from the subject matter but also from the friend herself. She begins to speak in a pastiche of platitudes borrowed from the newsletters clogging her (and my) inbox. I don’t seem to be talking to her anymore, I can’t get through to her own thoughts, to her own perspective — which, I stubbornly insist, lies somewhere beneath the slogans and the shorthands. All too often I find myself following suit. Neither one of us is willing to express our respective curiosities and anxieties on matters related to politics. We just bat the keywords around and pretend we are really in dialogue with each other. He declares that the world will end if Biden is elected, she declares that the world will end if Trump is elected, and I am expected not to ask “Why?” Instead I am being invited to join him or to join her, and the more hysterically, the better.

    Once this perimeter, this border wall, has been erected, taking it down would require a troublesome break from social convention. One of us would have to be disruptive, even impolite, to pull us out of the sludge-slinging which prohibits intellectual and verbal independence. And so usually we carry on within those boundaries, interacting as representatives of a cohort or a movement, not as intellectually diligent citizens with a sense of our own ignorance and an appetite for what the other thinks. We become paranoid about discursive limits. Ever present in our conversation is the danger that if one of us deviates from the etiquette, the other will accuse her of being offensive, or worse. The wages of candor are now very high. We have made our discourse too brutal because we are too delicate.

    So we obey the rules in which we have trained ourselves, and look for safety in numbers. We invoke the authority of dogma, hearsay, and cliché. We substitute popularity for truth. We quote statistics like gospel, without the faintest sense of their veracity, as if numbers can settle moral questions. We denounce the character of people we have not met simply because others — in a book group, a Twitter thread, a newspaper column, or a mob — say they are no good. The actual interpretation of concepts such as climate change or race or interventionism is less significant than the affiliations that they denote. And when the conversation is over, we are where we were when it began, left to shibboleths and confirmed, as Lionel Trilling once complained about an earlier debasement, in our sense of our own righteousness. But this must not be the purpose of conversation, public or private. It is disgraceful to treat intellectual equals as if they cannot be trusted with our doubts. It is wrong to celebrate freedom of thought and freedom of speech and then think and speak unfreely. "Polarization" is just another name for this heated charade. In an open society, in American society, one should not be made to feel like a dissident for speaking one's own mind.

    Abolition and American Origins

    The turbulent politics of the present moment have reached far back into American history. Although not for the first time, the very character of the ideals expressed in the Declaration of Independence and the Constitution has been thrown into question by the hideous reality of slavery, long before and then during the founding era and for eighty years thereafter; and then by slavery's legacy. In this accounting, slavery appears not as an institution central to American history but as that history's essence, the system of white supremacy and economic oligarchy upon which everything else in this country has been built, right down to the inequalities and injustices of today.

    More than forty years ago, when a similarly bleak pessimism was in the air, the pioneering African American historian Benjamin Quarles remarked on that pessimism's distortions. The history of American slavery could never be properly grasped, Quarles wrote, "without careful attention to a concomitant development and influence — the crusade against it," a crusade, he made clear, that commenced before the American Revolution. Quarles understood that examining slavery's oppression without also examining the anti-slavery movement's resistance to it simplifies and coarsens our history, which in turn coarsens our own politics and culture. "The anti-slavery leaders and their organizations tell us much about slavery," he insisted — and, no less importantly, "they tell us something about our character as a nation."

    If we are to speak about the nation's origins, we must get the origins right. As we continue to wrestle with the brutal and soul-destroying power of racism in our society, it is essential that we recognize the mixed and mottled history upon which our sense of our country must rest. In judging a society, how do we responsibly assess its struggle against evil alongside the evil against which it struggles? With what combination of outrage and pride, alienation and honor, should we define our feelings about America?

    On November 5, 1819, Elias Boudinot, the former president of the Continental Congress, ex-U.S. Congressman, and past director of the U.S. Mint, wrote to former President James Madison, enclosing a copy of the proceedings of a meeting held a week earlier in Trenton, New Jersey, opposing the admission of Missouri to the Union as a slave state. The crisis over Missouri — which would lead to the famous Missouri Compromise the following year — had begun in the House of Representatives in February, but Congress had been out of session for months with virtually no sign of popular concern. In late summer, Boudinot, who was seventy-nine and crippled by gout, mustered the strength to help organize a modest protest gathering in his hometown of Burlington, long a center of antislavery sentiment. The far larger follow-up meeting in Trenton was truly impressive, a "great Assemblage of persons" that included the governor of New Jersey and most of the state legislature. The main speaker, the Pennsylvania Congressman Joseph Hopkinson, who was also a member of the Pennsylvania Abolition Society, had backed the House amendment that touched off the crisis, and his speech in Trenton, according to one report, "rivetted the attention of every auditor." Boudinot, too ill to travel to the state capital, agreed nevertheless to chair a committee of correspondence that wrote to dozens of prominent men, including ex-President Madison, seeking their support.

    If Madison ever responded to Boudinot's entreaty, the letter has not survived, but no matter: Madison's correspondence with another anti-slavery advocate made clear that he was not about to support restricting slavery's future in Missouri. Boudinot's and the committee's efforts did, however, meet with approval from antislavery notables such as John Jay. They also galvanized a multitude of anti-Missouri meetings all across the northern states, pressuring Congress to hold fast on restricting slavery's spread. "It seems to have run like a flaming fire through our middle States and causes great anxiety," Boudinot wrote to his nephew at the end of November. The proslavery St. Louis Enquirer complained two months later that the agitation begun in Burlington had reached "every dog-hole town and blacksmith's village in the northern states." The protests, the largest outpouring of mass antislavery opinion to that point in American history, were effective: by December, according to the New Hampshire political leader William Plumer, it had become "political suicide" for any free-state officeholder "to tolerate slavery beyond its present limits."

    Apart from indicating the scope and the fervor of popular antislavery opinion well before the rise of William Lloyd Garrison, two elements in this story connect in important ways to the larger history of the antislavery movement in the United States, one element looking forward from 1819, the other looking backward. Of continuing future importance was the breadth of the movement’s abolitionist politics, as announced in the circular of the Trenton mass meeting. Although it aimed, in this battle, simply to halt the extension of slavery, the anti-Missouri movement’s true aim, the circular announced, was nothing less than the complete destruction of slavery in the United States. “The abolition of slavery in this country,” it proclaimed, was one of “the anxious and ardent desires of the just and humane citizens of the United States.” It was not just a matter of requiring that Missouri enter as a free state: by blocking human bondage from “every other new state that may hereafter be admitted into the Union,” it would be only a matter of time before American slavery was eradicated. Just as important, the abolitionists took pains to explain that restricting slavery in this way fell within the ambit of Congress’ powers, “in full accordance with the principles of the Constitution.” Here lay the elements of the antislavery constitutionalism — asserting congressional authority over slavery in places under its jurisdiction — that would evolve, over the ensuing thirty-five years, into the Republican Party’s program to place slavery, as Abraham Lincoln put it, “in the course of ultimate extinction.” 

    The second connection, looking backward, was embodied by Elias Boudinot. Some historians have linked Boudinot’s antislavery enthusiasm in 1819 to his Federalist politics; more persuasive accounts see it as a natural outgrowth of a deeply religious humanitarianism that had led him, after his retirement from politics and government, to help found the American Bible Society and become a champion of American Indians. The most recent comprehensive study of the Missouri crisis depicts him as something of a throwback, “the quintessential antiegalitarian patrician Federalist” with a pious humanitarian streak who had lingered long enough to play a part in the commencement of the nation’s crisis over slavery.

    In fact, Boudinot had already had a long career not only as an antislavery advocate but also as an antislavery politician. He first threw himself seriously into antislavery politics in 1774 when, as a member of the colonial assembly, he worked closely with his Quaker colleague and abolitionist leader Samuel Allinson in ultimately unsuccessful efforts to hasten total abolition in New Jersey. In 1786, Boudinot joined with another antislavery politician, Joseph Bloomfield, in founding the New Jersey Society for Promoting the Abolition of Slavery; and after several years of indifferent activity, the Society presented a gradual emancipation plan that Bloomfield, elected New Jersey’s governor in 1803, signed into law the following year. Boudinot, meanwhile, was elected to the first U.S. Congress in 1789, where he denounced slavery as an offense against the Declaration of Independence and “the uniform tenor of the Gospel.” In all, if the antislavery arguments of the 1850s dated back to the Missouri crisis, then the antislavery politics that brought about that crisis dated back to the Revolutionary era.

    These two connections — the history of the antislavery constitutionalism that surfaced in the Missouri crisis and the history of antislavery politics dating back to the Revolution — deserve an important place in our account of our origins. I have argued, in a recent book, that by refusing to recognize the legitimacy of property in man in national law, the Federal Convention in 1787 left open ground upon which antislavery politics later developed at the national as well as the state level. Those politics emerged, to be sure, out of the local struggles that dated back before the American Revolution. But the ratification of the Constitution, even with that document’s notorious compromises over slavery, left room for the rise of antislavery politics on the national level. And the origins of those politics, as I wish to make clear here, lay in the efforts by antislavery agitators and their allies in Congress, beginning in the very first Congress, to find in the Constitution the authority whereby the national government could abolish slavery or, at the very least, hasten slavery’s abolition.

    These national antislavery politics, it needs emphasizing, developed by fits and starts, and only began to gather lasting strength in the 1840s. The abolitionists enjoyed just a few significant successes at the national level during the twenty years following the Constitution’s ratification, and they endured some important defeats. These were some of the leanest years in the history of antislavery politics. But that the abolitionists won anything at all, let alone anything significant, contradicts the conventional view that southern slaveholders thoroughly dominated national politics in the early republic. The abolitionists did occasionally prevail; and just as important, in doing so they discovered and began to refine the principles and stratagems of antislavery constitutionalism that would guide antislavery politics through to the Missouri crisis and then, further refined, to the Civil War.

    Reviewing the early history of these abolitionist politics — running from the birth of the federal government in 1789 until the abolition of American involvement in the Atlantic slave trade in 1807 — is part of a broader re-evaluation currently underway of what Manisha Sinha has called “the first wave” of abolitionist activity that lasted from the Revolutionary era through the 1820s. Historians of a rising generation, including Sarah Gronningsater, Paul J. Polgar, and Nicholas P. Wood, as well as Manisha Sinha, have begun to revise completely the history of antislavery in this period. They have more or less demolished, for example, the once dominant view of northern emancipation as a grudging and even conservative undertaking, led by polite gentlemen unwilling to take their antislavery too far. When completed, the work of these scholars and others will, I am confident, become the basis for a new narrative for the history not just of antislavery but of American politics from the Revolution to the Civil War. But there is a lot of work left to do.

    Prior to the 1750s, there was very little in the way of antislavery activity among white Americans, with the exception of the Quakers, and it took even the Quakers several decades of struggle among themselves before they turned misgivings about slavery into formal instructions to abandon the institution. Amid an extraordinary moral rupture at mid-century, wider antislavery activity began in earnest. Initially, these initiatives emphasized limited public efforts to change private behavior, relying on moral suasion to hasten manumissions, but soon enough some antislavery reformers turned to politics in more forceful ways. In 1766 and 1767, Boston instructed its representatives in the colonial assembly to push for the total eradication of slavery. In 1773, a Quaker-led campaign against the slave trade, captained by Anthony Benezet, the greatest antislavery agitator of the time, swept through the middle colonies and touched New England; and in that same year several Massachusetts towns petitioned the assembly to abolish the slave trade and initiate gradual emancipation. Black abolitionists, including Felix Holbrook and Prince Hall in Massachusetts, initiated their own petition drives, supplementing the freedom suits that would kill slavery in Massachusetts outright in the mid-1780s. Bills for the gradual abolition of slavery were debated in New Jersey in 1775 and in Connecticut in 1777; Vermonters approved the first written constitution ever to ban adult slavery in 1777; and by 1780 ascendant radical reformers in Pennsylvania led by George Bryan prepared to enact the first gradual emancipation law in history.

    By then, political abolitionists had begun organizing their own institutions. On April 14, 1775 — five days before the battles of Lexington and Concord — a group consisting chiefly of Quakers formed the Society for the “Relief of Free Negroes Unlawfully Held in Bondage,” the first society with antislavery aims anywhere in the world. Although the Revolution soon disrupted the group, it reorganized in 1784 as the Pennsylvania Society for the Promotion of the Abolition of Slavery; three years later, the society named Benjamin Franklin — conspicuously a non-Quaker — as its president. In 1785, the New-York Manumission Society appeared, dedicated to the same basic goals. By 1790, two more states, Rhode Island and Connecticut, had approved gradual emancipation. Slavery had been ended in Massachusetts by judicial decree in 1783 and had crumbled in New Hampshire; and at least six more abolitionist societies had formed from Rhode Island as far south as Virginia (where, in 1785, an abolition law was debated to supplement a widened manumission law enacted in 1782). In 1794, the state societies confederated as the American Convention for Promoting the Abolition of Slavery and Improving the Condition of the African Race.

    Abolitionist politics at the national level would await the framing and ratification of the Federal Constitution in 1787-1788. Since the Articles of Confederation had afforded the national government no authority over national commerce, let alone either slavery or the Atlantic slave trade, national abolitionist politics barely existed. The one exceptional effort came in 1783, when a small Quaker delegation from the Philadelphia Yearly Meeting delivered to the Confederation Congress, then sitting in temporary exile in Princeton, a petition signed by some five hundred Quakers, asking in vain for a prohibition of the Atlantic trade. With the calling of the Federal Convention in 1787, though, both of the then-existing abolitionist societies, in Philadelphia and New York, mobilized to send petitions. Benjamin Franklin, a delegate to the convention as well as president of the Pennsylvania Abolition Society, decided on tactical grounds against presenting his group’s forceful memorial opposing the Atlantic slave trade, while the New-York Manumission Society failed to complete its broader antislavery draft before learning that slavery as such would not be debated at the convention.

    To comprehend the national abolitionist politics that followed these developments requires a closer look at the Constitution’s paradoxes and contradictions concerning slavery. None of the framers’ compromises over slavery that many historians cite as the heart of the supposedly proslavery Constitution were nearly as powerful in protecting slavery as an assumption that was there from the start: that whatever else it could do, the federal government would be powerless to interfere with slavery in the states where it existed — a doctrine that became known as the federal consensus. This assumption, far more than the three-fifths clause or the Atlantic slave trade clause or the fugitive slave clause or anything else, was the basis of the slaveholders’ confidence that the Constitution had enshrined human bondage. But if the federal government could not abolish slavery outright, then how might it be done, short of hoping that the slaveholders of South Carolina and Georgia would suddenly see the light — a prospect that the South Carolinians and Georgians made clear was not in the offing anytime soon? Once the abolitionists had launched the campaign for emancipation in the North, this would be their great conundrum — but they seized upon it immediately, with actions as bold as their demands. In doing so, they fostered a convergence of radical agitation and congressional politics that would have enduring if as yet unforeseen repercussions.   

    Far from discouraging abolitionist activity, the ratification of the Constitution, even with its notorious compromises over slavery, bolstered it. Above all, the framers’ granting to the new national government, over furious southern objections, the authority to abolish the nation’s Atlantic slave trade, even with a twenty-year delay, struck many and probably most abolitionists and their political allies as a major blow for freedom. This should not be surprising: as historians note too rarely, it was the first serious blow against the international slave trade undertaken anywhere in the Atlantic world in the name of a national government; indeed, the American example, preceded by the truly inspiring anti-slave-trade agitation led by Anthony Benezet, encouraged the rise of the British movement to end the Atlantic trade, formally organized in 1787. Some leading American abolitionists described the Constitution as nothing less than, in the words of the framer James Wilson, “the foundation for banishing slavery out of this country.” Ending the trade had long been considered the vital first step toward eradicating slavery itself; and it seemed at the least highly probable that, as soon as 1808 arrived, Congress would do so. More immediately, though, members of the Pennsylvania Abolition Society wanted to see if Congress would entertain extending its constitutional authority beyond the slave trade provision.

    The first great confrontation over slavery in national politics was a famous but still largely misunderstood conflict in the House of Representatives during the First Congress’ second session in New York, the nation’s temporary capital, in 1790. Through a friendly congressman, the Pennsylvania Abolition Society presented a petition to the House of Representatives, above the signature of its aging president Franklin, bidding the representatives to “step to the very verge of the powers vested in you” and to abolish slavery itself, not simply the Atlantic slave trade. (At the request of John Pemberton of the PAS, two groups of Quakers had already sent milder petitions referring only to the trade.) Paying no attention to the federal consensus, the PAS petition specifically cited the preamble of the Constitution, with its pledge to “promote the general Welfare and secure the blessings of Liberty to ourselves and our Posterity,” which the petitioners contended authorized far-reaching if unspecified congressional action against slavery. Without telling Congress exactly what to do, they bid the representatives to look beyond the federal consensus to find ways they could attack slavery — to the extent, quite possibly, of disregarding the federal consensus entirely.

    A fierce on-and-off debate over the next three months ended with Congress affirming the federal consensus as well as the ban on congressional abolition of the Atlantic trade until 1808. The outcome is often portrayed fatalistically as a crushing defeat for the abolitionists, sealing the immunity of slavery in the new republic while calling into question the rights of abolitionists even to petition the Congress — an effort undertaken, in one historian’s estimation, by naïve and “psychologically vulnerable” reformers, unprepared “for the secular interest politics of a modern nation.”

    In fact, although the petition (along with the two others from regional Quaker meetings) did not gain the sweeping reforms it sought, it was decidedly not a failure. For one thing, the mobilization behind it, far from weak-kneed, was the first auspicious political protest of any kind to be directed at the new national government. Strikingly modern in its strategy and its tactics, the mobilization blended insider maneuvering and hard-headed direct appeals to members of Congress with popular propagandizing and political theater of a kind associated with the protest movements of much later decades. The campaign was spearheaded by a delegation of eleven Quaker lobbyists from Philadelphia, including John Pemberton and Warner Mifflin, who were certainly the opposite of naïve and vulnerable. As a consequence, the congressional deliberations over the petitions took a surprisingly radical turn, and in the end the effort secured important political as well as practical gains.

    Lower South slaveholders reacted with predictable fury as soon as congressmen friendly to the abolitionists introduced the petitions on the floor of the House. The slaveholders’ diatribes asserted that the constitutional ban on congressional abolition of the Atlantic slave trade until 1808 meant that the Constitution barred any federal interference with slavery whatsoever. Given the federal consensus, meanwhile, the slaveholders called the petitions unconstitutional on their face and demanded they be rejected without further debate. But despite the inflation of their numbers in the House by the three-fifths clause, the proslavery forces were badly outnumbered. (“Alass — how weak a resistance against the whole house,” one resigned South Carolina congressman wrote.) By a vote of 43 to 11, the House approved sending the radical petitions to a special committee for consideration.

    Working hand-in-hand with members of the special committee, the abolitionists immediately supplied them with a small library of abolitionist writings, while they arranged, through the Speaker of the House, an ally, to distribute additional abolitionist propaganda to the rest of the chamber. The Quaker lobbyists then advised the committee on its report behind the scenes, sharing drafts and submitting their own suggestions while backing up the PAS petition’s claim that the “General Welfare” section of the Constitution’s preamble gave Congress some unspecified powers over slavery. The committee narrowly turned aside that suggestion — by a single vote, John Pemberton reported — and agreed that Congress could not ban the Atlantic slave trade before 1808. Yet it also asserted, contrary to lower South protests, that the federal government could regulate the trade as it saw fit at any time. More portentously, the members included wording asserting that the Constitution empowered Congress to abolish slavery outright after 1808 — making the special committee’s report perhaps the most radical official document on slavery approved by any congressional entity before the Civil War.

    When the report reached the House, the abolitionists swung into action as both agitators and what today we would call lobbyists. Quakers crowded the House gallery to witness the debate, their presence in Quaker gray suits and broad-brimmed black hats inciting and unnerving the southerners. Outside the hall, the abolitionists pursued individual congressmen right down to their lodging houses and taverns and eating places to make their case. Mifflin began a letter-writing campaign, addressed both to individual congressmen and to the House at large. The abolitionists also arranged with allies in the New-York Manumission Society to have a full record of the House debates printed along with antislavery articles in the New York Daily Advertiser, as well as to distribute pamphlets vividly describing the horrors of the slave trade.  

    Finally, the House affirmed Congress’ powerlessness over slavery where it existed and over the Atlantic trade before 1808, and a revised report removed the select committee’s language about abolishing slavery itself after 1808. Yet the outcome was hardly a one-sided triumph for the proslavery southerners. The lower South failed utterly in its initial effort to dismiss the petitions without debate. Evidently, contrary to the slaveholders’ claims, Congress might well have some authority over slavery worth debating. In the course of arguing that point, moreover, several House members had affirmed that, short of abolishing slavery outright, Congress might restrict slavery in various ways quite apart from the slave trade, including, James Madison remarked, banning slavery from the national territories, where, he declared, “Congress have certainly the power to regulate slavery.” And over howls from lower South slaveholders, the final report affirmed that Congress could legislate over specific matters connected to the Atlantic trade before 1808 — issues that, as we shall see, the abolitionists would agitate successfully. In all, the federal consensus stood, but at the same time the House majority repulsed the proslavery forces and backed the abolitionists’ contention that slavery was not wholly immune from federal authority.

    Over the ensuing decade, the abolitionists, far from discouraged, redoubled their national efforts, despite some serious setbacks. The Southwest Territory — what would become the state of Tennessee — was organized with slavery in 1790, with little debate. A coterie of antislavery congressmen could not stave off passage of the Fugitive Slave Act of 1793. Five years later, a spirited antislavery effort to bar slavery from Mississippi Territory was defeated by a wide margin.

    And yet the abolitionists had reason to remain optimistic. At the state level, the New York legislature, under intense abolitionist pressure, finally passed a gradual emancipation law in 1799 and New Jersey followed five years later, completing the northern “first emancipation.” In part as a response to the Fugitive Slave Act, the American Convention of Abolition Societies was up and running in 1794. There were various signs, from a proliferation of freedom suits in Virginia to the spread of antislavery opinion in Kentucky and Tennessee, that the upper South was seriously questioning slavery. In national politics, antislavery congressmen, numbering about a dozen and led by a few northerners who worked closely with the abolitionists, made good in 1794 on the victory wrung from the abolitionist petition debates four years earlier, passing a law that outlawed the use of any American port or shipyard for constructing or outfitting any ship to be used for the importing of slaves. 

    Five years later the Reverend Absalom Jones, a prominent abolitionist and mainstay of Philadelphia’s free black community, helped lead an even more propitious effort. Late in 1799, a group of seventy free men of color in Philadelphia, headed by Jones, sent yet another petition to the House of Representatives. The drafters of the petition, as Nicholas Wood has shown, were John Drinker and John Parrish, prominent local Quaker abolitionists who had long worked closely with Jones and other black abolitionists; the signers included members of various black congregations, including Jones’ St. Thomas African Episcopal Church, the majority of them unable to sign their names. 

    The petitioners asked for revisions of the laws governing the Atlantic slave trade as well as the Fugitive Slave Law of 1793. But they also went further, as far as the PAS petitioners had in 1790, pressing for — as the abolitionist congressman Robert Waln observed when he introduced the petition to the House — “the adoption of such measures as shall in due course emancipate the whole of their brethren from their present situation.” Stating that they “cannot but address you as Guardians of our Civil Rights, and Patrons of equal and National Liberty,” the petitioners expressed hope that the House members

    will view the subject in an impartial, unprejudiced light. — We do not ask for the immediate emancipation of all, knowing that the degraded State of many and their want of education, would greatly disqualify for such a change; yet humbly desire you may exert every means in your power to undo the heavy burdens, and prepare the way for the oppressed to go free, that every yoke may be broken.

    As if brushing aside the House’s decision in 1790, the abolitionists, citing once again the Constitution’s preamble, wanted Congress to probe once more the document’s antislavery potential. The idea that Congress had untapped antislavery powers was emerging as a core abolitionist argument. And, though the sources are silent, this portion of the petition may have also had tactical purposes. In 1790, the defeat of grand claims about emancipation proved the prelude to the House affirming Congress’ authority over more specific issues connected to slavery. Roughly the same thing would happen this time.

    Southern slaveholders and their New England allies reacted with predictable wrath. John Rutledge, Jr. of South Carolina thanked God that Africans were held in slavery, then railed against the “new-fangled French philosophy of liberty and equality” — he was talking about Thomas Jefferson and his supporters — that was abroad in the land. Rutledge’s fellow Federalist, the notorious Atlantic slave trader John Brown of Rhode Island, attacked the petition’s effort to restrain American participation in the trade, while another New England Federalist, Harrison Gray Otis, sneered that most of the petitioners were illiterate and thus unable to understand what they had endorsed, and that receiving their memorial would mischievously “teach them the art of assembling together, debating, and the like.”  

    The next day, the House considered a resolution condemning those portions of the petition “which invite Congress to legislate upon subjects from which the General Government is precluded by the Constitution.” The resolution passed 85 to 1, a crushing repudiation of the idea that Congress possessed implied powers to interfere directly with slavery where it already existed. Even the abolitionist congressman who presented the free blacks’ petition ended up voting with the majority.

    But that was only part of the story. The core of antislavery Northerners fiercely rebutted the proslavery outbursts. George Thacher, a Massachusetts Federalist and longtime antislavery champion in the House, repudiated the racist attacks on the petitioners, upheld the right of constituents to a redress of grievances regardless of their color, and condemned racial slavery as “a cancer of immense magnitude, that would some time destroy the body politic, except a proper legislation should prevent the evil.” Moreover, once the condemnation resolution predictably passed — Thacher’s was the sole vote in opposition — the House was free to act on the petitioners’ more specific demands, which it swiftly did, sending the petition to committee — thereby, among other things, affirming the right of free blacks to petition Congress.

    The committee assigned to consider the petition sympathized with its section on the fugitive slave law — free blacks, its report contended, were “entitled to freedom & Protection” — but the slaveholders and their allies prevailed on that issue on jurisdictional grounds. On the slave trade, however, Congress took action. After a heated debate, the House, with the concurrence of the Senate, approved by a wide margin the Slave Trade Act of 1800, banning even indirect involvement by Americans with the shipping of Africans for sale in any foreign country while also authorizing naval vessels to seize ships that were in violation. While it expanded enforcement of the restrictive law enacted six years earlier, the new law reinforced expectations that the Atlantic slave trade to the United States would be entirely abolished at the earliest possible date in 1808. 

    The scale of this antislavery victory should not be exaggerated — indeed, three years later South Carolina would re-open its own slave trade with a vengeance — but neither should it be scanted. Most immediately, within a year, under the new law’s provisions, the man-of-war U.S.S. Ganges seized two illegal slave schooners off the coast of Cuba and discovered more than one hundred and thirty African captives, men, women, and children, in chains, starving and naked; once freed, the Africans obtained apprenticeships and indentures from the Pennsylvania Abolition Society. The free black petition debate also marked a high point in the efforts by the antislavery congressmen, first to restrict and regulate the Atlantic slave trade prior to its abolition, and then to reform and restrict the Fugitive Slave Law.

    More broadly, that same small but resolute group took up new antislavery battles and established an antislavery presence that from time to time became an antislavery majority. This was not just the agitation of an elite. It must be emphasized that the congressmen acted in coordination with dense interregional as well as interracial networks of antislavery activists, organized in state abolition societies, churches and church committees, mutual aid societies, fraternal groups, and more. With such popular backing, year after year, antislavery congressmen voiced defiantly antiracist as well as antislavery sentiments on the floor of the House, exploring the Constitution in search of antislavery meanings, trying to find in it whatever powers they could whereby the federal government could limit slavery’s expansion and hasten its eventual eradication. Some of their successes were defensive, as when they defeated efforts to augment the Fugitive Slave Act, to otherwise restrict the rights of free blacks, and to repeal the Northwest Ordinance’s ban on slavery in Illinois and Indiana. But the antislavery forces in Congress could be aggressive as well.

    In 1804, once again bidden by abolitionist petitions, the Senate approved a provision that would have effectively shut the domestic slave trade out of the entire Louisiana Territory, obtained from France a year before, while the House, stunningly, passed a bill that banned outright further introduction of slavery into the territory. The House provision failed to gain approval from the Senate, and the efforts to keep slavery out of Louisiana proved futile, but the passing success was a signal that the antislavery presence in Congress had grown since 1790. Fittingly, the effort in the House was led by a sharp-witted and acid-tongued member from New Jersey named James Sloan, a Jeffersonian Republican who had cut his political teeth as a member of the New Jersey Abolition Society and as its delegate to the American Convention. A permanent goad to the southern slaveholders, including those in his own party, Sloan would cause an uproar in the House in 1805 by proposing a plan for gradual emancipation in the District of Columbia — yet another effort to find places in the Constitution giving the federal government the authority to attack slavery.  

    Finally, in 1807, at the earliest date stipulated by the Constitution, Congress approved the abolition of the Atlantic slave trade to the United States. With the bill supported by most of the large Virginia delegation, whose slaveholders stood to benefit, the outcome was a foregone conclusion, but the antislavery members had to beat back several efforts to soften the law, including one proposal by the states-rights dogmatist John Randolph which in effect would have recognized slaves as property in national law. “Hail! Hail, glorious day,” the New York black abolitionist minister Peter Williams, Jr., an ally of the New-York Manumission Society, exclaimed at the city’s celebration.

    This high point in the politics of early American abolitionism would also prove a turning point. Although national agitation continued, there was a noticeable decline in enthusiasm in the ranks, at least outside Pennsylvania, once New York and New Jersey had completed their emancipation laws. A powerful racist backlash instigated by the Haitian Revolution and then by reactions to northern emancipation jolted the existing abolitionist societies and paved the way for the emergence of the American Colonization Society. Just as their British counterparts perfected the massive petition campaigns required to shake Parliament into abolishing Britain’s Atlantic slave trade, also achieved in 1807, the American movement began to falter. Above all, the dramatic shift in the Southern economy that came with the introduction of the cotton gin in 1793 and the consequent renaissance of plantation slavery dramatically changed the terms of antislavery politics, dispelling forever the original abolitionist hope that the closing of the Atlantic trade would doom American slavery.

    Northern antislavery opinion did rebound after 1815 and reached a political flashpoint during the Missouri crisis of 1819-1820. But the abolitionist organizations, including the American Convention, although still alive and active, were becoming less of a factor in guiding events in Congress than they had been at the end of the eighteenth century. By now, with the expansion of mass mainstream party politics, popular mobilizations in the form of an impromptu Free Missouri movement did more to embolden antislavery congressmen than did the abolitionists’ continued memorials, petitions, and lobbying efforts. And then, in the wake of the Missouri crisis, shaken mainstream politicians sealed what amounted to a bipartisan consensus to prevent slavery from ever again entering into national political debates. With national politics seemingly closed to antislavery agitation, the old Quaker abolitionist strategy of working directly with sympathetic officeholders and political leaders began to look feeble.

    But the fight had been irreversibly joined. The established abolitionist movement’s strategies left an important legacy on which later antislavery political movements would build. Even as the early abolitionist movement sputtered out, it played a part in shaping abolitionism’s future. In forming as sophisticated a political movement as they did, the early abolitionists created a practical model for organized political agitation in the new republic, antedating the political parties that arose thereafter. Although the effectiveness of that model declined after 1800 or so, it never disappeared; and elements of it would remain essential to later abolitionist politics, including the transformation of abolitionist petitioning into monster popular campaigns, along the lines that British abolitionists had pioneered after 1787. 

    The legacy was even more important with respect to antislavery ideology and strategy. Just as the initial impetus of the early abolitionists, dating back to 1775, had been to politicize antislavery sentiment in order to make direct claims on government, so the abolitionists of the early republic perpetuated the idea that politics was the only sure means to achieve slavery’s eradication. In national politics, after the ratification of the Constitution, that meant, above all, advancing antislavery interpretations of the framers’ work. Although the most expansive ideas about Congress’ authority over slavery met with ever firmer resistance, the idea that Congress possessed numerous implicit or indirect powers to hasten slavery’s demise remained.

    Consider again the petition from the free men of color of Philadelphia in 1799. In addition to asking Congress to find the authority to abolish slavery, the petition included its own innovative antislavery interpretation of the Constitution to demonstrate that the Fugitive Slave Law was unconstitutional: as “no mention is made of Black people or Slaves” in the Constitution, the document observed, it followed that “if the Bill of Rights or the declaration of Congress are of any validity,” then all men “may partake of the Liberties and unalienable Rights therein held forth.” The assertion got nowhere, but it had been made, and as long as abolitionists kindled a basic optimism about the Constitution’s antislavery potential, they would sustain their belief that political efforts, and not moral suasion alone, would bring the complete abolition of American slavery. 

    This optimism peaked again during the Missouri crisis, when abolitionists seized upon federal control of the territories and the admission of new states as an instrument to commence slavery’s abolition. The optimism persisted through the 1820s, even as the colonization movement flourished and even as mainstream political leaders built a new system of national politics based on two opposed intersectional national parties — a party system deliberately designed to keep antislavery agitation at the margins. In 1821, a sometime colonizationist, the pioneering abolitionist editor Benjamin Lundy, offered a comprehensive seven-point plan to abolish slavery under the Constitution that began with banning slavery in the national territories and abolishing the domestic slave trade. Four years later, Lundy joined with the abolitionist and political economist Daniel Raymond in trying to establish an antislavery political party in Maryland. After that failed, Lundy persuaded the American Convention to pick up the dropped thread of James Sloan’s earlier agitation in the House and pressure Congress to use its authority to abolish slavery and the slave trade in the District of Columbia. He then had the idea of mounting a mass petition campaign to support the demand; and in 1828, working in coordination with a Pennsylvania Abolition Society member, congressman Charles Miner, who had announced his intention to work for abolition in the district, he forced the issue to the floor of the House. Younger PAS members warmed to the campaign and kept it going; so would, somewhat ironically in retrospect, the young editor whom Lundy later picked up as his assistant and brought into the abolitionist cause, none other than William Lloyd Garrison.

    The optimism would be badly battered in the 1830s and 1840s. Some members of a new generation of radical abolitionists, led by Garrison, would conclude that there was no hope of achieving abolition and equality in a political system attached to a proslavery U.S. Constitution — a “covenant with death” and “agreement with hell,” in Garrison’s famous condemnation. Only moral suasion backed with militant protest, Garrison declared, would advance the cause; moral purification would have to precede political action. Taking the long view, this represented as much a regression as an advance, back to the anti-political stance of the more pious of the Quaker abolitionists in the 1750s and 1760s. Garrison’s absolutist high-mindedness forthrightly but perversely lifted the cause above the grimy necessities of actual politics. 

    Yet for all of Garrison’s fiery and intrepid polemics, he and his followers were a minority inside the abolitionist movement, increasingly so after 1840. The abolitionist majority never relinquished the idea, passed on from the first-wave abolitionists, that Congress, by acting wherever it could against slavery, would hasten slavery’s destruction. Inside Congress, meanwhile, a luminary with antislavery convictions but no previous antislavery record, John Quincy Adams, led a small group of colleagues in a guerilla war against the gag rule and finally prevailed in 1844. Adams, the ex-president turned congressman, was a singular figure in American politics, unlike any before or since; and the 1840s were not the 1820s or the 1790s. But Adams, who came to work closely with abolitionists, in his way reprised the roles of George Thacher, James Sloan, and Charles Miner, becoming the face of antislavery inside the Capitol — “the acutest, the astutest, the archest enemy of slavery that ever existed,” in the view of his fiercely proslavery Virginia rival Henry A. Wise.

    By the time he collapsed and died on the floor of the House in 1848, opposing the American war with Mexico, Adams had also helped turn antislavery politics back toward issues concerning federal power over slavery in the territories — the very issues that, within a decade, led to the formation of the Republican Party. The abolitionists’ search for the constitutional means to attack slavery, begun in 1790, culminated in the agitation over Kansas, the convulsions that followed the Dred Scott decision in 1857, and everything else that led to the Civil War. All of which is a vast and complicated story, making the final connection between the antislavery politics of Anthony Benezet and Benjamin Franklin and those of Frederick Douglass and Abraham Lincoln. The important point, in the consideration of American origins, is that the early American abolitionists, audacious in their own time, formulated the essentials of a political abolitionism that, however beleaguered and often outdone, announced its presence, won some victories, and made its mark in the national as well as state politics of the early republic. It was not least owing to this constitutive achievement of American democracy that in the relatively brief span of fifty years, some of them very violent, slavery would be brought to its knees.

    Which brings us back to Benjamin Quarles’ observations about the concomitant development of American slavery and American antislavery. The struggle for justice is always contemporaneous with injustice, quite obviously, and the power of injustice to provoke a hostile response is one of the edifying lessons of human life. Once joined, that struggle forever shapes both sides: there is no understanding the growth of pro-slavery politics, leading to the treason of secession, without reference to the growth of anti-slavery politics, just as anti-slavery politics makes no sense absent pro-slavery politics. But the history of anti-slavery in America, even during its most difficult periods, is not merely a matter of edification. It is also a practical necessity, a foundation for political action. It presents contemporary anti-racism with a tradition from which it can draw its ideas and its tools. It is a barrier against despair, and a refreshment of our sense of American possibility. The struggle against slavery was hard and long, and it was won. The struggle against racism is harder and longer, and it has not yet been won. But as our history shows, it has certainly not been lost.

    Loosed Quotes

    THE SECOND COMING 

    Turning and turning in the widening gyre
    The falcon cannot hear the falconer;
    Things fall apart; the centre cannot hold;
    Mere anarchy is loosed upon the world,
    The blood-dimmed tide is loosed, and everywhere
    The ceremony of innocence is drowned;
    The best lack all conviction, while the worst
    Are full of passionate intensity.

    Surely some revelation is at hand;
    Surely the Second Coming is at hand.
    The Second Coming! Hardly are those words out
    When a vast image out of Spiritus Mundi
    Troubles my sight: somewhere in sands of the desert
    A shape with lion body and the head of a man,
    A gaze blank and pitiless as the sun,
    Is moving its slow thighs, while all about it
    Reel shadows of the indignant desert birds.
    The darkness drops again; but now I know
    That twenty centuries of stony sleep
    Were vexed to nightmare by a rocking cradle,
    And what rough beast, its hour come round at last,
    Slouches towards Bethlehem to be born?

           W.B. YEATS

    Turning and turning in the widening gyre
    The falcon cannot hear the falconer;
    Things fall apart; the centre cannot hold;

    ….

    The best lack all conviction, while the worst
    Are full of passionate intensity.

    In every crisis they appear, those famous and familiar lines from “The Second Coming,” written in 1919 by W. B. Yeats. Journalists and critics alike seem to take them as final assertions of Yeats’ own beliefs. Such innocent judgments do not ask why those lines open the poem, or for how long their assertions remain asserted. The poem itself has become lost behind the quotability of its opening lines. And Yeats, it seems, wants to be a pundit.

    In our ready “yes, yes” to those lines, we think we are accepting the judgment of a sage, but by the time we reach the close of the poem — which is a question, not an assertion — we are driven to imagine the changing states of the writer composing this peculiar poem, and we raise questions. What feelings required Yeats to change his bold initial stance, and in what order did those feelings arise? In order to understand this poem, to free it from its ubiquitous misuses, and to restore it to both its opening grandeur and its subsequent humiliation, those are the questions that we must answer. 

    Yeats was an inveterate reviser of his own ever-laborious writing: recalling his difficulty in composing “The Circus Animals’ Desertion,” he confesses, “I sought a theme and sought for it in vain,/ I sought it daily for six weeks or so.” (Mention of that poem in his letters of the time proves this no exaggeration: I counted the weeks.) What was the obstacle suspending his progress? (He spends the poem finding out.) In “Adam’s Curse” he remarks in frustration, “A line will take us hours maybe.” Hours to do what? “To articulate sweet sounds together.” Yeats puts the sequence of sounds first; he composed by ear. Are the resulting sounds always “sweet” in the ordinary sense of the word? Not at all; but they are “sweet” in the internal order of rhythms and styles as the poem evolves. When the poet has articulated its theme, its sounds, and its lines to the best of his powers, the ear registers its satisfaction.

    “The Second Coming” is a lurid refutation of the lurid Christian expectations of the Second Coming of Christ, which Jesus himself foretells in Matthew 24:29-30:

    Immediately after the distress of those days, the sun will be darkened, and the moon will refuse her light, and the stars will fall from heaven, and the powers of heaven will rock; and then the sign of the Son of Man will be seen in heaven; then it is that all the tribes of the land will mourn, and they will see the Son of Man coming upon the clouds of heaven, with great power and glory.

    Yeats proposes a surreal alternative to Jesus’ prophecy, proposing that on the Last Day we will see not Christ in majesty but a menacing, pitiless, and coarse beast who “slouches toward Bethlehem to be born.” “After us, the savage god,” Yeats had said as early as 1896. He watched through the decades, appalled by the sequential horror of world events: the World War of 1914-1918; the failed Easter Rising in Ireland in 1916; the Bolshevik Revolution in 1917. And his first assertions in “The Second Coming” are indeed thoughts prompted by such political upheavals (and by earlier ones — Marie Antoinette appears in the drafts).

    But what sort of assertions does he choose to express his thoughts? After the octave of assertions, there is a break not entirely accounted for, since the whole poem is not written in regular stanzas, and there are no further breaks. The compressed sentiments preceding the break are undermined by the unexplained and increasing mystery of the poet’s phrases, bringing the reader into the perplexity of the poet. The whole octave is full of riddles: What is a gyre? Whose is the falcon? What is the centre the center of? Why all the passive verbs? Who loosed the anarchy? Whose blood, loosed by whom, has dimmed what tide? What is meant by the ceremony of innocence? Who are the best and who are the worst? Such abstract language, such invisible agents, and such unascribed actions persist in Yeats’ opening declarations, down to the period that closes the octave.

    The quotability of Yeats’ opening passage derives, of course, from the total and unmodified confidence of its initial reportage, impersonal and unrelenting, offering a naked list of present-tense events happening “everywhere.” Stripped to their kernels, these are Yeats’s truculently unmitigated hammer-blows of grammar:

    The falcon cannot hear
    Things fall apart
    The centre cannot hold
    Mere anarchy is loosed
    The blood-dimmed tide is loosed
    Everywhere the ceremony of innocence is drowned
    The best lack all conviction
    The worst are full of passionate intensity

    The break, after Yeats’ introductory eight-line block, leads an educated reader to expect that a six-line block will follow, completing a sonnet. Yet the poet finds himself unable to maintain his original jeremiad, which has been aggressive, omniscient, panoramic, and prophetic. Yeats “begins over again,” and utters in the fourteen lines following the break a complete second “sonnet,” a rifacimento of the one originally intended, in which he rejects his earlier rhetoric of impersonal omniscience as inauthentic from his human lips. Who is he to speak as though he could see the world with the panoramic scan proper only to God? That so many successive writers have been eager to reissue his lines reveals how greatly the human mind is seduced by the vanity of the unequivocal. Can we requote without unease what the poet himself immediately rejected?

    Although “The Second Coming” begins with an attempt at couplet-rhyme, soon — as Peter Sacks has pointed out to me — the couplets begin to disintegrate, as though they themselves were intent on demonstrating how “things fall apart.” After the break, Yeats reveals a second attempt at a fourteen-line sonnet, one exhibiting a traditional “spillover” octave of nine lines (implying overmastering emotion in the writer) before a truthful closing “sestet” of five lines, making up the desired fourteen. The second, revisionary octave replaces the certainty of the poet’s original octave with the self-defensive uncertainty of “Surely.” Longing for a revelation more humanly reliable than an unsupported façade of godlike prophecy, Yeats insistently utters his second “Surely,” one no less dubious than the first. The second “Surely” attempts to locate a cultural myth to which he can attach the vision vouchsafed to him in a revelation arising within his human consciousness. “Surely the Second Coming is at hand. / The Second Coming!”

    For the first time in the poem, we hear Yeats speaking in the first person, declaring that “a vast image out of Spiritus Mundi / Troubles my sight.” The poet is the sole spectator of this vast image, and he claims that it stems not from his own bodily sense of sight but from the World Spirit, a universal Spiritus Mundi always potentially able to rise into human awareness. (Poets so often describe the initial inspiration for a poem as something coming unbidden that the reader is not troubled by Yeats’ myth of a World-Spirit supplying the image for his revelation.) The poet has decided that it is more honest, more tenable, to write in the first person, to present himself as one whose imagination has reliably generated a telling and trustworthy “vast image” of his historical moment. He has forsaken his impressive but fraudulent rhetoric of omniscience for an account of his private inspiration.  

    Once Yeats has repudiated his initial “divine” posture as a guaranteed seer-of-everything-everywhere, he can take on, in the first person, his limited historical image-making self and create with it a “human” sestet for his newly “remade” sonnet. Admitting the fallibility of any transient metaphorical image, he acknowledges that his image vanishes, “the darkness drops again,” and he is left alone. Yet he grandly maintains, in spite of his abandoning a prophetic stance, that he now definitely “knows” something.

    The “something” turns out to be a single historical fact: the exhaustion of Christian cultural authority after its “twenty centuries” of rule. His “vast image” — its nature as yet unspecified — has shown him that Christianity will be replaced by a counter-force, a pagan one. Drawing on his reading of Vico and Herbert Spencer, Yeats believed that history exhibited repetitive cycles of opposing forces. Just as Christianity overcame the preceding centuries of Egypt and Greece, now it is time for some power to defeat Christianity.

    In his private “revelation” the poet has seen the Egyptian stone sphinx asleep “somewhere” in sands of a desert. (The uncertain “somewhere” admits the loss of the initial “everywhere” of Yeats’ prophetic opening.) The “stony sleep” of the Sphinx has lasted through the twenty centuries of Christianity, but now Fate has set an anticipatory cradle rocking in Bethlehem, birthplace of the previous god, and a sphinx-like creature rouses itself to claim supremacy:

    The darkness drops again; but now I know
    That twenty centuries of stony sleep
    Were vexed to nightmare by a rocking cradle…

    Although the poet “knows” that Christianity is undergoing the nightmare of its death-throes, he cannot declare with any confidence what will replace it. He can no longer boast “I know that…”: he can merely ask a speculative question which embodies his own mixed reaction of fear and desire to the vanishing of a now outworn Christianity, the only ideological system he has ever known. What will replace the Jesus of Bethlehem, he asks, and invents a brutal and unaesthetic divinity, a sphinx seen in glimpses — “with lion body and the head of a man, / A gaze blank and pitiless as the sun.” The desert birds (formerly, it is implied, perched at rest on the immobile stone of the Egyptian statue) are now disturbed by the unexpected arousal of the “slow thighs” beneath them. The indignant birds, their movement in the sky inferred from their agitated cast shadows, “reel about,” disoriented, projecting, as surrogates, the poet’s own indignation as he guesses at the future parallel upheaval of his own world. Unable to be prophetic, unable now even to say “Surely,” the poet ends his humanly authentic but still unsatisfied sestet with a speculative question, one that fuses by alliteration “beast” and “Bethlehem” and “born”:

    And what rough beast, its hour come round at last,
    Slouches towards Bethlehem to be born?

    A conventional reading of the poem might take us this far. But no one, so far as I know, has commented that the culminating and ringing phrase, “Its hour come round at last,” is an allusion to Jesus’ famous statement to his mother at the wedding feast at Cana. When she points out to her son that their host has run out of wine, he rebukes her as he had once done in his youth when she had lost him in Jerusalem and found him preaching to the rabbis in the temple: “Wist ye not that I must be about my Father’s business?” (Luke 2:48-49) At Cana, Jesus is even harsher as he tells his mother that he is not yet willing to manifest his divinity: “Woman, what have I to do with thee? mine hour is not yet come.” Not answering her son’s austere question, she simply says to the servants, “Whatsoever he saith unto you, do it.” He tells them to fill their jugs with water, yet when they pour it is wine that issues, as, in silent obedience to his mother, Jesus performs his first miracle, even though to do so means changing his own design of when he will reveal his divinity. The evangelist comments: “This beginning of miracles did Jesus in Cana of Galilee, and manifested forth his glory” (John 2:4-5, 11). Unlike Jesus, who wished to delay his hour of divine manifestation, Yeats’ rough beast has been impatiently awaiting his own appointed hour, and it has come. His allusion to Jesus’ “Mine hour is not yet come” establishes a devastating parallel between the rough beast’s presumed divinity and that of Jesus, as the poet quails before the savage god of the future.

    One senses there must be a literary bridge between the glorious “hour” of Jesus and the hideous hour of the rough beast. As so often, one finds the link in Shakespeare. In Henry V, Shakespeare alludes to Jesus’ remark, but adds the malice and impatience that will be incorporated by Yeats in his image of the rough beast. A French noble at Agincourt describes, in prospect, the vulturous hovering of crows waiting to attack the corpses of the English who will have died in battle. Eager for their expected feast on English carrion, “their executors, the knavish crows, / Fly o’er them, all impatient for their hour.” We know that the rough beast has been, like the crows, “all impatient for [his] hour,” because, once loosed on the world, he knows that his appointed hour, long craved by him, has come “at last.” Yeats had been alluding to Jesus’ words about the appointed hour ever since 1896: in his youthful poem “The Secret Rose,” a benign apocalypse is ushered in by the idealized romance symbol of the rose. He even remembered — writing in 1919 — his original inscription of the longing word “Surely” in the envisaged victory of the Secret Rose:

    Surely thine hour has come, thy great wind blows,
    Far-off, most secret, and inviolate Rose?

    “Surely thine hour has come”; “Surely some revelation is at hand”: apocalyptic symbols thread their way through Yeats’ life-work. In the same volume as “The Secret Rose,” we find a contrastively violent version of the End Times, drawing on the sinister Irish legend of a battle in “The Valley of the Black Pig” ushering in what Yeats called “an Armageddon which shall quench all things in the Ancestral Darkness again.” Just as the brave warrior Cuchulain — in Yeats’ deathbed poem, “Cuchulain Comforted” — must be reincarnated as a coward to complete his knowledge of life, so the serene beauty of the Secret Rose must, to be complete, coexist with a twin, a wildness of apprehension. Maud Gonne, whom Yeats loved in frustration all his life, incarnated for him the conjunction of wildness and beauty:

    But even at the starting post, all sleek and new,
    I saw the wildness in her and I thought
    A vision of terror that it must live through
    Had shattered her soul.

    Maud had already appeared in 1904 as the paradoxical “wild Quiet,” “eating her wild heart” (an image of wild love borrowed from the opening sonnet of Dante’s La Vita Nuova). She is the female companion to another apocalyptic creature, the Sagittarius of the zodiac; he is a Great Archer poised, his bow drawn, in the woods of Lady Gregory’s estate. He, like Shakespeare’s predatory birds, “but awaits his hour” to loose arrows upon a degenerate Ireland, where English archaeologists are sacrilegiously excavating sacred Tara and the ignorant Dublin masses are actually celebrating the coronation in England of Edward VII:

    I am contented for I know that Quiet
    Wanders laughing and eating her wild heart
    Among pigeons and bees, while that Great Archer,
    Who but awaits His hour to shoot, still hangs
    A cloudy quiver over Pairc-na-lee.

    By 1919, in “The Second Coming,” the Yeatsian apocalyptic symbol has shed its early romance component of the idealized Rose, has lost the starry constellation of the vengeful zodiacal Archer, and, in the hour of its Second Coming, has become “a vision of terror” like the one Yeats saw in the young Maud’s soul. Yeats had thought of calling his poem “The Second Birth,” but by renaming it “The Second Coming,” he ensured that in spite of the rocking cradle, all his recurrences of “Mine hour is not yet come” recall the self-manifestation of Jesus not as a child, but as the adult of Cana, the miracle-worker who will return to the world at the end of time.  

    “The Second Coming” is in fact a thicket of allusions. A hybrid one pointing to Spenser’s Faerie Queene and Milton’s Paradise Lost adds an opaque quality to the mythical dimension of the rough beast: he cannot be accurately described. Yeats presents him vaguely as “a shape,” borrowing from Spenser the concept of Death’s resistance to visual representation and from Milton the shapeless word “shape.” In Spenser’s first Mutability Canto, after a procession of months representing the passage of time, Death, symbol of the end of time, appears both seen and unseeable, “Unbodièd, unsouled, unheard, unseen”:

    And after all came Life, and lastly Death;
    Death with most grim and griesly visage seene,
    Yet is he nought but parting of the breath;
    Ne ought to see, but like a shade to weene,
    Vnbodièd, vnsoul’d, vnheard, vnseene.

    Imitating his master, the “sage and serious poet, Spenser,” Milton has his Satan meet Death, equally indescribable except by the word “shape” and its successive ever-less-visible negations (Milton substitutes “shadow” for Spenser’s Hades-issued “shade.”) Death confounds even Satan:

    The other shape,
    If shape it might be call’d that shape had none
    Distinguishable in member, joynt, or limb,
    Or substance might be call’d that shadow seem’d,
    For each seem’d either; black it stood as Night,
    Fierce as ten Furies, terrible as Hell,
    And shook a dreadful Dart; what seem’d his head
    The likeness of a Kingly Crown had on.

    Retaining the word “shape” but changing the concept of the shapeless shadowy “shape” inherited from his predecessors, Yeats attempts to describe in disarticulated images the nameless figure of his own chimerical “vast image” with “lion body and the head of a man”: he adds a description of its gaze “blank and pitiless as the sun,” sexualizing it by the “slow thighs” unattached to any completed bodily description, and debasing it by its “slouching” motion, its lurching advance as it gradually reactivates its stony limbs. So grotesque is the figure, so unnameable by any visual word, that Yeats rejects even his own impotent efforts at specialized description, tethering his final question to the vague words “rough beast,” offering nothing but its genus. It is a generalized “beast” rather than a recognized species, let alone an individual creature.

    There are, then, four evolving motions successively representing Yeats’ mind and emotions in “The Second Coming.” We see first an impersonal set of prophetic declamations; these are replaced by a first-person narration of the appearance of the troubling “vast image” coming to replace the Christian past; this, disappearing, is replaced by a “factual” account of the obsolescence of Christianity (“Now I know”); but after this flat declaration of secure knowledge, Yeats can muster no further direct object of what he “knows.” Instead, he launches a final speculative query (“And what rough beast”). These four feeling-states — impersonal omniscience; a first-person boast of a private “revelation”; a “true” historical judgment as to the nightmarish dissolution of the Christian era; and a blurred query uttered in fear — mimic the poet’s changes of response as he attempts to write down an accurate poem of this life-moment. A desire for authentically human speech has made him turn away from his initial confident (and baseless) soothsaying to a personal, transitory, (and therefore uncertain) private “revelation.” He tries finally to attain to truth in judging the end of the Christian era.

    But what truth can he declare of what is to come? He acknowledges — in a move wholly unforeseen in the strong and quotable opening octave — how limited his “knowledge” actually is. The “darkness” of fear cannot be resoundingly swept away by a transitory image from an unknowable source: opacity drops again. By the end, Yeats must forsake his proposed prophetic and visionary and historical styles and resort to a frustrated human voice that confesses the helplessness of the human intellect and the humiliation of admitting incomplete knowledge. At the inexorable approach of an unknowable, shapeless, coarse, and destructive era, “the darkness drops again.” 

    It is not mistaken, however, to think of the resounding opening summary list as “Yeats’ views” as he begins the poem. He even quotes himself in a letter of 1936 to his friend Ethel Mannin, anticipating the next war: “Every nerve trembles with horror at what is happening in Europe, ‘the ceremony of innocence is drowned.’” The sentiments are genuine, but in a poem something more has to happen than the static observation of a moment in time. A credible artifact has to be constructed, the “sweet sounds” have to be articulated, and a persuasive structure has to be conceived. Since Yeats had lost faith in both Blakean denunciation and Shelleyan optimism by the time he wrote “The Second Coming,” he had gained the humility to confess, at the end of the poem, the limits of human knowledge and human vision. Though his diction is still grand in his closing, he is no longer boasting his seer-like knowledge, no longer claiming a unique private vision, no longer able to assuage the nightmare of the End Times of Christianity. To admit Yeats’ final acknowledgment of human incapacity is essential to perceiving his overreaching in his earlier claims to prophetic power and visionary insight. 

    Painful as it is to see the truncated opening lines — however memorable — become all that is left of the poem, and of Yeats’ character, in popular understanding, it is more painful to see the disappearance of the human drama of the poem in itself as it evolves, in its desire for authentically human speech and an authentic estimation of human powers — better and truer things than arrogant and stentorian utterances of omniscience. In repudiating his first octave of omniscience, making a break, and then having to write a different “sonnet” to attain a more accurate account of himself and his time, Yeats repeats, by remaking his form, his disavowal of the vain human temptation to prophecy. “Attempting to become more than Man, we become less,” said Blake, in what could serve as an epigraph to Yeats’ intricate and terrifying and regularly misread poem.

    On Indifference

    What blurt is this about virtue and about vice?
    Evil propels me and reform of evil propels me, I stand indifferent,
    My gait is no fault-finder’s or rejecter’s gait,

    I moisten the roots of all that has grown.

    WALT WHITMAN


    The Olympian gods are not our friends. Zeus would have destroyed us long ago had Prometheus not brought fire and other useful things down to us. Prometheus was not being benevolent, though. He was angry at Zeus for having locked away the Titans and then for turning on him after Prometheus helped secure his rule. We humans were just pawns in their game. The myths teach that we are here on sufferance, and that the best fate is to be ignored by these poor excuses for divinities. On their indifference depends our happiness. Fortunately we have only minimal duties towards them, so once the ashes from the sacrifices are swept away, the libations mopped up, the festival garlands recycled, we are free to set sail.

    The Biblical God requires more attention. Though he is sometimes petulant, his providential hand is always at work for those who choose to be chosen. Providence comes at a price, though. We are obliged to fear the Lord, to obey his commandments, and to internalize the moral code he has blessed us with. For purists, this can mean that virtually every hour of every day is regulated. But that is not how the Bible’s protagonists seem to live. They love, they fight, they rule kingdoms, they play the lyre, and only when they lust after a subject’s wife and arrange for his death in battle does God stop the music and call them to account. And repentance done, the band strikes up again. The covenant limits human freedom, but it also self-limits God’s. Our to-do list is not infinite. Once we have fulfilled our duties, we are left to explore the world. We good here? Yeah, we’re good.

    Tut, tut child! Everything’s got a moral, if only you can find it.

    THE DUCHESS, ALICE IN WONDERLAND


    But as a Christian my work is never done. I must have the vague imitatio Christi ideal before my eyes at all times and must try to answer the riddle, what would Jesus do?, in every situation — and bear the guilt of possibly getting the answer wrong. Kierkegaard was not exaggerating when he said that the task of becoming a Christian is endless. It can be brutal, too. Jesus told his disciples they must be ready at any moment to drop everything if the call comes, adding, if any man come to me, and hate not his father, and mother, and wife, and children, and brethren, and sisters, yea, and his own life also, he cannot be my disciple.

    Saint Paul’s God has boundary issues. More busybody than Pied Piper, he is always looking into our hearts, parsing our intentions, and demanding we love him more than we love ourselves. That master of metaphor Augustine found a powerful one to describe the new regime: Two cities have been formed by two loves: the earthly city was created by self-love reaching the point of contempt for God, the Heavenly City by the love of God carried as far as contempt of self. He hastened to add that the earthly city plays a necessary role in mortal life, offering peace and comfort in the best of times. But over the millennia — such is the power of metaphor over reason — zealots hedging their bets have concluded that if we are to err, it is better to fall into self-loathing than discover any trace of pride within. A moral scan will always turn up something. And so they lock themselves into panopticons where they serve as their own wardens and where nothing is a matter of spiritual indifference.

    Subsequent Christian theologians raised doubts about this rigorist picture of the Christian moral life. In the Middle Ages they debated whether there might be such things as “indifferent acts,” that is, acts that have no moral or spiritual significance. Scratching one’s beard was a common example used by the laxists. Aquinas conceded the point concerning beards, but otherwise declared that if an action at all involves rational deliberation it cannot be indifferent, since reason is always directed towards ends, which can only be good or evil. Q.E.D. And so the class of genuinely indifferent acts was left quite small in official Catholic teaching. That sat just fine with a monastic and conventual elite already devoting their lives to self-abnegating spiritual exercises, accompanied by tormenting doubts about whether such exercises were prideful. But they were a class apart. Ordinary clerical functionaries led more lenient lives, which is how we got cardinals with concubines and with Titian portraits of themselves hanging over the fireplace. Vigilance was not their vocation.

    In the Protestant view, that was precisely the problem. Protestantism, and Calvinism in particular, brought back moral rigorism and then democratized it. Now every burgher was expected to frisk himself while meditating on the terrifying mystery of predestination. The anxiety only increased when Protestants faced the choice among different and hostile denominations. Was there only one true church? Or were certain dogmatic disputes among denominations matters of indifference to God? Combatants in the Wars of Religion said no: true Christians must not only walk the right walk, they must talk the right talk. But, over time, as the denominations proliferated like tadpoles in a pond, and the doctrinal differences among them became more abstruse, the rigorist line became more difficult to maintain. Perhaps the Lord’s house has many mansions after all.

    That thought is exactly what Catholic critics of the Reformation worried about. If we concede that there are many Christian paths to salvation, people will ask whether there are also non-Christian religious paths. If we concede that there are, they will then ask whether there are decent and admirable non-religious paths to moral perfection. And if we concede that there are — here is the crucial leap — they will be tempted to ask whether there might also be decent and admirable ways of life that do not revolve around moral perfection. The danger would not be that people would abandon morality altogether; no self-declared anti-moralist, not even Nietzsche, has ever renounced the words must and ought. It would be that they would start considering morality to be just one dimension of life among others, each deserving its due. It would mean the end of morality’s claim to be the final arbiter of what constitutes a life well lived.

    The gradient on this slope of questioning is steep. Montaigne slid to the bottom of it while the Wars of Religion were still raging and has been dragging unsuspecting readers along with him ever since. He did not openly state the case against the imperialism of conscience; a bon vivant, he was in no rush to become a bon mourant. Instead he wrote seemingly lighthearted essays full of anecdotes that subtly held up the rigorist life to ridicule or revulsion, implying that there must be a better way to live, without specifying exactly what that might be. He only pointed to himself as a genial, indeed irresistible, exemplar of tolerant, urbane contentment.

    Pascal, Montaigne’s greatest reader, immediately discerned the threat that the Essays posed to the Christian moral edifice: Montaigne inspires indifference about salvation, without fear and without repentance. Atheism is refutable, but indifference is not. The scholastic debate over indifferent acts had presumed a desire to get our moral houses in order. The Reformation and Counter-Reformation debates over justification presumed a desire to get our theological houses in order. Montaigne’s indifferentism, as it came to be called, made all well-ordered houses look menacing or faintly ridiculous. That is why indifferentism was denounced along with liberalism as modern “pests” by Pope Pius IX in his Syllabus of Errors of 1864. He understood that there is nothing more devastating to dogma than a shrug of the shoulders.

    It is nonsense and an antiquated notion that the many can do
    wrong. What the many do is God’s will. Before this wisdom all
    people have to this day bowed down — kings, emperors,
    and excellencies. Up to now all our cattle have received
    encouragement through this wisdom. So God is damned well
    going to have to learn how to bow down too.

    KIERKEGAARD


    Americans’ relation to democracy has never been an indifferent one — or a reasoned one. For us it is a matter of dogmatic faith, and therefore a matter of the passions. We hold these truths to be self-evident: has ever a more debatable and consequential assertion been made since the Sermon on the Mount? But for Americans it is not a thesis one might subject to examination and emendation; even American atheists skip over the endowed by their Creator bit in reverent silence. We are in the thrall of a foundation myth as solid and imposing as an ancient temple, which we take turns purifying like so many vestals. We freely discuss how the mysterium tremendum should be interpreted and which rituals it imposes on us. But the oracle has spoken and is taking no further questions.

    Which is largely a good thing. Not long ago there was breezy talk of a world-historical transition to democracy, as if that were the easiest and most natural thing in the world to achieve. Establish a democratic pays légal, the thinking went, and a democratic pays réel will spontaneously sprout up within its boundaries. Today, when temples to cruel local deities are being built all over the globe, we are being reminded just how rare a democratic society is. So let us appreciate Americans’ unreasoned, dogmatic attachment to their own. Not everything unreasoned is unwise.

    But neither are all good things entirely good. This is what the dogmatic mind has trouble grasping. If some end — the rule of the saints, say, or the dictatorship of the proletariat — is deemed to be worth pursuing, the dogmatist needs to believe it is the only and perfect good, carrying no inherent disadvantages. Blemishes must be ignored so as not to distract the team. But once problems become impossible to ignore, as inevitably they will, they must be explained. And so they will be attributed either to alien, retrograde forces that have infiltrated paradise, or to insufficient zeal among believers in pursuing the good. The dogmatic mind is haunted by two specters: the different and the indifferent.

    Americans’ dogmatism about democracy strengthens their attachment to it, but it weakens their understanding of it. The hardest thing for us is to establish enough intellectual distance from modern democracy to see it in historical perspective. (While virtually every American university has courses on “democratic values,” I am unaware of any that offers one on “undemocratic values,” despite the fact that almost all societies from the dawn of time to the present have been governed by them.) The Framers had experience with monarchy and had studied the failed republics of the European past. They looked upon democracy as one political form among others, a means to particular ends, with strengths and weaknesses like any other political arrangement. But once Americans in later generations came to know nothing but democratic life, democracy became the end itself, the summum bonum from which all discussion and debate about means must flow. When Americans ask how can we make our democracy better? what they are really asking is how can we make our democracy more democratic? — a subtle but profound difference.

    Our dogmatism shows up in other ways, too. Spend some time abroad and you start to notice that Americans rarely express mixed feelings about their country as other peoples do about theirs. We oscillate humorlessly between defensive boosterism and self-flagellation, especially the latter over the past half century. Today there is nothing more American than condemning American democracy or declaring ourselves alienated from it. Yet the only charge we can think of leveling against it is that of failing to be democratic enough. No one appreciates the irony except the alert foreign observer with a sense of humor, like the divine Mrs. Trollope. Foreign anti-Americanism is always, at some level, anti-democratic, which is what can make it enlightening, and useful to us. American anti-Americanism is hyper-American and earnest as dust. We find it virtually impossible to get outside ourselves. We breed no Tocquevilles, we must import them.

    Other countries claim to revere democracy, and many do. But few think of democracy as a never-ending moral project, a world-historical epic. And none have considered it their divine duty to bring democracy to the unbaptized. The Protestant stamp on the American mind is so deep that collectively we take on the mantle of the Pilgrim Church marching towards a redemption in which all things will be made new. For much of our history the sacred individual task of becoming a more Christian Christian ran parallel to the sacred collective task of becoming a more democratic democracy. Note that I do not say liberal democracy. For there is nothing liberal about Americans when they are on the march. Which is why when conscription begins, the indifferent, who for whatever reason do not feel like marching just now or have other destinations in mind, beat a retreat. Some have sought refuge in rural solitude, some in the American metropolis, some in foreign capitals. Anywhere where they might be free of the unremitting imperative to become a better person or a better American. Anywhere where they could simply become themselves.

    The thesis that huge quantities of soap testify to our greater
    cleanliness need not apply to the moral life, where the more
    recent principle seems more accurate, that a strong compulsion
    to wash suggests a dubious state of moral hygiene.

    ROBERT MUSIL


    A hand goes up in the audience: But we are no longer a Protestant country! We are a secular one that has gotten over religious conformism. What on earth are you talking about?

    Thank you for that question. In one decisive respect we have indeed moved beyond Protestantism: we no longer believe we are fallen, sinful creatures. The Protestant divine was severe with his flock and occasionally with his country, but he was also severe with himself. He was a busybody because his God was a busybody who put everyone, including the clergy, under divine scrutiny. There is none righteous, no, not one, says Saint Paul. What a terrible way to start the day.

    But in other respects we have retained vestiges of our Protestant heritage and even exaggerated them. Hegel foresaw this. Considering the moral and religious psychodynamics of his time, he observed that the Dialectic has a sense of humor: toss Calvin out the front door and Kant sneaks in through the back. No sooner had the empiricism and skepticism of the Enlightenment disenchanted nature, draining it of moral purpose, than German idealism surreptitiously reestablished the principles of Christian morality on abstract philosophical grounds. And no sooner had Kant midwifed that rebirth than the moral impulse floated free of his universalist strictures and became more subjective, less subtle, more excitable, less grounded in ordinary existence. In a word, it became Romantic. The saints are dead; long live the “beautiful souls.”

    What is a beautiful soul? For Schiller, who coined the term, it was a person in whom the age-old tension between moral law and human instinct had been overcome. In a beautiful soul, he wrote, individual deeds are not what is moral. Rather, the entire character is…The beautiful soul has no other merit, than that it is. Schiller imagined individuals who so fully incarnate the moral law that they have no need of moral reasoning and who experience no struggle to surmount the passions. This beautiful soul does not really act morally, it simply behaves instinctively — and such behaving is good. (Ring a bell? And God saw every thing that he had made, and, behold, it was very good.) A disciple of Kant, Schiller took the moral law to be by definition universal. What he did not anticipate was that the notion of a beautiful soul could inspire a radical impudence in anyone convinced of his or her own inner beauty. Who would not want to be crowned a moral Roi Soleil, absolved in advance of guilt, self-doubt, repentance, and expressions of humility? Who would not want to learn that the definition of righteousness is self-righteousness?

    So, in answer to the question, yes, in one sense America is a post-Protestant nation. The uptight Bible-thumping humbug of yore has been shamed off the public square — but only to make room for networks of self-righteous beautiful souls pronouncing sentence from the cathedras of their inner Vaticans. What no one seems to recognize is that they are an atavism, a blast from the past, not a breeze from a progressive future. Like their ancestors, they are prone to schisms and enter civil wars with the giddiness of Knights Templar descending on Palestine. Yet they are bound together by an unshakeable old belief that when it comes to making the world a better place there are no indifferent acts, no indifferent words, no indifferent thoughts, and no rest for the virtuous. Our beautiful souls are Marrano Christians as radical as old Saint Paul. They just don’t know it. Yes, the Dialectic really does have a sense of humor.

    “Ah,” Miss Gostrey sighed, “the name of the good American
    is as easily given as taken away! What is it, to begin with, to be
    one? And what’s the extraordinary hurry?”

    HENRY JAMES


    America is working on itself. It is almost always working on itself because Americans believe that life is a project, for individuals and nations. No other people believes this quite the way we do. There is no Belgian project, no Kenyan project, no Ecuadoran project, no Filipino project, no Canadian project. But there is an American project — or rather a black box for projects that change over time. We are always tearing out the walls of our collective house, adding additions, building decks, jackhammering the driveway and pouring new asphalt. We are seldom still and never quiet. And when we set to work we expect everyone to pitch in. And that means you.

    Which can put you in an awkward position. Let’s say you are unhappy with the project of the moment. Or you approve of it but think it should be handled differently. Or you appreciate the way it is handled but don’t feel particularly inclined to participate right now. Or you even want to participate but resent being dragooned into it or learning that others are being punished for not joining in. Or say that you simply want to be left alone. In any other country these would be considered entirely reasonable sentiments. But not in America when it is at work on itself.

    The projects of our moment may sound radical, but they are just extensions of the old principles of liberty, equality, and justice. That certainly speaks in their favor. What is new, thanks to our beautiful souls, is that the task of making this a better America has now been conflated with that of making you a better person. In the Protestant age, the promotion of Christian virtue ran parallel to the promotion of democracy but usually could be distinguished from it. Bringing you to accept Jesus as your personal savior had nothing necessarily to do with bringing you to accept William Howard Taft as your national savior. The first concerned your person, the second concerned your country.

    In the age of the beautiful soul our evangelical passions have survived and been transferred to the national project, personalizing it. Beautiful souls believe that one’s politics emanate from an inner moral state, not from a process of reasoning and dialogue with others. Given that assumption, they reasonably conclude that establishing a better politics depends on working an inner transformation on others, or on ostracizing them. And thanks to the wonders of technology, the scanning of other people’s souls has never seemed easier.

    These wonders have also landed us in a virtual, and global, panopticon. It has no physical presence, it exists solely in our minds. But that is sufficient to maintain a subtle pressure to demonstrate that we are all fully with the newest American projects. In periods of Christian enthusiasm in the past, elites would make ostentatious gestures of faith in order to ward off scrutiny. They would fund a Crusade, commission an altarpiece, make a pilgrimage, join a confraternity, or sponsor a work of theological apologetics. Virtue-signaling is an old human practice. Today the required gestures are of a political rather than spiritual nature. We have all, individuals and institutions, learned how to make them by adapting how we speak, how we write, how we present ourselves to the world, and — most insidiously — how we present the world to ourselves. By now we hardly notice that we are making such gestures. Yet we certainly notice when the codes are violated, even inadvertently; the reaction is swift and merciless. Such inadvertence, even due to temperament or sensibility, is read as indifference to building a more democratic America, which ranks very high on the new Syllabus of Errors.

    It is of vital importance to art that those who are made its
    messengers should not only keep their message uncorrupted, but
    should present themselves before their fellow men in the most
    unquestionable garb.

    THE CRAYON (1855)


    Aristocracies are aloof and serene. American democracy is needy and anxious. It wants to be loved. It is like a young puppy that can never get enough petting and treats. Who’s a good boy? Who’s a very good boy? And if you repeat this often enough, eventually the dog will lick your face, as if to say, and you’re a good boy too! The rewards for satisfying this neediness, and the penalties for failing to satisfy it, are powerful incentives to conform in just about every sphere of American life, nowhere more consequentially than in intellectual and artistic matters. Every society, every religion, every form of government offers such incentives. Since ancient times worldly intellectuals and artists have understood that they are never entirely free from the obligation to genuflect occasionally, and the clever ones learn how to wink subtly at their audiences to signal when they are doing just that. L’art vaut une messe. Romanticism in the nineteenth century was the first movement to fuel the fantasy of complete autonomy from society, only to itself become a dogma that all thinkers and artists were expected to profess.

    It is one thing, though, to self-consciously genuflect when necessary — and then, just as self-consciously, to stand up when mass is over and return to your workplace. It is quite another to convince yourself that kneeling is standing. Or that you must turn your workplace into a chapel. What Tocqueville meant by the “tyranny of the majority” was exactly this infiltration of public judgment into individual consciousness, changing our perceptions of and assumptions about the world. It is not really “false consciousness,” which is the holding of false beliefs that enhance the power of those who dominate others. Rather it is a kind of group consciousness that morphs and re-morphs arbitrarily like cumulus clouds. False consciousness obscures precise class interests. The tyranny of the majority obscures the interests, feelings, thoughts, and imagination of the self.

    What is so striking about the present cultural moment is how many Americans who occupy themselves with ideas and the imagination — writers, editors, scholars, journalists, filmmakers, artists, curators — seem to be suffering from Stockholm Syndrome. Rerouted from their personal destinations toward a more moral and democratic America, they are losing the instinct to set their own course. They no doubt believe in what they are doing; the question is whether they are in touch enough with themselves to feel any healthy tension between their presumed political obligations and whatever other drives and inclinations they might have.

    Talk to creative young people today and prepare yourself for the patter celebrating the new collective journey, which they have no trouble linking to their personal journeys, however short those still are. The rhetoric of identity is very useful here because it has both individual-psychological and political meaning, blurring the distinction between self-expression and collective moral progress. That is also why identity-talk has become the lingua franca of all grant-making and prize-giving bodies in the United States. The committees are much more comfortable exercising judgment based on someone’s physical characteristics and personal story than exercising aesthetic and intellectual judgment based on the work. Little do the well-meaning young people drawn into this game suspect that they are not advancing into a more progressive twenty-first century. They have simply been rerouted back to the nineteenth century, where they must now satisfy a newer, hipper class of Babbitts. Or, worse, become their own Babbitts, convincing themselves that their creative journeys really are and ought to be part of a collective moral journey.

    This is not to say that art has nothing to do with morality. Morality in the broadest sense, the fate of having to choose among conflicting ends and questionable means, is one of art’s great subjects, particularly the literary arts. But the art of the novelist is not to render categorical moral judgments on human action — that’s the prophet’s job. It is to cast them into shadow, to explore all the ruses of moral reasoning. Literature and art are not sustenance for the long march toward national redemption. They have nothing whatsoever to do with “giving voice” or “telling our stories” or “celebrating” anyone’s or any group’s achievements. That is to confuse art with advertising copy. The contribution of literature and art to morality is indirect. They have the power to remind us of the truth that we are mysteries to ourselves, as Augustine put it. Literature is not for simpletons. Billy Budd was not written for Billy Budds. It was written for grown-ups, or those who would become one. Which is why the status of literature and the other arts has never been terribly secure in the land of puer aeternus.

    In the American grain it is gregariousness, suspicion of privacy,
    a therapeutic distaste in the face of personal apartness and
    self-exile, which are dominant. In the new Eden, God’s creatures
    move in herds.

    GEORGE STEINER


    For some, art and reflection have always served as a refuge from the world. In America, the world more often serves as a refuge from art and reflection. We are only too happy when the conversation turns from such matters to those thought to be more practical, more pedagogical, more ethically uplifting, or more therapeutic. The history of anti-intellectualism in America is less one of efforts to extinguish the life of the mind than to divert it toward extraneous ends. (See On the Usefulness of the Humanities for Electrical Engineering, 3 vols.) Such efforts reflect a perverse sublimation of the eros behind all creative activity, redirecting it from the inner life of the creative person toward some activity that can be judged in public by committees. The result, in intellectual and artistic terms, is either propaganda or kitsch. And we are drowning in both.

    Censorship in America comes and goes. Self-censorship does too, depending on the public mood at any particular time. The most persistent threat to arts and letters in America is amnesia, the forgetting of just what it is to cultivate an individual vision or point of view in a place where thinking, writing, and making are judged to be necessarily directed toward some external end. The barriers to becoming an individual in individualistic America should never be underestimated. Tocqueville’s deepest insight was into the anxieties of democratic life brought on by the promise and reality of autonomy. Freedom is an abyss; the urge to turn from it is strong. The tyranny of the majority is less a violent imposition than a psychologically comprehensible form of voluntary servitude.

    In such an environment, maintaining a state of inner indifference is an achievement. Indifference is not apathy. Not at all. It is the fruit of an instinct to moisten the roots of all that has grown, as Whitman put it, and experience one’s self and the world intensely without filters, without having to consider what ends are being served beyond that experience. It is an instinct to hit the mute button, to block out whatever claims are being made on one’s attention and concern, confident that heaven can wait. It is an instinct for privacy, far from the prying eyes and wagging tongues of beautiful gods and beautiful souls. It is a liberal instinct, not a democratic one.

    Liberalism, Judith Shklar once wrote, is monogamously, faithfully, and permanently married to democracy — but it is a marriage of convenience. That is exactly right. The liberal indifference of Montaigne was a declaration of independence from the religious zealots of his time. But zealotry is zealotry, and democracy has its own zealots. We may look more kindly on their aims but they are no less a potential threat to inner freedom than our homegrown messiahs are. The indifferent appreciate democracy to the extent that it guarantees that freedom; they distrust and resist it the moment they are invited down to the panopticon for a little chat. They are not anti-democratic or anti-justice or reactionary. They understand that a liberal democracy requires solidarity and sacrifice, and reforms, sometimes radical ones. They wish to be good citizens but feel no obligation to cast down their nets and join the redemptive pilgrimage. Their kingdom is not of this continent.

    It is a paradox of our time that the more Americans learn to tolerate difference, the less they are able to tolerate indifference. But it is precisely the right to indifference that we must assert now. The right to choose one’s own battles, to find one’s own balance between the True, the Good, and the Beautiful. The right to resist any creeping Gleichschaltung that would bring a thinker’s thoughts or a writer’s words or an artist’s or filmmaker’s work into alignment with a catechism. Dr. Bowdler be damned.

    America is working on itself. Let it work, and may some good come of it. But the indifferent will politely decline the invitation to shake pom-poms on the sidelines or join a Battle for The American Soul just now. Why now? Because the illiberal passions of the moment threaten their autonomy and their self-cultivation, and have formed a generation that fails to see the value of those possessions. That is the saddest part. Perhaps a later one will again find it inspiring to learn what the early modernist writers and artists who fled the country believed: that America’s claim on us is never greater than our claim on ourselves. That democracy is not everything. That morality is not everything. That nothing is everything.

    “From 2020”

    1.
    The first half having been
    given up to space, I decided
    to devote my remaining
    life to time, this thing we live
    in fishily or on like moss
    or the spores of a stubborn
    candida strain only to be
    gored or gaffed, roots
    fossicked out by rake or have
    our membranes made so permeable
    by -azole drugs the contents
    of the cell flood everywhere.

    The bubble gun I’d bought
    on Amazon had come, so
    flushed, time’s new novitiate,
    I stood outside the door
    in velour slippers with a plastic
    wedge, from M&S, the toes
    gone through, and practised
    pulsing softly on the trigger,
    pushing dribbly hopeless sac
    shapes out, dead embryos
    that, managed all the same
    to right themselves to spheres,
    and bob as bubbles do, the colour
    of a rainbow minced or diced
    into the ornamental tree, or else
    just brim the fatal fence, most
    out of reach of the toddler
    capering side to side to keep
    his balance on the grass, one
    snotty finger prodding like a
    rapper turned jihadist’s threat
    of threat and all, ten seconds in,
    unskinned of radiance,
    re-rendered air.

    This would have been in that
    sad hobbled stretch of week
    between a Sunday Christmas
    and new year, my friends all
    40+, harassed by infants, joylessly
    still slugging Côte de Beaune
    and fennel-roasted nuts, the liver
    detox books not downloaded
    to app but only browsed by phone
    in the dark mornings, slitless.
    (I lay there worrying at my own
    which had the meaty bigness
    underrib of foie gras entier.
    The pillow case smelled horsey,
    sheets unchanged, the laundry
    everywhere, mountainously.)

    It wasn’t till my birthday,
    Jan 3, when schools went back,
    search engines saw a volume
    spike for ‘custody’ and gifs of
    sullen cats with emery boards
    explained the dead-eyed un-
    sheathed fear produced by credit
    card repayment plans and pissing
    on ketosis sticks that the month
    could manifest the rawness
    of new year: poverty then,
    and mock exams; now, enzyme
    supplements, and softening
    the 11s, scooped one layer
    deeper by all that red wine,
    by summer’s oxidative damage.

    2.
    The dry trees lolled in drunken
    groups outside front gates,
    waiting for the council van
    to come. Today, which was
    my birthday, macerated shit
    in nappies from the 24th,
    threaded by the bin in links,
    by twisting, like short sausages
    or poodles fashioned from
    balloons, was binned along
    with bean tails, tonic bottles,
    nails, a mini Lamborghini’s
    snapped-off wheels, a magnum
    bitter round the rim with old
    champagne (that halitosis smell),
    and twenty near-identical
    reception Christmas cards:
    a stippled snow-hung tree
    a bloated, ravaged robin.

    My son propped on one hip,
    front door ajar, both shivering
    in the not yet dawn, the heating
    just about to crackle on —
    raised up his palm in silent
    pleasure at the work being done.
    One man, his shoulders dewy
    with reflective strips, waved
    back and called him by his name
    — the weekly ceremony —
    until he bristled in my arms
    legs stiffening with joy.

    3.
    Downstairs I mixed some Movicol
    into warm juice and saw a
    squirrel run across the grass,
    freeze skinny as a meerkat
    on the mostly mud I’d tried
    to reseed twice last summer.
    (After moss killer, waiting,
    something ferrous, the shady
    lawn seed recommended by
    a friend eventually produced,
    as if by staple gun, a few sparse
    fiercely emerald reeds which died.)

    Both boys had scrambled over
    look! and when they turned away
    behind the mouth and nose
    breath diamonds, fading,
    the squirrel was spray-digging,
    pelleting again, even though
    he must have polished off his nuts
    by Halloween. We’d seen him,
    bushier then, a baby really,
    slyly going back and back,
    as we did on school coach trips
    to the battlefields of Ypres
    ripping through the Monster Munch
    long before the sickening ferry
    with its waffle smell and slot
    machines, the textbook poppy
    fields we’d seen on Blackadder,
    now stretching flatly, forever.

    I suppose the squirrel didn’t know
    the days would stick like curtains
    catching on the outer edge
    of the metal track, the yellow
    fleur de lys a half inch less
    wide open every morning.
    I knew that I could probe it,
    hey Siri, do most squirrels
    make it through to spring in their
    first year of life in urban
    environments, but the fact
    that I was always ladling
    porridge as he dug, donating
    raisins, doing calligraphy
    with smooth or crunchy
    peanut butter — there was
    that whole jack-o-lantern
    month, involving apricots,
    when it rained — only added
    to my sense of having been
    complicit in his losses:
    the bad grass, the Amazon
    deliveries that kept coming
    in white Toyota vans, the
    part-thawed corn cobettes
    siloed in their own brown bag,
    spongy with a mortuary
    softness that repelled me.
    He’d seen all that.

    The boys must be upstairs
    — a long withdrawing roar of
    Avalanche! the scuff of
    falling cushions — so I grabbed
    a handful of cashews and stood,
    unseen outside the window,
    scattering them contritely on
    the mud, around the reeds
    now colourless, and the small
    quill of his wavering tail.

    4.
    It being my birthday I was
    standing there, lost in the screen,
    the screen the same for reading
    on and writing this,
    for writing to, for finding out
    how many steps I’d taken
    yesterday/ in March last year,
    when I had spotted, bled,
    the algorithm always and
    upbraidingly concerned
    with sensed decline: a higher
    average headphone volume,
    deafness beckoning,
    and fewer steps, an upward
    trend in weight from these slack days
    around the year’s end picking
    at the Roses box, and making
    desperate cupcakes from a
    bbe last August box mix
    (the dribbly icing misty
    on the spoon, the wafer dog
    — a fireman — loosely hanging on)
    morbid obesity, then death.

    Its view of future time was,
    in a sense, so frictionless
    I envied it — that whole fin-
    de-siècle confidence:
    if history wasn’t progress
    it was Untergang, Déclin,
    the line traced out as if a ball
    dropping from the balltoss
    met the racket’s sweetspot
    swoof and whipped across the net
    and up, and up, so rather
    than returning it evaded
    satellites, fine meteorites,
    the rain, all things held still
    or left to fall, by gravity,
    and just went up and up,
    and quietly on. In China
    health authorities alarm
    as virus tally reaches 44
    in capital of Hubei province
    Wuhan, I could have read,
    if I’d read every piece of news
    that day. I didn’t, of course.

    5.
    Later, as we watched the moth’s
    drab plates of wing contracting
    on the windowpane or rented
    house’s limewashed skirting board,
    my son would talk of new year
    as the time when we had supper
    in the living room and ‘I
    was very ‘cited’. After baths,
    bedtime, the news, the news,
    Zoom wine with friends whose distant
    houses were still lapped, dustily,
    by sun, I lay unblinking
    on the bed, bean-fed again
    (shakshuka, quesadillas,
    cannellini mulched to paste:
    the NYT was camping poverty)
    and worked the chalky residue
    two paracetamols (expired)
    had striped across my tongue
    with squash, a pint. I searched
    for pleurisy, rib pain, cut glass
    opacities, read Twitter feeds
    of people in Berlin disputing
    quarantine R0 pathogen
    that ship the Princess Diamond
    why cocoons are never safe,
    then watched a video of snow
    massing right to left across
    the scientist’s window in Pankow
    until it was the only medium
    the only crazily still
    mobile thing behind the window
    flecked with paint chips, greasy
    fingerprint galaxies. Beyond,
    beyond: the snow did as it pleased
    effaced revealed the avenue
    he lived on with its scrub of
    park, its single taxi, and the lines
    of parked-up old estates which
    like the broken-backed receding
    linden trees reached to the
    grey horizon’s grainy limit.

    The Peripheralist

    During Black History Month earlier this year, the New York City streetwear boutique Alife brought to market a limited set of six heather grey hooded sweatshirts made of heavyweight, pre-shrunk fourteen-ounce cotton fleece, with ribbed cuffs and waist. The garments, whose sole decorative flourish was the names of black cultural icons — from Harriet Tubman to Marcus Garvey — screen-printed in sans-serif across the chest, retailed for $138 a pop and sold out promptly. Of the six men and women featured in the campaign, there was only one writer: James Baldwin.

    On Instagram, to promote its product, the brand deployed a short clip of Baldwin’s extraordinary debate against William F. Buckley, Jr., on the theme “Is the American Dream at the Price of the Negro?” at the Cambridge Union in 1965 — a grainy YouTube gem beloved by aficionados that was recently brought to mainstream attention in Raoul Peck’s documentary I Am Not Your Negro. A friend messaged the post to me accompanied by the Thinking Face emoji, finger and thumb against the chin, a look of skepticism. I responded differently. I wasn’t incredulous about this cultural commoditization: Baldwin’s name had long since become a kind of shorthand, an emblem of a position — a way, increasingly fashionable in its own right, to signal which side of any number of contested issues of the day one wishes to come down on.

    Jean-Paul Sartre once described the young Albert Camus as “the admirable conjunction of a man, of an action, and of a work,” by which he meant, simply, that there was no daylight between his life and his ideas, and it was impossible to think of one without conjuring the other. In an essay for the New York Review of Books in 1963, in which she contrasted morally virtuous if artistically second-tier writers (“husbands”) with perverse and reckless but exciting geniuses (“lovers”), Susan Sontag took Sartre’s observation as a springboard for a merciless review of Camus’ posthumously published Notebooks. “Today only the work remains,” she asserted. “And whatever the conjunction of man, action, and work inspired in the minds and hearts of his thousands of readers and admirers cannot be wholly reconstituted by experience of the work alone.” Elsewhere she expanded the critique:

    Whenever Camus is spoken of there is a mingling of personal, moral, and literary judgment. No discussion of Camus fails to include, or at least suggest, a tribute to his goodness and attractiveness as a man. To write about Camus is thus to consider what occurs between the image of a writer and his work, which is tantamount to the relation between morality and literature. For it is not only that Camus himself is always thrusting the moral problem upon his readers. … It is because his work, solely as a literary accomplishment, is not major enough to bear the weight of admiration that readers want to give it. One wants Camus to be a truly great writer, not just a very good one. But he is not. It might be useful here to compare Camus with George Orwell and James Baldwin, two other husbandly writers who essay to combine the role of artist with civic conscience.

    What occurs between the image of a writer and her work: the same problem afflicts the reception of Sontag herself. Still, she has a point. She writes elsewhere that Camus, as a novelist, attained a different altitude than either Orwell or Baldwin, but I have never been able to unsee that dressing down of all three “husbandly” men, Baldwin in particular, or to entirely dislodge him from her framework. As the years accumulate and Baldwin’s image and moral authority become ever more flattened, ever more frequently appropriated for the preoccupations of the present moment — with the most casual assumption of self-evidence — something in Sontag’s refusal to play along nags at me. In any event, and even though Baldwin, later in his career, wrote that he had “never esteemed [Camus] as highly as do so many others,” I have always found it useful to think of him as a kind of Harlem companion to the scholarship student from Algeria who became — and then failed to remain — his nation’s moral compass, who was blessed with the same gift of preternatural eloquence, and who struggled mightily and elegantly and perhaps vainly to bridge the disparate worlds that he straddled.

    Like Camus, a decade his senior, James Baldwin was born in the first quarter of the twentieth century in squalor, about as far as possible — spiritually if not physically — from the glittering intellectual circles that he would come to dominate. Both young men were total packages, publishing stories, novels, plays, essays, reviews and reportage after having exploded on the scene fully formed in their twenties. Likewise, both men rose to global stardom outside their home countries, specifically in Paris, and peaked at an age when others only start to hit their stride — more or less around forty. Unlike Camus, Baldwin was not exactly fatherless, but it was necessary for him to eliminate one such figure after another to make space in his life for his own prodigious talent. In this sense, he was every bit the “first man” that Camus intended. By the time that Baldwin died of stomach cancer in the sunbaked Mediterranean village of Saint-Paul-de-Vence — not so far from the equally picturesque medieval town of Lourmarin, where Camus invested his Nobel money and is buried — he too was regarded as passé by a generation of readers no longer interested in reconciling differences or avoiding conflict. “Unfortunately, moral beauty in art — like physical beauty in a person — is extremely perishable,” Sontag warned. Baldwin did have the good fortune to have won at least two very influential younger champions in Henry Louis Gates, Jr. and Toni Morrison. But it was not at all a foregone conclusion that he would become, in the next three decades, nothing less than the pop culture patron saint of an entire generation of black (and increasingly non-black) artists, activists, and writers, in America and beyond.

    I am referring to the generation that came of intellectual age during the Obama presidency and the Black Lives Matter movement, which defined this decade’s response to the spate of highly publicized police and vigilante killings of unarmed African Americans, beginning with Trayvon Martin’s murder in Sanford, Florida in 2012. The enormous renewal of attention paid to Baldwin — which, at least until the coronavirus catapulted The Plague back onto bestseller lists around the world, had eluded Camus — has certainly been merited and illuminating. It has also been reductive and disturbing.

    Poor, black, and not straight — intersectional avant la lettre — Baldwin fits seamlessly, as very few icons from the past are able to do, into the readymade template of our era’s obsession with identity. (Even Sontag, a near-exact contemporary who outlived him by almost twenty years, could not entirely bring herself to admit that she was gay.) Books about Baldwin abound, biographical and literary and political studies, and films too: a cottage industry of Baldwiniana has emerged over the past decade. The most sensational entry in the contest for Baldwin’s halo would have to be Ta-Nehisi Coates’ Between the World and Me, his letter to his teenaged son that was formally modeled on the first section of Baldwin’s book The Fire Next Time, called “My Dungeon Shook: an open letter to my nephew.” The motor of Coates’ essay was the question that Baldwin debated with Buckley — is the American Dream at the price of the Negro? In his own response to that question, Coates divided America into two essentialized camps, the “Dreamers” and a permanent black underclass. Between the World and Me went on to become one of the most widely read and discussed works of nonfiction in the new century.

    In the book’s sole blurb, the late Morrison herself enthused: “I’ve been wondering who might fill the intellectual void that plagued me after James Baldwin died. Clearly it is Ta-Nehisi Coates.” More than anything else, that endorsement bound the two men together in the public’s imagination. In his biography of Baldwin, which appeared last year, Bill V. Mullen goes so far as to argue that Between the World and Me “was singularly responsible for the rediscovery of Baldwin by the Black Lives Matter movement.” Whether or not that is true, five years out a certain irony is clear: Morrison’s remark and Coates’ success had an even greater impact on the way we perceive Baldwin than the way we do Coates.

    Despite the hard-won optimism and ardent emphasis on reconciliation and regeneration through love that distinguishes his work, there is an undeniably pessimistic strain in Baldwin that often rings prophetic today. Drawing on this latter element alone, Coates captured and vocalized the profound disappointment provoked by the many limitations of the first black presidency. Between the World and Me, which so frankly and forcefully embodied the rage and justifiable frustration of an historically oppressed people with a rising set of expectations, rhetorically homed in on a single (mostly but not entirely late-phase) blue note in Baldwin’s catalogue of sonorities. If there is a problem here, it is not that Coates’ version of Baldwin rings altogether false. But it is tendentiously selective. It is a simplifying and coarsening distillation of a versatile and multifaceted writer, a supple and self-contradictory writer, into a single dark and haranguing register. In the process we are made to sacrifice a large amount of the complexity that made the author of Giovanni’s Room and Another Country so special and difficult to pin down. Baldwin is revered, but he is lost.

    Consider also that Oscar-nominated Baldwin documentary, I Am Not Your Negro. Though a decade in the making, the project arrived at and helped to define the Baldwin renaissance. The film takes as its impetus Baldwin’s thirty-page unfinished manuscript, Remember This House, which he described in a letter to his agent in 1979 as an exploration of race in America told through the assassinations of three prominent Civil Rights leaders: Medgar Evers, Malcolm X, and Martin Luther King, Jr. Onto this frame Peck grafts footage of Baldwin at roundtables and debates, familiar and jarring archival clips of violent white reaction to Civil Rights progress, such as school and bus integration, as well as contemporary shots of charged police confrontations with activists in Ferguson and elsewhere. There are no interviews with scholars and experts, no talking heads. Peck calculates correctly that Baldwin’s words alone will carry the film (he is the sole writer credited on the project), whether spoken directly or read with understated authority by the actor Samuel L. Jackson. The effect is exhilarating — Baldwin’s language is always captivating and lucid; he needs no translation or amplification. Even the wildly charismatic Jackson refrains from any attempt to compete with the words that he reads, which were written by a former child preacher in Harlem who was one of the few great writers in recent memory to be an equal or better public speaker, a distinction that the film makes thrillingly apparent.

    Yet I Am Not Your Negro inadvertently makes manifest some of the incongruities between the smooth new radical mythology of the writer and the man as he actually existed and co-existed with the cultural forces and major personalities of his era. Though it purports to tease out important connections — “I want these three lives to bang against each other,” Baldwin writes of the project — we learn very little about the relationship between him and the trio of martyrs he set out to examine in Remember This House. This is both because those leaders, while they knew and understood each other, did not really constitute a fraternity of any sort, and also — perhaps more importantly — because it can be expedient to avoid the complexity and contradictions of Baldwin’s own insecure position within the actually existing black America, to and from which he remained throughout his adulthood a permanent “transatlantic commuter.” 

    Of the three, he may have experienced the most straightforward fellowship with the Mississippi activist Medgar Evers, the youngest of the group and the first to be murdered. Malcolm X was explicit, however, that what he sought was a “real” revolution, not the “pseudo revolt” of someone like James Baldwin. And Martin Luther King, Jr., as Douglas Field shows in All Those Strangers: The Art and Lives of James Baldwin, once balked — in a conversation taped by the F.B.I. — at appearing alongside the writer on television, claiming to be “put off by the poetic exaggeration in Baldwin’s approach to race issues.” It is hard to imagine that he could have been unaware that Baldwin was being denigrated as “Martin Luther Queen” in civil-rights circles.

    Baldwin himself was understandably eager to emphasize and even embellish his connection to such extraordinary and sacrificial figures, especially King, but their realities were highly incommensurate on a variety of levels. In his memoir No Name in the Street, in 1972, there is a revealing set piece in which Baldwin writes about buying a nice dark-blue suit for a scheduled appearance with King at Carnegie Hall. Two weeks later, after the latter was brutally assassinated, it would be Baldwin’s attire for his funeral. Early in the Peck film we hear Baldwin worry over his role as a “witness” and not an “actor” in the convulsions of his time, only to resolve the apparent discrepancy by declaring that the two roles are separated by a “thin line indeed.” In his attempts to write himself over that line and into proximity with men like Evers, King, and Malcolm and by extension into the center of the civil rights struggle — to collapse that space between man, action, and work — Baldwin at once underestimated a crucial distinction (as well as his own specialness) while also betraying his insurmountable distance from all of them. Darryl Pinckney, in a review of the Library of America’s edition of Baldwin’s writings, kindled to Baldwin’s comment to a newspaper journalist that he would never be able to wear that suit again:

    A friend of Baldwin’s, a US postal worker whom he rarely saw, had seen the newspaper story and, because they were the same size, asked for the suit that to Baldwin was “drenched in the blood of all the crimes of my country.” Baldwin went up to Harlem in a hired “Cadillac limousine” in order to avoid the humiliation of watching taxis not stop for him, a black man. His life came into the “unspeakably respectable” apartment of his friend like “the roar of champagne and the odor of brimstone.” He characterizes himself as he assumes he must have appeared to his friend’s family: “an aging, lonely, sexually dubious, politically outrageous, unspeakably erratic freak.”

    His friend had also “made it” — holder of a civil-service job; builder of a house next to his mother’s on Long Island. Baldwin was incredulous that his friend had no interest in the civil rights struggle. They got into an argument about Vietnam. Baldwin says he realized then that the suit belonged to his friend and to his friend’s family. “The blood in which the fabric of that suit was stiffening was theirs,” and the distance between him and them was that they did not know this.

    The story is tortured and yet, regardless of Baldwin’s outrage at indifference or his identification with slain civil rights leaders, there is something wrongly insinuating about his depicting his scarcely worn suit as drenched and stiffening with blood, even metaphorical blood. People still remember what Jesse Jackson’s shirt looked like after King was shot.

    This slightly frivolous side of Baldwin can just be glimpsed in I Am Not Your Negro (and is almost totally absent from the new hagiography). “I was never in town to stay,” he admits in the film, and after Evers’ death we do hear Jackson read, “Months later, I was in Puerto Rico, working on a play,” as the camera reveals a sparkling beachscape. But he assumes his comparative privilege in No Name in the Street, where he notes that, when King was murdered, he was ensconced in Palm Springs, working on an unrealized screenplay for The Autobiography of Malcolm X. After the emotional and rhetorical shift to Black Power at the end of the ’60s, many of Baldwin’s contemporaries and descendants wrote him off — much the same way that intellectuals and radicals in Algeria and Paris turned their backs on Camus — considering him too enamored of his own voice and far too comfortable in the white world. No Name in the Street, like much of Baldwin’s later output, can be read as a kind of overture to these critics, a capitulation to the new rules of engagement.

    “I was in some way in those years, without realizing it, the great white hope of the great white father,” Baldwin concedes. “I was not a racist, or so I thought. Malcolm was a racist, or so they thought. In fact we were simply trapped in the same situation.” In actual fact their situations were very different and those differences are worth thinking through — not wishing away — because they help to explain why their worldviews differed, too. Baldwin was in London when Malcolm was murdered. In the epilogue of No Name in the Street, just a beat after he writes that “the Western party is over, and the white man’s sun has set. Period,” he signs off “New York, San Francisco, Hollywood, London, Istanbul, St. Paul de Vence.” Unlike Malcolm X, there were plenty of lovely and welcoming places where James Baldwin could go, Pinckney mordantly notes, “to remind himself that he felt trapped.”

    Yet he did not invent his own marginality. It is no exaggeration to say that he was in some crucial ways homeless. In 1950, with a reasoning that anticipates the desire of today’s #ADOS movement to disentangle the all-American experience of descendants of slaves from any larger pseudo-biological notion of international blackness — to say nothing of that infinitely fuzzier category “people of color” — Baldwin wrote in his essay “Encounter on the Seine” that “they face each other, the Negro and the African, over a gulf of three hundred years — an alienation too vast to be conquered in an evening’s good will, too heavy and too double-edged ever to be trapped in speech.” In Paris, he discovered what he could not recognize under the specific conditions of racial bigotry in New York City, and what he could never entirely disavow once he had experienced it: “I proved, to my astonishment, to be as American as any Texas G.I. And I found that my experience was shared by every American writer I knew in Paris.”


    That revelation comes in Nobody Knows My Name, his phenomenal second essay collection: “Like me, they had been divorced from their origins, and it turned out to make very little difference that the origins of white Americans were European and mine were African — they were no more at home in Europe than I was.” This is the Baldwin that the new revival has tended to gloss over or outright ignore. It is what distinguishes Baldwin from so many of his contemporaries and ours. This is the mature Baldwin, the wise Baldwin, the Baldwin who seethes at injustice but is not duped by the excesses of radicalism. It is the writer whose message — while not quite tailor-made to sell sweatshirts — is ultimately persuasive and always necessary. There can be an uncanny Benjamin Button-sense to reading Baldwin in chronological order: it can feel as if the young man and not the elder is the all-accomplished, all-knowing sage. Here is that young-old man in his astonishing debut collection, Notes of a Native Son, recalling his birthday in 1943, which also happened to be the day that his father died and his sister was born. Riots in Harlem had erupted after a white police officer and a black soldier clashed in a hotel lobby in a dispute over a woman:

    Negro girls, white policemen, in or out of uniform, and Negro males — in or out of uniform — were part of the furniture of the lobby of the Hotel Braddock and this was certainly not the first time such an incident had occurred. It was destined, however, to receive an unprecedented publicity, for the fight between the policeman and the soldier ended with the shooting of the soldier. Rumor, flowing immediately to the streets outside, stated that the soldier had been shot in the back, an instantaneous and revealing invention, and that the soldier had died protecting a Negro woman. The facts were somewhat different — for example, the soldier had not been shot in the back, and was not dead, and the girl seems to have been as dubious a symbol of womanhood as her white counterpart in Georgia usually is, but no one was interested in the facts. They preferred the invention because the invention expressed and corroborated their hates and fears so perfectly.

    Later in the essay, in words he would live by to the end, he writes, “In order really to hate white people, one has to blot out so much of the mind — and the heart — that this hatred becomes an exhausting and self-destructive pose.” And he continues, magnificently: “That bleakly memorable morning I hated the unbelievable streets and the Negroes and whites who had, equally, made them that way. But I knew that it was folly, as my father would have said, this bitterness was folly. It was necessary to hold on to the things that mattered. The dead man mattered, the new life mattered; blackness and whiteness did not matter; to believe that they did was to acquiesce in one’s own destruction.”

    I would like to believe that Baldwin never grew out of such views, that he remained an outsider — a peripheralist, as my own father might say — his entire life; and that this is one of the reasons he lived out his final seventeen years in Provence and could never quite bring himself back to America. He paid huge costs to remain semi-aloof, one of which might be the risk of permanent misunderstanding, even in his posthumous homecoming — but I am convinced that this ability to stand apart, this refusal to be completely subsumed and taken over by any group or collectivity, is what ultimately spared him from the all-consuming identity myopia that plagued his era and now plagues ours. He was not a Black Muslim or a Black Panther, he observed, “because I did not believe all white people were devils and I did not want young black people to believe that.” The simple decency of that sentence still holds the power to shock. It is the kind of correct-to-the-point-of-seeming-naïve insight that puts me in mind of Camus, the belief of a naturally humane and moral man, which we are desperately in need of in this age of opportunism and distrust.

    None of this is to imply that Baldwin was ever less than lucid about the nature and tenacity of American racism. Baldwin in his nobility was nobody’s fool. One of the most powerful sequences in I Am Not Your Negro is instructive about what makes him, today, such an irresistible figure. Here at last we see him in crackling black-and-white in the company of two of the three martyrs. Here we encounter the “conjunction of man, action, and work” of which Sontag spoke. On a panel moderated by the sociologist E. Franklin Frazier — there was so much aggregated brilliance and iconography assembled there! — a weary-looking King and an implacable Malcolm appear as dignified props for an immensely thoughtful Baldwin, who speaks stirringly of the “vast, heedless, unthinking, cruel white majority.” Peck cuts to recent black-and-white images of contemporary American police on a war footing, storming through the streets of Ferguson. “I’m terrified at the moral apathy,” Baldwin says, “these people have deluded themselves for so long that they really do think I’m not human. It means that they have themselves become moral monsters.” Now the screen floods with color as nostalgic mid-century shots of an all-white beauty pageant, and young white women frolicking in spotless ensembles against a radiant blue sky, wash over the viewer. The dissonance of the juxtaposition is excruciating, undeniable.

    How are we ever to find our way out of this conundrum? Baldwin hit upon some of the answers. Late in life he seemed to return to a complex understanding of struggle that contrasts with the victim-oppressor binary to which the discourse that overtook him adheres. “It seemed to me that if I took the role of a victim then I was simply reassuring the defenders of the status quo,” he told The Paris Review shortly before he died. “As long as I was a victim they could pity me and add a few more pennies to my home-relief check. Nothing would change in that way. . . . It was beneath me to blame anybody for what happened to me.” And in “Letter from a Region in My Mind,” his essay in The New Yorker in 1962 that became The Fire Next Time, he was even clearer. “For the sake of one’s children, in order to minimize the bill that they must pay, one must be careful not to take refuge in any delusion,” he wrote. “And the value placed on the color of the skin is always and everywhere and forever a delusion,” he continued. “I know that what I am asking is impossible. But in our time, as in every time, the impossible is the least that one can demand.”

    A dozen years later the Israeli-Palestinian writer Emile Habibi coined the wonderful term “pessoptimist” for the title of a satirical novel. I cannot think of a better way to describe the mottled sensibility and variegated conscience that Baldwin brought to black American life and letters. He was repulsed by the stark, cliché-ridden, and fatalistic “Afro-pessimism” that we have become conditioned to espouse, and to tweet; nor was his understanding of race anything like the Panglossian self-hating optimism for which contemporaneous critics such as Eldridge Cleaver excoriated him. To reduce him to either pole in Habibi’s paradox is as irresponsible as it is boring. A great deal hangs on the proper interpretation of James Baldwin’s work and legacy. Even more than Malcolm X or Martin Luther King, Jr., and certainly more than Ralph Ellison, his principal African American rival in talent, James Baldwin has become one of the primary arenas in which the most urgent questions — the meanings of the past, the possibilities of the future — of black American life are being contested today. These are not idle feuds. The stakes of getting his reputation right extend well beyond literary disputations.

    Last May, the excruciating videotaped killing of George Floyd, a forty-six-year-old black man in Minneapolis on whose neck a white police officer kneeled for nearly nine minutes, was yet another brutal and galvanizing cause for pessimism, as Baldwin would rightly have told us. It is at once astonishing and unbearable that our society (and not just white society, as George Zimmerman and other killers “of color” grimly attest) can still produce so many instances of appalling cruelty and injustice, instances which disproportionately target blacks. And yet even as we condemn such evil, our indignation cannot support a total or unending negativity. Baldwin would have admonished us about this, too. It would be just as disastrous a misjudgment of the schizophrenic American reality to argue that nothing (or next to nothing) has changed, that “lynchings” continue to define the black experience some two decades into the twenty-first century, as it would be to dismiss the very specific and incontrovertible familiarity and dread with which so many black Americans viewed that stomach-turning footage from Minneapolis. What is so challenging, but all the more essential for its difficulty (for its absurdity, you could say), is to keep in mind two competing ideas simultaneously. The fight for justice must not end merely in blind revenge or catharsis. The struggle demands not just fury and resentment, but also hope and wisdom.

    In maintaining such ambiguity, in defending such complexity, we are left with a single abiding truth: evil is always with us because it is one of the permanent conditions of humankind. Black people — like all other peoples forced to recognize up close the mixed-up character of life, its inextricable tangle of lights and darks — must become connoisseurs of pessimism and optimism to equal degrees. In his moral and intellectual capaciousness, Baldwin models this pessoptimistic mentality on and off the page. In this way his work (as opposed to the compressed and glib image that we are increasingly sold) is mimetic of American reality itself — plenty of which may turn out to be irreconcilable in the end, but none of which is ever enough to justify a single response in every season. Whatever our way out of our racial pain, it will be complicated and fitful and without fully satisfying once-and-for-all resolutions. It is not necessary, or even desirable, to admire everything that James Baldwin said or did, any more than we must admire everything about the context that created him. But he exists to discomfit us, and to call us beyond tidy conclusions and easy emotions. He is forever inconvenient, which is why he is exactly what we need.

    The Indian Tragedy

    Earlier this year, the Republic of India turned seventy. On January 26, 1950, the country adopted a new Constitution, which severed all ties with the British Empire, mandated multi-party democracy based on universal adult franchise, abolished caste and gender distinctions, awarded equal rights of citizenship to religious minorities, and in myriad other ways broke with the feudal, hierarchical, and sectarian past. The chairman of the Drafting Committee was the great scholar B. R. Ambedkar, himself a “Dalit,” born into the lowest and most oppressed stratum of Indian society, and representative in his person and his beliefs of the sweeping social and political transformations that the document promised to bring about.

    The drafting of the Constitution took three whole years. Between December 1946 and December 1949, its provisions were discussed threadbare in an Assembly whose members included the country’s most influential politicians (spanning the ideological spectrum, from atheistic Communists to orthodox Hindus and all shades in between) as well as leading economists, lawyers, and women’s rights activists. When these deliberations concluded, and it fell to Ambedkar to introduce the final document — with 395 Articles and 12 Schedules, the longest of its kind in the history of the democratic world — to the Assembly, he issued some warnings, of which at least one was strikingly prophetic. He invoked John Stuart Mill in asking Indians not “to lay their liberties at the feet of even a great man, or to trust him with powers which enable him to subvert their institutions.” There was “nothing wrong,” said Ambedkar, “in being grateful to great men who have rendered life-long services to the country. But there are limits to gratefulness.” His worry was that “for India, bhakti, or what may be called the path of devotion or hero-worship, plays a part in its politics unequalled in magnitude by the part it plays in the politics of any other country. Bhakti, in religion, may be a road to the salvation of the soul. But in politics, bhakti or hero-worship, is a sure road to degradation and to eventual dictatorship.”

    When he spoke those words, Ambedkar may have had the possible deification of the recently martyred Mahatma Gandhi in mind. But his remarks seem uncannily prescient about the actual deification of a later and lesser Gandhi. In the early 1970s, politicians of the ruling Congress Party began speaking of how “India is Indira and Indira is India,” a process that culminated, as Ambedkar had foreseen, in political degradation and eventual dictatorship. In June 1975, Prime Minister Indira Gandhi suspended civil liberties, jailed all opposition politicians, and imposed a strict regime of press censorship. This was a time of fear and terror, which lasted almost two years, and ended when Mrs. Gandhi — provoked in part by criticism from Western liberals and in part by her own conscience — ended the Emergency and called for fresh elections, which she and her party lost.

    If one is reminded of Ambedkar’s warning when reflecting on the career of Indira Gandhi, it brings to mind even more starkly the career of India’s current Prime Minister, Narendra Modi. In terms of their upbringing and ideological formation, no two Indian politicians could be more different from each other than Modi and Mrs. Gandhi. One witnessed enormous hardship while growing up; the other was raised in an atmosphere of social and economic privilege. One had his worldview shaped by the many years he spent in the Hindu supremacist organization, the Rashtriya Swayamsevak Sangh (RSS); the other was deeply influenced by her father, Jawaharlal Nehru, India’s first Prime Minister, who detested the RSS. One has no family; the other had children and grandchildren. One had to work his way up the ladder of Indian politics, step by step; the other had a lateral entry into a high position purely on account of her birth.

    And yet there are significant commonalities. These very different personal biographies notwithstanding, it has long seemed to me that there are striking similarities in their political styles. Back in 2013, I wrote in The Hindu that “neither Mr. Modi’s admirers nor his critics may like this, but the truth is that of all Indian politicians past and present, the person the Gujarat Chief Minister most resembles is Indira Gandhi of the period 1971-77. Like Mrs. Gandhi once did, Mr. Modi seeks to make his party, his government, his administration and his country an extension of his personality.” At the time the article was published, the Chief Minister of the western state of Gujarat was making his national ambitions explicit. Fifteen months later, Narendra Modi became Prime Minister of India, his Bharatiya Janata Party (BJP) winning, under his leadership, the first full majority in Parliament of any party since 1984. Modi’s time in office has seemed to confirm the parallels between him and Indira Gandhi. As she had once done, he cut the other leaders in his party down to size; sought to tame the press; used the civil services, the diplomatic corps and the investigative agencies as political instruments; and corralled the resources of the state to build a personality cult around himself.

    In January 2020, when the Republic of India turned seventy, Narendra Modi was facing his first serious challenge since he became Prime Minister six years earlier. Modi’s ideological formation in the RSS had convinced him that India’s destiny was to be a “Hindu Rashtra” — a theocratic state run by Hindus and in the interests of Hindus alone. In his first term as Prime Minister, Modi had kept these beliefs largely under wraps. But when he was re-elected with a large majority in May 2019, the majoritarian agenda came strongly to the fore. On August 5, 2019, the government of India abrogated Article 370 of the Constitution, which accorded cultural and political autonomy to the state of Jammu and Kashmir. This was done unilaterally, without consulting the people of the state (as the law required). It was a wanton intervention in one of the most dangerous areas of contention in the world. The state of Jammu and Kashmir was abruptly converted into a mere “Union Territory.” It was henceforth to be ruled directly by New Delhi, preparatory to what the rulers of India called a “full integration with the Nation,” which the people of the Kashmir Valley feared would result in an invasion of their land by grasping outsiders and a transformation of this Muslim-majority state into a Hindu colony.

    Worse was to follow. In early December, the Parliament passed the Citizenship Amendment Act (CAA). This sought to give Indian citizenship to people fleeing religious persecution in three countries: Bangladesh, Pakistan, and Afghanistan. The Act was illogical — it ignored the largest group of stateless refugees in India, the Tamils from Sri Lanka; and it was also spiteful, for it had carefully specified that Muslims from any country, however persecuted they might be, would not get refuge in India. Moreover, the Modi government announced that the CAA was to be accompanied by a National Register of Citizens (NRC), which would demand, from everyone living in India, documentary proof of Indian parentage, length of residence in India, and so on. Those who were unable to “prove” to the government’s satisfaction that they had these papers would be declared illegal immigrants. But if they had the good luck to be Hindu, Buddhist, Jain, Sikh, Parsi, or Christian — that is, anything other than Muslim — they could apply to become Indians under the Citizenship Amendment Act. The CAA was a clear violation of Articles 14 and 15 of the Constitution, which promised equality before the law and prohibited discrimination on the grounds of religion. Following on the downgrading of Jammu and Kashmir from full statehood to Union Territory status, the passing of the CAA represented a further — and fuller — ethnonationalist step towards the construction of a Hindu State. Were it to be implemented along with the NRC, as top government ministers had repeatedly threatened, Muslims would become, formally as well as legally, second-class citizens.

    The abrogation of Jammu and Kashmir’s statehood was met with muted protest by intellectuals and human rights activists, and little else. Prime Minister Modi and his hardline Home Minister, Amit Shah, clearly hoped that these new changes in the citizenship laws would likewise go uncontested. They were wrong. There were widespread protests across India, led at first by students, but then with a wide cross-section of the citizenry joining in. Elderly Muslim women staged a peaceful sit-in for weeks in South-East Delhi, an act that inspired many similar sit-ins in other cities and towns. The state sought to suppress the protests through colonial-era laws prohibiting gatherings of more than five people, but the non-violent and collective civil disobedience continued. Although the Acts targeted Muslims specifically, many non-Muslims participated in the protests, outraged at the wholesale stigmatization of their fellow citizens merely on account of their faith. The countrywide upsurge within India was accompanied by widespread condemnation of the Modi Government in the international press. This intensified when President Donald Trump visited India in late February, his visit coinciding with religious rioting in Delhi, the country’s capital, in which radical Hindus were the main perpetrators and Muslims the main sufferers.

    At this time, it seemed that the degradation of Indian democracy had been arrested. The pushback against the cult of personality and the ideology of Hindu supremacy had begun and seemed as if it might accelerate. Then came the pandemic, and India, and the world, gasped in wonder and horror. I shall return to the consequences of COVID-19 for my country at the end of my essay. But first I wish to outline the historic roots of the struggle that has been unfolding within India, between the capacious ideals with which the Indian republic was founded and the majoritarian tendency that seeks to replace them. We must begin with the intellectual and moral origins of the Constitutional idea of India, which Narendra Modi and his party wish to consign to the ash heap of history.

    Like the railways, electricity, and the theory of evolution, nationalism was invented in modern Europe. The European model of nationalism sought to unite residents of a particular geographical territory on the basis of a single language, a shared religion, and a common enemy. To be British, you had to speak English, and minority tongues such as Welsh and Gaelic were either suppressed or disregarded. To be properly British you had to be Protestant, which is why the king was also the head of the Church, and Catholics were distinctly second-class citizens. Finally, to be authentically and loyally British, you had to detest France.

    Now, if we go across the Channel and look at the history of the consolidation of the French nation in the eighteenth and nineteenth centuries, we see the same process at work, albeit in reverse. Citizens had to speak the same language, in this case French, so dialects spoken in regions such as Normandy and Brittany were sledgehammered into a single standardized tongue. The test of nationhood was allegiance to one language, French, and also to one religion, Catholicism. So Protestants were persecuted. Likewise, French nationalism was consolidated by identifying a major enemy, although who this enemy was varied from time to time. In some decades the principal adversary was Britain; in other decades, Germany. In either case, the hatred of another nation was vital to affirming faith in one’s own nation.

    This model — a single language, a shared religion, a common enemy — is the model by which nations were created throughout Europe. And it so happens that the Islamic Republic of Pakistan is in this respect a perfect European nation. Pakistan’s founder, Mohammad Ali Jinnah, insisted that Muslims could not live with Hindus, so they needed their own homeland. After his nation was created, Jinnah visited its eastern wing and told its Bengali residents they must learn to speak Urdu, which to him was the language of Pakistan. And, of course, hatred of India has been intrinsic to the idea of Pakistan since its inception. 

    Indian nationalism, however, radically departed from the European template. The greatness of the leaders of our freedom struggle — and Mahatma Gandhi in particular — was that they refused to identify nationalism with a single religion. They further refused to identify nationalism with a particular language, and — even more remarkably — they refused to hate their rulers, the British. Gandhi lived and died for Hindu-Muslim harmony. He liked to emphasize the fact that his party, the Indian National Congress, had presidents who were Hindu, Muslim, Christian, and Parsi. Nor was Gandhi’s nationalism defined by language. As early as the 1920s, Gandhi pledged that when India became independent, every major linguistic group would have its own province. But perhaps the most radical aspect of the Indian model of nationalism was that hatred of the British was not intrinsic to it. Indian patriots detested British imperialism, they wanted the Raj out, they wanted to reclaim this country for its residents — but they did so non-violently, and while befriending individual Britons. (Gandhi’s closest friend was the English priest C.F. Andrews.) Moreover, they wished to get the British to “Quit India” while retaining the best of British institutions. An impartial judiciary, parliamentary democracy, the English language, and not least the game of cricket: these are all aspects of British culture that Indians sought to keep after the British had themselves left.

    British, French, and Pakistani nationalism were based on paranoia, on the belief that all citizens must speak the same language, adhere to the same faith, and hate the same enemy. Indian nationalism, by contrast, was based on a common set of values. During the non-cooperation movement of 1920-1921, people all across India came out into the streets, gave up jobs and titles, left their colleges, and courted arrest. For the first time, the people of India had the sense, the expectation, the confidence that they could create their own nation. In 1921, when non-cooperation was at its height, Gandhi defined Swaraj (Freedom) as a bed with four sturdy bed-posts. The four posts that held up Swaraj, he said, were non-violence, Hindu-Muslim harmony, the abolition of untouchability, and economic self-reliance.

    When the Republic of India was created in 1950, its citizens sought to be united on a set of ideals: democracy, religious and linguistic pluralism, caste and gender equality, and the removal of poverty and discrimination. The basis of citizenship was adherence to these values, not to a single language, a shared faith, or a common enemy. I would describe this founding model of Indian nationalism as constitutional patriotism, because it is enshrined in our Constitution. Its fundamental features are outlined below.

    The first feature of constitutional patriotism is the acknowledgement and appreciation of our inherited and shared diversity. In any large gathering in a major city — say, at a music concert or a cricket match — the people who compose the crowd carry different names, wear different clothes, eat different kinds of food, worship different gods (or no god at all), speak different languages, and fall in love with different kinds of people. They are a microcosm not just of what India is, but of what its founders wished it to be. For the founders of the Republic had the ability (and the desire) to endorse and emphasize our diversity. Multiethnicity was not the problem; it was the solution. As the poet Rabindranath Tagore once said about my country, “no one knows at whose call so many streams of men flowed in restless tides from places unknown and were lost in one sea: here Aryan and non-Aryan, Dravidian, Chinese, the bands of Saka and the Hunas and Pathan and Mogul, have become combined in one body.” An appreciation of this rich inner diversity means that we understand that no type of Indian is superior or special because they belong to a particular religious tradition or because they speak a certain language. Patriotism is defined by allegiance to the values of the Constitution, not by birth, blood, language, or faith.

    The stress on cultural diversity and religious pluralism was all the more remarkable because it came in the wake of the savage rioting of Partition. Gandhi and the Congress had hoped for a united India, but in the event, when the British left in August 1947, they divided the country into two sovereign nations, India and Pakistan. The division was accompanied by ferocious clashes between Hindus and Muslims, in which an estimated one million people died and more than ten million people were made into refugees. But Pakistan was explicitly created as a homeland for Muslims, whereas India resolutely refused to define itself in majoritarian terms. As the country’s first Prime Minister, Jawaharlal Nehru, wrote to the Chief Ministers of States in 1947, “We have a Muslim minority who are so large in numbers that they cannot, even if they want to, go anywhere else. They have got to live in India. … Whatever the provocation from Pakistan and whatever the indignities and horrors inflicted on non-Muslims there, we have got to deal with this minority in a civilized manner. We must give them security and the rights of citizens in a democratic State.”

    The second feature of constitutional patriotism is that it operates at many levels. Like charity, it begins at home. It is not just worshipping the national flag that makes you a patriot. It is how you deal with your neighbors and your neighborhood, how you relate to your city, how you relate to your state. In America, which is professedly one of the most patriotic countries in the world, every state has its own flag. And some states of India also have their own flag, albeit informally. Every November 1, when the anniversary of the formation of my home state, Karnataka, is celebrated, a red-and-yellow flag is unfurled in many parts of the state. It is not Anglicized upper-class elites such as myself who display the state flag of Karnataka, but shopkeepers, farmers, and autorickshaw drivers.

    Patriotism can thus operate at more than one level: the locality, the city, the province, the nation. The Bangalore Literary Festival (which is not sponsored by large corporations but is crowd-funded) is an example of civic patriotism. The red-and-yellow flag of Karnataka is an example of provincial patriotism. Cheering for the Indian cricket team is an example of national patriotism. A broad-minded (as distinct from paranoid) patriot recognizes that these layered affiliations can be harmonious and complementary, and can reinforce one another.

    The model of patriotism advocated by Gandhi and Tagore was not centralized but disaggregated. And it helped make India a diverse and united nation. Look at what is happening in Spain today. Why are so many Catalans keen on a nation of their own? Because they believe that they have been denied the space and the freedom to honorably have their own language and culture within a united Spain. The centralized Spanish state came down so hard that the Catalans had a referendum in which many of them insisted upon nothing less than independence. Had the Spanish state been founded and run on Indian principles, this might not have happened. Had Pakistan not imposed Urdu on Bengalis, they might not have split into two nations a mere quarter of a century after independence. Had Sri Lanka not imposed Sinhala on the Tamils, that country might not have experienced thirty years of ethnic strife. India has escaped civil war and secession because its founders wisely did not impose a single religion or single language on its citizens.

    One can be a patriot of Bangalore, Karnataka, and India — all at the same time. Yet the notion of a world citizen is false. The British-born Indian J.B.S. Haldane put it this way: “One of the chief duties of a citizen is to be a nuisance to the government of his state. As there is no world state, I cannot do this…. On the other hand I can be, and am, a nuisance to the government of India, which has the merit of permitting a good deal of criticism, though it reacts to it rather slowly. I also happen to be proud of being a citizen of India, which is a lot more diverse than Europe, let alone the U.S.A., USSR or China, and thus a better model for a possible world organization. It may, of course, break up, but it is a wonderful experiment. So I want to be labelled as a citizen of India.” A citizen of India can vote in local, provincial and national elections. In between elections he or she can affirm his or her citizenship (at all these levels) through speech and (non-violent) action. But global citizenship is a mirage, or a cop-out. It is only those who cannot or will not identify with locality, province, or nation who accord themselves the fanciful and fraudulent title of “citizen of the world.”

    The third feature of constitutional patriotism, and this again comes from people such as Gandhi and Tagore, is the recognition that no state, no nation, no religion, and no culture is perfect or flawless. India is not necessarily superior to America, nor is America necessarily superior to India. Hinduism is not necessarily superior to Christianity, nor is Islam necessarily superior to Judaism. The fourth feature is this: we must have the ability to feel shame at the failures of our state and society, and we must have the desire and the will to correct them. The most egregious aspects of Indian culture and society are discrimination against women and the erstwhile “Untouchable” castes. A true patriot must feel shame about them. That is why our Constitution abolished caste and gender distinctions. Yet these distinctions continue to pervade everyday life. Unless we continue to feel shame, and act accordingly, they will persist.

    The fifth feature of constitutional patriotism is the ability to be rooted in one’s culture and one’s country while being willing to learn from other cultures and other countries. This, too, must operate at all levels. Love Bangalore but think what you can learn from Chennai or Hyderabad. Love Karnataka, but think what you can learn from Kerala or Himachal Pradesh. Love India, but think of what you can learn from Sweden or Canada. Here is Tagore, in 1908: “If India had been deprived of touch with the West, she would have lacked an element essential for her attainment of perfection. Europe now has her lamp ablaze. We must light our torches at its wick and make a fresh start on the highway of time. That our forefathers, three thousand years ago, had finished extracting all that was of value from the universe, is not a worthy thought. We are not so unfortunate, nor the universe so poor.” And here is Gandhi, thirty years later: “In this age, when distances have been obliterated, no nation can afford to imitate the frog in the well. Sometimes it is refreshing to see ourselves as others see us.”

    As a patriotic Indian, I believe that we must find glory in the illumination of any lamp lit anywhere in the world.

    The crisis of contemporary India may be described succinctly: the model of constitutional patriotism is now in tatters. It is increasingly being replaced by a new model of nationalism, which prefers and promotes a single religion, Hinduism, and proclaims that a true Indian is a Hindu. This new model also elevates a single language — Hindi. It insists that Hindi is the national language, and whatever the language of your home, your street, your state, you must speak Hindi also. Thirdly, this model luridly presents a common external enemy — Pakistan.

    Whether they acknowledge it or not, those promoting this new model of Indian nationalism are borrowing (and more or less wholesale) from nineteenth-century Europe, where nationalism, for all its cultural riches, culminated in disaster. And to the template of a single religion, a single language, and a common enemy they have added an innovation of their own — the branding of all critics of their party and their leader as “anti-national.” This scapegoating comes straight from the holy book of the RSS, M.S. Golwalkar’s Bunch of Thoughts, which appeared in 1966. In his book Golwalkar identified three “internal threats” to the nation — Muslims, Christians, and Communists. Now, I am not a Muslim, a Christian, or a Communist, but I have nonetheless become an enemy of the nation. This is so because any critic, any dissenter, anyone who upholds the old ideal of constitutional patriotism, is considered by those in power and their cheerleaders to be an enemy of the nation.

    In the wonderful Hindi film Newton, one character says, “Ye desh danda aur jhanda se chalta hai,” the stick and the flag define this country. This line beautifully captures the essence of a paranoid and punitive form of nationalism, based on the blind worship of the sole and solitary flag, and on the use of the stick to harass those who do not follow or obey you. This new nationalism in India is harsh, hostile, and unforgiving. The name by which it should be known is certainly not patriotism, and not even nationalism. It should be called jingoism.

    The dictionary defines a patriot as “a person who loves his or her country, especially one who is ready to support its freedoms and rights and to defend it against enemies or detractors.” Note the order: love of country first, support of freedom and rights second, and defense against enemies last. And what is the dictionary definition of jingoist? One “who brags of his country’s preparedness for fight, and generally advocates or favors a bellicose policy in dealing with foreign powers; a blustering or blatant ‘patriot’; a Chauvinist.” The order is reversed: first, boasting of the greatness of one’s country; then advocating attacking other countries. No talk of rights or freedom, or of love either. Patriotism and jingoism are antithetical varieties of nationalism. Patriotism is suffused with love and understanding. Jingoism is motivated by hatred and revenge.

    I have already outlined the founding features of constitutional patriotism. What are the founding features of jingoism? First, the belief that one’s religion, culture, and nation (and leader) are perfect and infallible. Second, the demonization of critics as anti-nationals and Fifth Columnists. Rather than engage critics in debate, hyper-nationalists harass and intimidate them, through the force of the state’s investigating agencies and through vigilante armies if required.

    In recent years, Indian nationalism has been captured by its perverted jingoist version. But the country remains some sort of democracy, where the jingoist version is popular among a large section of the population and has been brought to power through the ballot box. How did this come to pass? Why is it that the party of the Hindu Right has so many supporters in India today?

    I believe there are four major reasons why jingoism is ascendant in India, while constitutional patriotism is in retreat. The first is the hostility of the Indian left to our national traditions. The Communist parties are still an important political force in India. They have been in power in several states. Their supporters have historically dominated some of our best universities, and been prominent in theater, art, literature, and film. But the Indian left, sadly and tragically, is an anti-patriotic left. It has always loved another country more than its own.

    That country used to be the Soviet Union, which is why our Communists opposed the Quit India Movement, and launched an armed insurrection on Stalin’s orders in 1948, immediately after Gandhi was murdered. Later the country that the Communists loved more than India was China; and so, in 1962, they refused to take their homeland’s side in the border war of that year. Still later, when the Communists became disillusioned with both the Soviet Union and China, they pinned their faith on Vietnam. When Vietnam failed them, it became Cuba; when Cuba failed them, it became Albania. When I was a student in Delhi University, there was a Marxist professor who taught that Enver Hoxha was a greater thinker than Mahatma Gandhi. But then Albania failed, too. So now the foreign country that our comrades love more than India is — what else? — Venezuela. The late (and by me unlamented) Hugo Chavez was venerated on the Indian left. If you think Modi is authoritarian, then Chavez was Modi on steroids — the ur-Modi. The megalomaniac Chavez destroyed the Venezuelan economy and Venezuelan democracy, and yet he continued to be worshipped by Indian leftists young and old.

    The degradation of patriotism in India has also been abetted by the corruption of the Congress Party. The great party which led India’s freedom movement has in recent decades been converted into a single family. I have spoken of how the Left chooses its icons, but in some ways the Congress is even worse. When it was in power, it named everything in sight after Jawaharlal Nehru or his daughter or his grandson. Why couldn’t the new Hyderabad international airport have been named after the Telugu composer Thyagaraja or the Andhra patriot T. Prakasam? Why Rajiv Gandhi? Likewise, when the new sea link in Mumbai had to be given a name, why couldn’t the Congress consider Gokhale, Tilak, Chavan, or some other great Maharashtrian Congressman? Why Rajiv Gandhi again?

    Many, indeed most, of the icons of the national movement belonged to the Congress party. But the Congress has abandoned and thrown them away because it is only Nehru, Indira, Rajiv, Sonia, and now Rahul that matter to them. (The only Congressman outside the family they are willing to acknowledge is Mahatma Gandhi, because even they can’t obliterate him from their party’s history.) If someone like Hugo Chavez is adored so much by Indian leftists, then obviously this will help the jingoists — and likewise, if the Congress government named all major schemes and sites after a single family, ignoring even the great Congress patriots of the past, then that would give a handle to the jingoists, too. The corrupt and sycophantic culture of the Congress Party is a disgrace. When I made a sarcastic remark on Twitter about Rahul Gandhi becoming Congress president, someone put up a chart listing the presidents of the BJP since 1998 — Bangaru Laxman, Jana Krishnamurthi, L.K. Advani, Rajnath Singh, and so on, the last name on the list being Amit Shah, followed by “party worker,” whereas the presidents of the Congress in the same period were “Sonia Gandhi, Sonia Gandhi, Sonia Gandhi…Rahul Gandhi….”  

    A third reason for India’s jingoist fate is, of course, that jingoism is a global phenomenon, manifest in the rise of Trump, Brexit, Le Pen, Erdogan, Putin, Bolsonaro, Orban, and the rest, all of whom pursue a xenophobic, paranoid, often hateful form of nationalism. The rise of such narrow-minded nationalism elsewhere encourages the rise of jingoism in India to match or rival it, and friendships between the authoritarians are naturally formed. And finally we must note the rise of Islamic fundamentalism in our own backyard. Over the decades, the state and society of Pakistan have become dangerously and outrageously Islamist. Once they persecuted Hindus and Christians; now they persecute Ahmadiyyas and Shias, too. And Bangladesh is also witnessing a rising tide of violence against religious minorities. Since religious fundamentalisms are rivalrous and competitive, every act of violence against a Hindu in Bangladesh motivates and emboldens those who want to persecute Muslims in India.

    The Bharatiya Janata Party, Modi’s party, and its mother organization, the RSS, claim to be authentically Indian, and damn the rest of us as foreigners. Intellectuals such as myself are dismissed as bastard children of Macaulay, Marx, and Mill. As an historian, however, I would say that it is the ideologues of the RSS who are the true foreigners. Their model of nationalism — one religion, one language, one enemy — is foreign to the Indian nationalist tradition, to the Gandhian model of nationalism which was an innovative indigenous response to Indian conditions, designed to take account of cultural diversity and to tackle caste and gender inequality.

    If the RSS model of nationalism is inspired by Europe, their model of statecraft is Middle Eastern in origin. From about the eleventh to the sixteenth century, there were states where monarchs were Muslims and the majority of the population was Muslim, but a substantial minority was non-Muslim, composed mainly of Jews and Christians. In these medieval Islamic states, there were three categories of citizens. The first-class citizens were Muslims, who prayed five times a day and went to mosque every Friday, and who believed that the Quran was the word of God. The second-class citizens were Jews and Christians, whose prophets were admired by Muslims as preceding Mohammed, the last and the greatest prophet. Third-class citizens were those who were neither Jews nor Christians nor Muslims. These were the unbelievers, the Kafirs.

    In medieval Muslim states, Jews and Christians, the ‘People of the Book’, were defined as ‘Dhimmi’, which in Arabic means ‘protected person’. As a protected person, they had certain rights. They could go to the synagogue or church; they could own a shop; they could raise a family. But other rights were denied them. They could not enroll in the military, serve in the government, be a minister or prime minister. Nor, unlike Muslims, could they convert other citizens to their faith. Such was the second-class status of Jews and Christians in medieval Islam. This model was applied in Medina and Andalusia, and in Ottoman Turkey. While Kafirs (including Hindus) had to be suppressed and subdued, Jews and Christians could practice their profession and raise their family, so long as they did not ask for the same rights as Muslims. 

    This is precisely how the Hindu Right wants to run politics in the Republic of India today. Muslims in modern India must now be like the Jews and Christians of the medieval Middle East. If Muslims accept the theological, political, and social superiority of Hindus, they shall not be persecuted or killed. But if they demand equal rights they might be.

    The new jingoism in India is a curious mixture of outdated ideas of nationalism and profoundly anti-democratic ideas of citizenship. And yet it finds wide acceptance. But its popularity does not mean that we should surrender to it, or that it is legitimate, or that it is genuinely Indian. For the Republic of India is an idea as well as a physical and demographic entity. Those of us who are constitutional patriots must continue to stand up for the values on which our nation was nurtured, built, and sustained. If the BJP and the RSS are allowed to continue unchecked and unchallenged, they will destroy India, culturally as well as economically.

    The political and ideological battle in India today is between patriotism and jingoism. The battle is currently asymmetrical, because the jingoists are in power, and because they have a party articulating and imposing their views. The constitutional patriotism of Gandhi, Tagore, and Ambedkar has no such party active today. The Communists followed Lenin and Stalin rather than Gandhi and Tagore, and the Congress has turned its back on its own founders. But while Indian patriots may not currently have a credible party to represent them, they are — as the protests in December 2019 and January 2020 showed — willing to carry on the good fight for constitutional values even in its absence. Those protests admirably demonstrated that citizenship is an everyday affair. It is not just about casting your vote once every five years. It is about affirming the values of pluralism, democracy, decency, and non-violence every day of our lives.

    It was ordinary citizens, not opposition parties, who presented the Modi government with the first major challenge since it came to power in 2014. The challenge was political, it was moral, it was constitutional. But then came the pandemic, and the balance shifted once more, back in favor of the ruler and the regime.

    In the beginning of this essay I spoke of how Narendra Modi’s was the second great personality cult in the history of the Indian republic. The first, that of Indira Gandhi, had led to the imposition of a draconian Emergency. When Modi became Prime Minister, I myself had no illusions about his centralizing instincts, yet the historian in me was alert to how the India of 1975 differed from the India of 2014. When the Emergency was imposed by Indira Gandhi, her Congress Party ruled the Central Government in New Delhi, and also enjoyed power — on its own or in coalition — in all major states of the Union except Tamil Nadu. On the other hand, when Narendra Modi became Prime Minister, many states of the Union were outside the control of his Bharatiya Janata Party.

    My hope therefore was that our federal system would serve as a bulwark against full-blown authoritarianism. In Narendra Modi’s first term as Prime Minister, the BJP won elections in some major states while losing elections in other major states. Even after Modi and the BJP emphatically won re-election at the national level in 2019, they could not so easily win power in the state Assembly elections that followed. The anti-CAA protests further strengthened one’s faith in the democratizing possibilities of Indian federalism. Large sections of the citizenry rose up in opposition to a discriminatory act that seemed grossly violative of the Constitution. The Chief Ministers of several large states were also opposed to the new legislation. This seemed like further confirmation that the present was not the past. Indira Gandhi could do what she did only because her party controlled both the Center as well as all the states in India (Tamil Nadu’s DMK Government having been dismissed a few months after the Emergency was promulgated). But this was not the case with Modi and his BJP.

    The COVID-19 pandemic has changed this calculus. It has given Narendra Modi and his government the opportunity to weaken the federal structure and radically strengthen the powers of the Centre vis-à-vis the States. They have used a variety of instruments to further this aim. They have invoked a “National Disaster Management Act” to suspend the rights of States to decide on the movement of peoples and goods, the opening and closing of schools, colleges, factories, public transport, and so on, and to centralize all these powers in the Central Government, effectively in the person of the Prime Minister. They have further postponed the disbursal of funds already due to the States as their share of national tax collections — substantial revenues, amounting to more than Rs 30,000 crore (about $4 billion), which, if released, could greatly alleviate popular distress. They have created a new fund at the Centre, the so-called PM-CARES, which discriminates against the States in that it gives special exemptions (to write off donations as “Corporate Social Responsibility”) that are denied to those who wish to donate instead to the Chief Minister’s Fund of their own states. This fund gives the Prime Minister enormous discretionary power in disposing of thousands of crores of rupees as he pleases. The functioning of the fund is shrouded in secrecy, with even the Comptroller and Auditor General not allowed to audit it.

    This heartless exploitation of the COVID-19 pandemic to weaken federalism has been accompanied by a systematic attempt to further build up the personality cult of the Prime Minister. State-run television, senior Cabinet Ministers, and the ruling party’s IT Cell have all been working overtime to proclaim that only Modi can save India. Even as lives are lost and livelihoods are destroyed by the pestilence, the Prime Minister is going ahead with an expensive plan to redesign India’s capital, New Delhi. This will destroy the historic centre of one of the most beautiful cities in the world, and replace it with a series of concrete and glass blocks. The showpiece of this project is a grand new house for the Prime Minister himself. As one writer has remarked, “the biggest irony remains that a prime minister from the humblest of backgrounds should yearn for a house on Rajpath, no less, to endorse his vision of personal greatness and legacy. Would Emmanuel Macron demand and, more importantly, get a house on the Champs Elysées? Can even Trump order himself a second home on the Mall?” The Prime Minister’s own justification of the project is that it marks not a personal but a national milestone — the seventy-fifth anniversary of Indian independence. This is disingenuous, because past anniversaries overseen by past Prime Ministers did not call for such a spectacular extravaganza. Apparently, what was good enough for Indira Gandhi and I. K. Gujral won’t quite do for the great Narendra Modi.

    The architecture of power reveals a lot about those who wield it, and Modi’s redesign of New Delhi brings to mind not so much living Communist autocrats as it does some dead African despots. It is the sort of vanity project, designed to perpetuate the ruler’s immortality, that Félix Houphouët-Boigny of the Ivory Coast and Jean-Bédel Bokassa of the Central African Republic once inflicted on their own countries. (I refer readers to V. S. Naipaul’s great essay “The Crocodiles of Yamoussoukro.”) And as this wasteful and pharaonic self-indulgence proceeds, an economy that was already flailing has been brought to the brink of collapse by the pandemic. The ill-planned lockdown has led to enormous human suffering. Working-class Indians, already living on the edge, are now faced with utter destitution. In his speeches to the nation since the pandemic broke, the Prime Minister has repeatedly asked Indians to sacrifice — sacrifice their time, their jobs, their lifestyles, their human and cultural tendency to be gregarious. Surely it is past time for citizens to ask the Prime Minister to sacrifice something for the nation as well. Anyway, he won’t.

    When he was first elected Prime Minister in 2014, Narendra Modi said that he wished to redeem India from the thousand years of slavery it had suffered before his election. My son, the novelist Keshava Guha, commented at the time that Modi saw himself as the first Hindu leader to have the entire country under his command. Nehru and Indira — the two prime ministers of comparable popularity before him — were to him fake Hindus, their faith corrupted by their English education and what he and his party saw as an unconscionable partiality towards Muslims. My son is right. Narendra Modi thinks of himself as doing what medieval chieftains such as Shivaji and Prithviraj Chauhan could not do — make the whole country a proud Hindu nation. His followers call him Hindu Hriday Samrat, the Emperor of Hindu Hearts, but it would be more precise to call him Hinduon ka Samrat, an Emperor for and of Hindus. He is, to himself and millions of others, Emperor Narendra the First. The history of personality cults tells us that they are always disastrous for the countries in which they flourish. Narendra Modi will one day no longer be Prime Minister, but when will India recover from the damage he has done to its economy, its institutions, its social life, and its moral fabric?