Zungguzungguguzungguzeng

    All me sparks fly all night;
    all my mouth axle bright, wheel
    the true guillotine serpents’ fleck
    amber sweat off my waistline,

    sibilant as touch-me-nots’
    shuttered leaves rattling Death
    in the Arena. Honey Blight
    and Armageddon. I am Thorn

    Tongue, bare sprite-child nerved
    against neon slush and ants trap,
    I squeal, bitten, “Mother O mother…
    come!” No one but echo and ice.

    Day fevers dusk a Midas
    wisp. Torched corona. I am adagio,
    brisk, cool and deadly onstage;
    my visible black flares yellow,

    speckled lava. All me manna
    chrome, stigmata’s tingling
    rush turns the purging
    cassia spokes ripe, ripe music.

    After Covid

    Paleontologists disagree about whether dinosaurs were thriving or had already entered a long decline when an extinction event finished them off sixty-six million years ago. Depending on who is right, the asteroid that struck Earth either radically changed the direction of evolution or merely accelerated an established trend. Disasters that target the currently dominant species invite similarly divergent interpretations. Their capacity to jolt us out of our complacency is not in doubt. But in so doing, do they truly redirect the course of human history, or do they merely act as catalysts of ongoing change? Covid19 is just the latest in a long series of crises that have raised this perennial question.

    And how it has been raised! Since the pandemic began, journalists, pundits, scholars, and pundit-scholars have spoken as if it will itself periodize history, into the “Before Time” and the new world that we have entered. They have fallen over each other predicting all manner of dramatic change. But what kind of change, exactly? A big divide separates the realists and the continuationists from the aspirationists and the disruptionists. The former prefer to view the coronavirus crisis as an amplifier of present shifts and enhancer of familiar structures. The latter consider it a transformative force, a crisis that is an opportunity, a source of novel remedies for assorted societal ills thought to be in urgent need of correction.

    The continuationist position has much to commend it. After all, a great many of the crises highlighted by the pandemic were already underway. Nationalism and anti-globalist sentiment were on the rise. International indices of freedom were declining. Digital tracking and surveillance had become ever more invasive. Economic inequality was already unprecedented. Corporate debt had already reached record highs, and central banks had already begun to drive more of the economy. Tensions between the United States and China were already mounting. Iran had long been in trouble. China’s strategic ambitions were already obvious, and its growth had already begun to slow, as had India’s. Oil prices were already too low to sustain the bloated budgets of the petrostates. The European establishment was already eager to go green even before the EU’s covid-recovery package unleashed a torrent of funding for salient projects. Online shopping had long been eating into retail’s market share, and remote instruction and telecommuting had been expanding for many years. Millennials had already been dealt lousy cards by the Great Recession and austerity. African-Americans led shorter and unhealthier lives even before they succumbed in disproportionate numbers to the coronavirus. And even the “novel coronavirus” is not as novel as it has been made to sound: multiple outbreaks of SARS and MERS have been recorded since 2002.

    In all these and many other respects, the crisis has served as an accelerator and an amplifier. Sometimes the push was felt to be sudden and hard: the head-over-heels transition to remote work and teaching is a prime example. But even that apparent rupture was firmly rooted in technological shifts that had long prepared the ground. This kind of historical acceleration has a long pedigree. The World Wars and the Great Depression spawned unprecedented mass mobilization (for war and revolution) and economic shocks. Taxes soared, the right to vote spread, colonial empires trembled, and welfare states bloomed. Capitalism was temporarily tamed, suspended, and sometimes even abolished. Yet none of this came out of nowhere: well before 1914, there had already been pensions, progressive taxation, labor unions, public schools, suffragists and suffragettes, and independence movements. What these crises did was give an enormous boost to initiatives that were already in progress.

    The dramatic empowerment of the masses was rooted in the modernizing institutional and economic transformations of the previous century or two. Even purposely radical communist regimes built on nineteenth-century ideas and embraced generic schemes such as industrialization. Genuine detours from the modernizing script — such as the Khmer Rouge’s murderous evacuations of urban residents to the countryside — remained exceedingly rare and unsuccessful outliers.

    Nor had historical pandemics been genuine game-changers. It is hard to imagine disasters more disruptive than the Black Death of the late Middle Ages and the pandemics of smallpox, measles and influenza that ravaged the Americas after European colonizers introduced these pathogens after 1492. Yet even the medieval plague frequently intensified earlier trends, from urbanization and the erosion of serfdom to challenges to Catholic unity and papal supremacy. In the New World, the decimation of the indigenous population greatly assisted the Spanish conquests, but even that process was ultimately a mere acceleration, however monstrous in scale and style. The ultimate outcome had hardly been in doubt: witness the wide and rapidly expanding disparities between the fiscal-military states of Europe and the largely Copper Age societies of the Americas, the fissions that had already opened within the most powerful American empires of the day, and the conquistadors’ zeal to out-colonize their fellow European competitors.

    Nothing quite as dramatic has happened since, even as epidemics remained common. When bubonic plague intensified one more time in seventeenth-century Europe, the most dynamic economies — most notably Britain and the Netherlands — weathered it quite well. In the nineteenth century, massive outbreaks of cholera and yellow fever famously raised support for ambitious public health measures but cannot rightly be viewed as their root cause. After all, the counterfactual of ever richer and more knowledgeable societies persistently failing to invest in sanitation is hardly a plausible one. However much repeated health scares shaped the pace and scale of intervention, it was economic growth and science that made it possible in the first place.

    A century ago, the Spanish Flu was as global in its reach as Covid19 is today, but it turned out to be much more lethal. It targeted not only the elderly but also infants and, most crucially, people in their twenties and thirties — workers in the prime of life who often had just started a family and who left behind spouses and small children. Vast numbers of people died: perhaps forty million, or 1 in 50 people on earth, equivalent to more than 150 million today. Yet in the end, little tangible change resulted from this catastrophe. Improved coordination of international health monitoring was well in line with the overall consolidations of the League of Nations period and fairly unremarkable.

    This is not to say that crises never reroute trajectories of development. But such outcomes are made all the more noteworthy by their extraordinary rarity. A few years ago I was able to identify a clear example: the attenuation of income and wealth inequality through major disasters. I found a pattern that has held true across recorded history: massively violent ruptures were the only events that have ever greatly shrunk the gap between rich and poor. Those events, I found, came in four flavors: the collapse of states, catastrophic pandemics, mass mobilization warfare, and transformative revolution. The latter two were especially characteristic of the twentieth century.

    State collapse was the most ancient leveling force, dating back to Old Kingdom Egypt. It was also the most dependable. Early states were designed as powerful engines of inequality: whenever they unraveled, they took elites and their accumulated wealth and power down with them. While most people ended up worse off, the rich had the most to lose. Pandemics, by contrast, equalized by different means, administering a harsh Malthusian solution to demographic pressure. When they carried off a sufficiently large share of the population, labor became scarce and wages rose, while demand for land fell, reducing its value. The masses who sold their labor found themselves less poor while the rich who controlled capital lost some of their income and wealth.

    There occurred, in these cases, a kind of egalitarian intermission in history. Such shifts are faintly discernible during the first pandemic of bubonic plague at the end of Roman antiquity, are amply documented in Western Europe in the wake of the Black Death, and have also been observed in seventeenth-century Mexico, where real wages increased once indigenous population numbers had dropped to record lows. But these violent levelings — these adjustments by catastrophe — never lasted. As states were rebuilt, greedy elites returned. As plagues faded, population recovered, wages fell, and fortunes grew. Even so, the egalitarian intermissions could go on for generations, providing rare relief from plutocratic dominance. At the very least, they proved to the world that life did not always have to be the way it usually was.

    The unique ruptures of modernity drove home that message with even greater force. In the World Wars — especially the second one — returns on capital plummeted, and governments launched aggressive interventions in the private sector and raised taxes on large incomes and estates sky-high. Conscription and the war effort boosted the bargaining power of workers, and unions thrived. After the war, social solidarity and the newly grown fiscal and organizational capabilities of government underwrote welfare states. During the 1950s and 1960s, and occasionally even the 1970s, economies grew, middle classes expanded, and inequality was kept at bay.

    Those societies were the lucky ones. Others experienced violent upheaval that led to far more dramatic change, as in Russia after World War I and in China after World War II. Communism actively pursued economic equality by the bloodiest of means. But that grand and grotesque experiment merely created new social hierarchies. The compression of wealth and income distributions persisted only as long as violent regimes survived or remained committed to that goal. The moment restraints were relaxed, material inequality soared to previously unknown heights, from Russia to China and beyond. In the West, where equalization had been less radical, its reversal was also more muted, but it has proven equally persistent. Since the 1980s, large-scale economic policies and processes such as globalization, deregulation, financialization, and automation have rewarded some more than others, to put it mildly. In the United States, this process has gone further than among its Western peers, creating economic disparities not seen since the 1920s. Seemingly impervious to political preference, this process has continued under Democrats and Republicans alike.

    So will the pandemic prompt a change of course? Recent history gives us little reason to think so. Although the Great Recession of 2008 battered the One Percenters, they soon recovered, while many others continued to struggle. This time does not look notably different. Inequality has gone up, in the United States and elsewhere. Job losses have disproportionately hit the young, the poor, the less skilled, and traditionally disadvantaged groups. And economic inequalities have been replicated in other domains, from worse health outcomes for the least protected to inferior learning opportunities for poorer students.

    Meanwhile, at least so far, the super-rich have recovered with astonishing speed. Bloomberg’s index of American fortunes among the world’s top 500 reveals the most V-shaped of recoveries: in 2020, a steep plunge between mid-February and the third week of March followed by an almost complete turnaround by early June. Jeff Bezos, the leader of the pack, is richer than ever before. To the relief of non-billionaires, the S&P 500 has closely tracked this plutocratic V. Yet while that has been good news for portfolios of all sizes, key indicators of economic health and general welfare, such as GDP or employment, are lagging far behind.

    Taken together, these developments fit the continuationist template: existing inequalities have been brutally exposed, or have grown, or have made themselves more painfully felt, or all of the above. But there has been no change of direction. What does this mean for the future of inequality — or climate change, or “late capitalism,” or the “neoliberal world order?” What, in other words, are the odds of seizing progressive change from the jaws of … more of the same?

    A genuine re-direction, a bold new course for society, may be accomplished by peaceful or violent means. Aspirational disruptionists hope for the former. Their message is simple: this is the time. At long last, the coronavirus crisis will make it impossible for us to avert our gaze from society’s ailments. It will shake us out of our customary stupor and jolt us into action, ready to combat inequality and systemic racism while shoring up health care and worker protection and infrastructure and the environment. This perspective, popular in large parts of punditdom, involves a bold leap from trigger (virus) to outcome (transformative change). The proximate mechanisms that are supposed to generate such sweeping change tend to be rather less well defined. During the Democratic primary, Bernie Sanders’ vague allusions to a “movement” that would somehow ensure implementation of far-reaching programs were emblematic of the magical thinking employed to bridge that chasm. The path to a grandly upgraded social contract or a Green New Deal seems similarly obscure. Yet the more ambitious and game-changing the goals, the more clearly formulated the way to reach them needs to be. At least for now, the needed clarity is absent.

    The current buzz of progressive energy in politics may prove deceptive. The election of Joe Biden was a relief of historic proportions, but with mainstream politics stuck in crisis-management mode, drastic re-directional change would seem less plausible than ever. The results of our recent election, which highlighted persistent polarization and all but guaranteed a prolonged stalemate, confirm this impression. (Old news alert: Democrats and their allies enjoyed a 26-seat margin in the Senate when the New Deal got underway, and a staggering 63-seat margin when FDR’s Supreme Court packing plan failed. Obama dropped the “public option” from the Affordable Care Act when his side in the Senate was 18 seats ahead. Enough said.) American institutions have focused on keeping everything afloat, as have the leaders of European countries and others elsewhere. If such efforts bear fruit, the prospect of radical transformation will once again recede.

    That would not come as a surprise. Historically, transformative change has been born of extraordinary violence. Yet Covid19, for all its terrors and mortality rates, is not particularly violent at all. Worldwide, the 1.25 million or so lives lost as of this writing equal about a week of normal mortality (which averages 165,000 per day) — or rather a few days more, allowing for undocumented deaths; and they were for the most part gleaned from among those already well advanced on their journey to the sweet hereafter. This is a far cry from the fall of ancient Rome, which set back civilization by centuries, or the Black Death, which took one in three Europeans, or the world wars and communist takeovers, which ruined entire countries and killed many tens of millions of all ages.

    Looked at from that angle, the notion that we might achieve re-directional change without comparably massive dislocations seems more wishful thinking than realistic strategy. At the very least, it is squarely at odds with what history teaches us. There is no historical precedent and no obvious contemporary mechanism for making that happen. While we cannot rule out anything — what if the 2020s are different after all? — this should certainly give us pause.

    But what of the alternative? What is the potential for ruptures dramatic or violent enough to upend the established order and open up space for transformative change — for upheaval so severe that it cannot fail to force us off the beaten path?

    My answer will be sobering for those who crave radical change. Conservative forces are more powerful than they have ever been. The four violent leveling mechanisms that operated in the past are now kept at bay by four robust stabilizers of the established order. Mass affluence is the most basic one. It is hard to find societies with a per capita GDP of more than $5,000 or $6,000 (measured in 2011 standardized dollars to ensure comparability) that have experienced societal breakdown or revolution. By some measures not even Yugoslavia in the 1990s cleared that fairly modest threshold of prosperity.

    While this lack of precedent does not rule out truly violent dislocations in Western countries or their peers, it strongly suggests that the likelihood is low. Moreover, economic achievement tends to lower fertility and age populations: the resultant paucity of desperate young men — the most plausible agents of revolutionary struggle — imposes a pacifying constraint. Those concerned about secession, armed conflict, and collapse will have to cite the likes of Syria, Yemen, and the two Sudans rather than the United States.

    Such outcomes are qualitatively different from the anti-government protests and riots that have become more common in the United States and some European countries over the last decade. Rooted in the Great Recession and its social consequences, these protests have mobilized growing numbers against austerity, globalization, and more recently climate change. The wave of activism and unrest that started in May 2020, while triggered by particularly jarring instances of racial injustice, would likewise seem hard to disentangle from the Covid19 lockdowns and the sudden economic downturn that disproportionately hurt the young. Once again, the pandemic amplified existing discontent.

    It is true that this need not be the whole story. In pioneering work that seeks to identify regularized patterns across history, Peter Turchin has argued that these events are not merely responses to acute crises, but are meaningfully correlated with gradual shifts in destabilizing variables from political polarization to immigration and inequality over the long term. He envisions a cycle moving from relative stability after American independence to a peak of destabilizing factors from the 1860s to the 1910s, and then on to another minimum in the 1950s followed by a renewed and ongoing rise. At least so far, however, the overall intensity of unrest has been much lower than it was in the past, when society was poorer and less well buffered against privation. The United States seems a long way from the riots of the late 1960s, let alone the bloody labor conflicts of the 1910s and 1920s — not to mention the Civil War. This will come as bad news to radicals.

    This is not an accident. Other stabilizers have been contributing to this striking attenuation. The social safety net has helped tame the fallout from crises. Europeans woke up to the virtues of welfare when the mobilizations of the Great War and the Bolshevik revolution shook the foundations of the old order. Although America briefly lagged behind, the Great Depression quickly forced it to follow suit. Support schemes have been expanded ever since, from Johnson’s Great Society to the second Bush’s prescription drug benefit and Obama’s Affordable Care Act. Threadbare though this system may seem to admirers of the most generous European welfare states, it largely manages to stave off mass immiseration and serious social unrest, especially as ad hoc patches — such as the $600 weekly supplement to unemployment benefits this spring — can be applied as needed.

    The third and fourth great stabilizers — the other impediments to cataclysm — are more recent in origin. Quantitative easing — whereby central banks expand the money supply by buying government securities — has come to play the role of a miracle drug that promises to shore up businesses and markets without the need for austerity or punitive taxation. Thus far, this torrent of keystroke money has been good news for investors and bad news for progressives. The aforementioned V-shaped recovery enjoyed by the former would not have been possible without this intervention. And the final stabilizing force is science, and its particular potency in the face of a pandemic. During the Spanish Flu, there was no flu vaccine and DNA was unknown. A century later, the SARS-CoV-2 genome was sequenced mere weeks after the first report in Wuhan, and its mutations are now carefully tracked around the globe. Within months, more than a thousand drug and vaccine trials were underway, fast-tracking has cut development time to a fraction of the usual slog, and pharmaceutical production capacity was ramped up on spec.

    Of course we still do not know how all this will work out, especially as the efficacy — and the popular acceptance — of new vaccines and treatments remain uncertain. Yet a measure of cautious optimism seems warranted. The sooner science delivers the goods, the better are our prospects of a return to some version of normal, if indeed that is, or should be, our goal.

    Yet modernizing development is not a one-way street toward greater resilience. At the same time as it buttresses the established order with growth, welfare, finance, and science, it also undermines the status quo, rendering it more socially and economically fragile as a direct result of progress. Governance is the main exception. In rich countries, the state and its agents are firmly entrenched. If push came to shove, we would soon realize just how far their instruments of surveillance and repression surpass anything available in the past. The only restraint resides in the political will to employ these means.

    Welfare matters even more. States that capture and redistribute between a third and half of GDP cannot be dislodged, at least not without bringing down everything else. There is no plausible alternative. Bloated in their bureaucratic complexity and persistent in their insinuation into every conceivable aspect of our lives, they are hard to capture, harder to restructure, and impossible to overcome. 

    Fragility lurks elsewhere now, above all in the economic domain. Advanced economies have become vulnerable in new ways. Three principal reasons stand out, all of them direct consequences of development and progress. First, there is globalization in the broadest — and most de-politicized — sense of the term: the interconnectivities and interdependencies that govern production and exchange. This is, empirically, how economies now work. A chain with many links has many vulnerabilities. Yet despite initial worries about vulnerable supply chains, this intricate web seems to have passed the latest test.

    The second is the growing importance of the service sector, which expands at the expense of farming and manufacturing as societies grow richer. In normal times, retail, hospitality, and entertainment account for a tenth of America’s GDP. The greater the role played by these services, the more lockdowns and social distancing drag down the overall economy. When the Spanish Flu struck, there was far less to shut down than there is now, from airlines to resorts.

    But this time we also caught a lucky break. Fully a third of official economic output is generated by finance, insurance, the real estate business, and all manner of financial and business services. Well suited to remote work, these crucial white-collar sectors were spared major devastation. Had Covid19 appeared twenty years ago, they would have been much harder hit, with dire consequences for economic life more generally. (All we got was the pathetic Y2K scare.) This in turn underscores the stabilizing potential of science and technology well beyond virology. Information technology has truly been a savior.

    Overall, these vulnerabilities have been rather well contained by some of the same technological and economic innovation that has brought them into being. This leaves a third and altogether different source of fragility: our valuation of life and our attitude to risk. Given short shrift in current discourse, it deserves far more attention. All other things being equal, a society more inured to morbidity and death would be considerably more resilient in the face of a pandemic, and so would be its economy.

    To be sure, humans have always feared disease and the end of life. Yet no matter how fundamental and invariable such attitudes might seem to be, they are sensitive to overall development. For most of history, life was short. Two centuries ago in the West, and a century ago or even less elsewhere, average life expectancy at birth was a third of what we now take for granted. Perhaps one in three babies did not survive their first few years. The ranks of adults were whittled down in a steady drain of attrition. And even as some lasted to a ripe old age, they were vanishingly few in number. No one thought that odd or even remarkable. Much of the underlying suffering was frightfully mundane, driven by childhood diarrhea and dysentery and typhoid and tetanus. Epidemics merely added further uncertainty to the mix.

    When great plagues struck ancient civilizations, there was nothing to be done. The basics of infection long remained a mystery. In Europe, the Black Death of the late Middle Ages inspired early experiments with quarantines. An improvement over helpless laissez-faire, those early lockdowns nevertheless failed to solve the problem: waves of plague pounded Europe for more than three centuries. In the 1720s, when Marseille was sealed off from the outside world for two years to prevent a plague outbreak from spreading inland, half of its residents perished. And the disease slipped through anyway: not even the seventeen miles of actual stone walls that had hastily been thrown up cross-country managed to stop it. When Yellow Fever swept Philadelphia in 1793, the federal government shut down and almost one in ten residents lost their lives. Residents blamed a variety of causes from rotting coffee and lightning rods to that old mainstay, divine punishment.

    But then the world finally changed. In the centuries that followed, successive breakthroughs in epidemiology gradually rendered human life more predictable and less vulnerable. Modernity’s crackdown on smallpox and typhus, on cholera and typhoid, on tuberculosis and yellow fever, on polio and measles set us free from much misery and early death. This was without precedent. Science did not restore a better world we had lost. It cleaned up and increasingly secured a world that had always been a dirty, dire, deadly mess.

    It has been all too easy to get used to the blessings of that epochal clean-up, and to take them for granted. Now is the first time they are slipping from our grasp, and the first time we count on science to retrieve them. Gone are the low expectations of even a hundred years ago. When the Spanish Flu appeared back in 1918, global life expectancy at birth was only half of what it is today, and the Grim Reaper was still a constant companion in ways we now find hard to fathom. Vaccines for tuberculosis, typhus, tetanus, measles, and polio had yet to be developed. Viruses raged largely unchecked. In that world, a new strain of influenza was simply one more refugee from Pandora’s well-stocked box. And when that pandemic departed as abruptly as it had arrived, scientists could not take credit.

    2020 was very different. Cradled by the comforts of peace, penicillin, Prozac, and prosperity, we have grown far less tolerant of hazard than our toughened ancestors. Economies wither under anxious distancing even as case fatality rates fall far short of those wrought by historical pandemics and greatly favor those with most of their lives still ahead of them. This makes our pandemic above all an economic crisis, with all the social, psychological, and political repercussions that entails. Economic activity hinges very much on perception — not just on bare needs (long met by modernity) but on the confidence that has us demand and consume all those superfluities that prop up employment and GDP. But now, unlike in the past, that confidence has been shaken.

    Every year about 2.8 million Americans die from all causes. As of this writing, Covid19 had raised this tally by a little over 8 percent, or a bit more if all likely deaths are included. And these bare numbers inflate the pandemic’s overall impact. In the United States, the mean age of death of or with Covid19 has been around 75 years, an age at which remaining life expectancy averages 12 years (or rather less for those with pre-existing conditions, which are overrepresented among those who do die). This is very much worth putting in perspective. Recall the morbid excitement of the media when, last spring, Covid19 fatalities passed the toll of the Vietnam War of a little over 58,000 — a false equivalence if there ever was one. Average age at death among those soldiers was 23 years, at which point the average man could expect to live another 48. A total of 2.8 million years of life were lost. At twelve remaining years per death, the official Covid19 toll did not reach that mark until around Election Day. And even that calculation ignores the fact that these young soldiers were cut down with their whole lives ahead of them and before most of them had a chance to start a family. If we found a way to factor that in as well, the weight of loss would appear even more staggering.

    And yet our response is different, our fear more palpable. How would we feel today if 2.7 million young Americans were drafted to go fight overseas, as they were then? (Or rather 4.5 million, adjusting for today’s larger population.) Would that even be possible? Yet just fifty years ago they mostly went, even if some claimed bone spurs or went to Canada. Over 26 million Americans served in the two World Wars and in Korea, without major resistance. In the Civil War, 1 in 50 Americans was killed. The shift away from treating lives as expendable and fellow citizens as cannon fodder is a fairly recent one.

    Good riddance, we might say: few if any of us will pine for that bygone age. But our growing commitment to safety, and our ability to honor it, have come at a price. Some economists have cooked up an unappetizing concept known as the Statistical Value of Life. Working from dubious premises, they inform us that an American life is currently worth close to ten million dollars. Government agencies eagerly seize on this number to impose costly safety regulations on private industry but conveniently ignore it when compensating the families of service personnel killed in action, who are usually fobbed off with a million or two in lifetime support. Callous as it might seem, that latter approach at least has the virtue of being more realistic than the high-end price tag that signally fails to align with other metrics: equivalent to twice average lifetime per capita GDP in the United States, it values all Americans, who account for a paltry 4 percent of humanity, at more than 3 quadrillion dollars, or close to ten times total global wealth.

    Such absurdities are but a pale reflection of a broader revaluation of values. We no longer fight fear, we promote it. We seem transfixed by our fragility. Summary school closings, which wreak havoc on countless working parents, are hard to reconcile with the minuscule health risks faced by the very young. Let us grant that they are designed to protect those parents and not just unionized teachers. But plenty of parents have also been wary of letting their children return to college, despite the fact that only 0.1 percent of Covid19 deaths have occurred between the ages of 18 and 22. One cannot help wondering how the families of the 26 million Americans who served in the two World Wars and in Korea would have felt about that.

    During the Spanish Flu in the fall of 1918, male college students trained for the war in France. And even as this pandemic took more than half a million American lives, its economic consequences were ultimately minor. Shutdowns were haphazardly imposed and retail was not greatly affected. For the most part, people just plowed on — partly because working from home was not an option and welfare was almost non-existent, and partly, perhaps, because it did not actually occur to them not to. The notion that the preservation of life justifies almost any cost — widely if unevenly embraced by a citizenry ready to pull back regardless of government fiat — had not attained its present dominance.

    This collective anxiety, and the open-ended distancing and recession it spawns, places an especially heavy burden on the fourth great stabilizer. Well beyond warding off mass mortality and morbidity, science must now restore the sort of confidence that governments cannot decree. Our trust in the blessings of modern medicine has eclipsed traditional religious beliefs as well as faith in government — at least in much of the population. Last year, a Pew Research Center survey found that more than twice as many Americans had confidence in scientists as in elected politicians. And why wouldn’t they? Unlike the uneasy compromise of life with rampant Covid19, the renewed freedom of life after — or rather alongside managed — Covid19 is entirely the gift of the high priests of science. And the more we value life, health, and safety, the more we rely on that gift to get everything back on track.

    This is all the more true as reactions to containment measures vary, driven by factors as diverse as vulnerability, political preference, and class. The pandemic has even provoked a new know-nothingism, an unembarrassed hostility to science, that reached all the way into the White House. This variation in responses to the crisis fosters acute tensions — between young and old, between red and blue, between rich and poor. The longer the viral threat lingers, the more corrosive these tensions are bound to become. Government must help us stay afloat while the pandemic rages. It also has a crucial role to play in subsidizing and distributing medical remedies. At some point it might even have to mandate their use. In the end, however, only science can deliver us.

    The coronavirus shock has been both amplified and checked by modernity: by global connectivity and fragilities on the one hand, and by financial relief, Zoom, and medicine on the other. It bears within it both boost and restraint. This is something it has in common with a greater crisis to come. Neither the Paris Accord nor Greta Thunberg will save us from climate change. Geneticists, physicists, and engineers are our only credible line of defense.

    Much has already been accomplished, from drought-resistant genetically modified crops to the ever more effective harnessing and storing of renewable energy. Nuclear power, an even more powerful redeemer, has been at our disposal for almost eighty years, even as misguided politicians around the world are doing their craven best to snuff it out. But more will be needed to sustain mass affluence in the First World and to spread it elsewhere. If nuclear fusion remains a pipe dream, geoengineering may well have to come to the rescue. Yet whatever the precise configuration of techniques that will keep us on track, all of them will share the same source: science.

    We can argue to our heart’s content whether the twenty-first century will be America’s or China’s. But that debate is moot. We already know that this will be the century of science. For the first time in history, it will not be enough for scientists to make the world smarter and richer. Now they will be called upon to make sure that it does not slide back into the dark days when germs held humanity hostage, and to enable us to square the circle by reconciling environmental protection with ongoing growth.

    Modernity has long struggled to contain the forces it unleashed. Smog had to be tamed. Nuclear war — the ultimate genie-out-of-the-bottle hazard of modernity — had to be averted. HIV, Ebola, and the SARS outbreak of 2003 had to be contained. The current challenge has not introduced a new dynamic: it is simply the latest in a long line of challenges brought forth by progress. As a challenge, the coronavirus crisis ought to be manageable enough to be overcome even as it accelerates ongoing shifts and trends, not all of them equally worrisome. Yet this need not be true of the next “novel” virus, be it natural or man-made — let alone of the much more complex and daunting process of environmental degradation.

    It may seem strange, even a little morbid, to marshal an ongoing crisis to ponder future ones. But that is exactly what we must do, once we appreciate that Covid19 is unlikely to serve as that rarest of agents, a genuine historical disrupter. But the post-Covid19 continuities, for better and for worse, should not make us complacent about the volatility of late modernity. The most positive spin we might put on our epidemiological calamity is that it has provided us with an invaluable trial run for the more enormous crises that await us. Just as coronavirus outbreaks in the early 2000s and the financial crisis of 2008 taught us lessons that have come in handy (or would have come in handy had they been heeded), the more we are willing to learn from the present pandemic the better equipped we will be to deal with worse travails down the road.

    In the end, this might well turn out to be the most important legacy of Covid19: an enlargement of our imagination of disaster, a sober preparedness for the perils that surely await us. Do not expect the virus to re-make our world. It will not force us to solve our most pressing problems; only we can force ourselves to solve them. There was a touch of wisdom in Michel Houellebecq’s sardonic prediction that the post-Covid19 world “will be the same, just a bit worse.”

    Strangering

    According to Wallace Stevens, “Every poem is a poem within a poem: the poem of the idea within the poem of the words.” We often put “the idea” in a brief phrase: “the evils of war.” We rarely talk about the poetry of the idea. By itself, the theme, the idea, is always banal: it has to be recreated by the poet’s imagination into something animated. (And since the poet at hand is male, I’ll call him “he” in what follows.) And then, how is he to arouse a glow of personal vividness within the language, and create “the poem of the words”? 

    Suppose the poet wants the “idea” of his poem to be “the disparity of cultures.” What might “the poem of the idea” be? This particular poet’s imaginative move is to locate his two cultures in cosmic space, on two different planets, one of which is Planet Earth. And how will the words be made into “the poem of the words?” An answer occurs to him. What if a visitor from outer space had studied English, but could not escape mistakes in using it for the first time? At this initial stage, there remains a great deal to be done, since both the poem of the idea and the poem of the words are still sketchy and unfulfilled. But at least the poet now has the two poetries to work with. And the poet, Robert Hayden, an Afro-American (his preferred term), is convinced that a poem written by a minority poet has to be as strong in the poetry of its words as in the poetry of its idea. In “Counterpoise,” a group-manifesto that Hayden published in 1948, he declared emphatically, “As writers who belong to a so-called minority we are violently opposed to having our work viewed, as the custom is, entirely in the light of sociology and politics.”

    Let us suppose that it is 1978, and in a new book of poems a reader comes upon an odd entry, bizarrely bracketed fore and aft to show that the title is an editorial addition: “[American Journal].” Who, the reader asks, kept the journal; for whom was it intended; who attached the subsequent title implying, by its non-authorial initial capitals, an editor familiar with written-English usage? The answer is suspended. As the poem opens, the reader sees a series of totally unpunctuated sentiments flowing down the page in hesitant and unequal paragraph-stanzas halted intermittently by pauses. The journal-speaker is fluent, but not error-free, in English. The reader is in fact encountering the internal stream of consciousness of an extraterrestrial, dispatched by his rulers (“The Counselors”) to spy on, and report on, a group of brash new planetary invaders calling themselves “americans.”

    We see that the spaceman has learned only oral English, and knows none of the conventions of written English such as punctuation, apostrophes, and upper-case letters; but there must exist in his own native language some sort of honorific distinction reserved for the rulers, “The Counselors” (the honorific is translated in his journal by its sole and singular use of capital letters). In the poem’s prologue, the extraterrestrial muses on his new situation:

    here among them     the americans     this baffling
    multi people         extremes and variegations     their
    noise        restlessness       their almost frightening
    energy                  how best describe these aliens in my
    reports to     The Counselors

    disguise myself in order to study them unobserved
    adapting their varied pigmentations          white black
    red brown yellow       the imprecise and strangering
    distinctions by which they live     by which they
    justify their cruelties to one  another

    charming savages     enlightened primitives     brash
    new comers lately sprung up in our galaxy     how
    describe them       do they indeed know what or who
    they are         do not seem to             yet no other beings
    in the universe make more extravagant claims
    for their importance and identity

    The spy, disguised and passing as a fellow-citizen, studies the unfamiliar new tribe, noting its heterogeneity, its “strangering” distinctions, and its repellent moral justifications. Little by little the inner voice of the spy reveals his burden: he must compose a report for The Counselors, and he feels inadequate to the task. Although the planet of the “aliens” belongs to the same galaxy as his own, he knows no group in the entire universe who regard themselves so insolently, so proudly, as these “savages” do. So far, the extraterrestrial voice has offered relatively little information about its own powers and intentions; only later — during a visit to a rough tavern — does it reveal that it has masked itself (at least in the tavern) as male. I am calling the voice “he,” but it has the power to exist in different genders and can adopt local skin-pigmentation at will. 

    Since the planetary visitor is addressing himself, we can only guess, from his own categories and judgments, what sort of person is generating these words. We learn that he is frightened by lawless energy, by noise, by unpredictable restlessness, by multiple skin-colors: in disguise he has “adapted” — but he means “adopted” — various pigmentations depending on his social context. “Adapted” is one of his linguistic falterings, like “strangering” (in lieu of “strange”). He has strong moral views, and is revolted by the cruelties he sees among these “savages” (however charming); he has equally strong intellectual views, judging the newcomers as “primitives” (however sophisticated their technology). To him, the “americans” are aliens incapable of introspection or self-analysis yet ever-boastful in their claims to importance and to a unique identity.

    Hayden’s 141-line “[American Journal]” has attracted a good deal of contemporary attention, but its imaginative swirls of inconsistent “american” ideologies and behaviors have provoked more critical observation than its equally imaginative flights of language. I want to reflect here on Hayden’s imaginative interest in creating a spaceman’s mind and forms of expression. “The Counselors” on the spaceman’s planet apparently maintain a training laboratory for spies, providing language-tapes of any culture they intend to investigate. Two sets of these tapes are labeled “English,” one transmitting British English and the other American English, and the spy has been afforded both sets for his diligent preparatory study. One of the most entertaining aspects of this nonetheless serious poem is its presentation of the verbal and interpretive blunders that any visitor to a foreign land is bound to commit when he finds himself embedded in a bewildering unknown culture. Hayden must have taken intense pleasure in thinking up, all through, the multi-faceted “poem of the words” used by the alien.

    “[American Journal]” presents itself as a quasi-symphonic poem, advancing with the fluidity of musical movements in the spaceman’s successive choices of aspect: scenes, emotions, distinctions of pacing, degree of self-distancing. After the opening prologue, the voice (perhaps to reassure The Counselors) begins to liken this brash new species to his own tribe; “like us,” he says, they have advanced technology and have traveled to the moon (grossly leaving their “rubbish” behind); and apparently they too worship “the Unknowable Essence” (but how do they define their Unknowable?). In lieu of shamans they have “technologists” (a native speaker might have said “scientists”). The observer tallies geographical and meteorological earth-features that he recognizes from his own home-planet, including the temporal feature of the sun by day, the moon by night:

    oceans deserts mountains grain fields canyons
    forests           variousness of landscapes weathers
    sun light moon light as at home

    Nostalgia for “home” has made him begin his observations with familiar perceptions, but he is as yet a novice in English pronunciation: he separates the word “sun” from “light” and “moon” from “light,” as though his overarching category “Light” has separate subordinate categories, that of the sun and that of the moon. With him, the light has not, as in native English voicing, been absorbed almost silently into the polysyllabic “moonlight” and “sunlight.” 

    The observer, we are pleased to see, has an aesthetic sense resembling our own, responding instantly to “red monoliths” like those of his remembered “home”:

                                     much here is
    beautiful     dream like vistas reminding me of
    home       item         have seen the rock place known
    as garden of the gods and sacred to the first
    indigenes     red monoliths of home

    For the first time, an actual American name has at this point made its way into the spaceman’s report: the so-called Garden of the Gods in Colorado Springs is a stark terrain of red monoliths held sacred by Native Americans. The name points us to an incident in the life of Robert Hayden. In 1975, five years before his death, the day after his reading at Colorado College, Hayden visited the Garden of the Gods at the invitation of a young MFA student whom he had met the night before. A few years afterwards, that student — Yusef Komunyakaa, later a distinguished poet himself — recorded their walk:

    Hayden had to be assisted closely along the rocky paths up the beautiful hills.  He seemed nearly blind. . . .  Soon we were in the heart of the Garden of the Gods, beside a formation called Balanced Rock—a smaller stone supporting a larger one, massively depicting a visual mathematics too subtle for words.  Hayden stopped, looked around, and said, “I love this country.”

    In “[American Journal]” Hayden bestows his own warm response to the grandeur of the scene on his extraterrestrial, fusing himself and the surreal cosmic visitor. 

    A reader aware that Hayden is African-American may suspect that he is satirizing, in the response of the technically sophisticated alien contemplating the americans, the discourse of a “civilized” white gazing, with simultaneous denigration and envy, at a “primitive” Black culture. But by now enough ink has been spent on the poem to discourage any idea that its “message” is without subtlety; a number of identity-determinants — national, linguistic, gendered — populate the poem. Although the spy celebrates the landscape so like his own, he is not free to mention in his report the sensuous appeal of the americans themselves. After his search for the right adjective to describe them — “i am attracted to / the vigorous americans   disturbing   sensuous” — he becomes ashamed, adding “never to be admitted,” meaning, surely, not even to himself.

    The next movement of the poem is a scherzo, in which the alien-in-disguise has a conversation in a tavern with an american. When he asks what is meant by “the American dream” the “earth-man” answers in ignorant colloquial language (with its crude “irregardless,” its unthinking alliance of “sure” and “i guess”). The alien, never having read written English, is mistaken in substituting two words for the proper English single word (as in “night mare” and “every body”), and he is baffled by the redundant insertion of the all-purpose American linguistic filler, “okay.” The “earth-man” says, of the American Dream:

                 sure
    we still believe in it i guess. . . irregardless of the some
    times night mare facts we always try to double
    talk our way around                 and its okay the dreams
    okay and means whats good could be a damn sight
    better            means every body in the good old u s a
    should have the chance to get ahead or at least
    should have three squares a day          as for myself
    i do okay      not crying hunger with a loaf of
    bread tucked under my arm you understand

    The alien’s dutiful previous listening to the tapes of spoken English does not equip him to understand the torrent of incorrectness, slang (“double talk,” “three squares”), and abbreviations (“u s a”) uttered by the “earth-man.” He puts forth, in reply to this barrage of American dialect, his courteous British reply (deriving from his alternate set of language-tapes, the British one): “i / fear one does not clearly follow.” His tavern-mate becomes suspicious:

    notice you got a funny accent pal        like where
    you from he asked       far from here i mumbled
    he stared hard i left

    The tavern-dialogue teaches the alien that his linguistic mimicry is still imperfect:

    must be more careful    item     learn to use okay
    their pass word              okay

    After the comic interlude of the tavern scene, however, “[American Journal]” suddenly turns savage, as a street riot erupts, alive with new unintelligibility. The alien sees people he characterizes as “sentinels” — a literal translation from some word in his native tongue, since he hasn’t learned the correct English word for “police.” The “sentinels” are disturbingly re-characterized by the crowd — “pigs / i heard them called” — as the police retaliate “with flailing clubs”:

                  unbearable decibels      i fled lest
    vibrations of the brutal scene do further harm
    to my metabolism already over taxed

    A biological fact about the alien — that under the rule of The Counselors the capacity to tolerate violence has been genetically bred out of his metabolism — leads him to side with the police, as with the primary authoritarian decisions that have created and socialized him. His voice becomes that of a repressed creature unconscious of his own victimization, incapable until now of any mental act not channeling the opinions of The Counselors. Yet his equilibrium has been so shaken by the violence of the riot that the very word “serenity” shatters into linguistic fragments over a line-ending:

    The Counselors would never permit such barbarous
    confusion          they know what is best for our sereni
    ty          we are an ancient race and have outgrown
    illusions cherished here          item       their vaunted
    liberty

    His (temporary) identification with The Counselors allows the alien to parody the earth-men’s truculence:

        “no body pushes me around i have heard
    them say             land of the free they sing          what do
    they fear mistrust betray more than the freedom
    they boast of in their ignorant pride     have seen
    the squalid ghettoes in their violent cities

    (Nowhere does the alien sound more like a white supremacist than here: he has learned, and uses, the abusive word “ghetto.”) And he wonders, returning to the word “paradox” from an earlier summary:

    paradox on paradox       how have the americans
    managed to survive

    After the deafening street riot there arrives a louder scherzo than the earlier tavern-interlude: now it is the “patriotic” spectacle of the Fourth of July. As “earth-men / in antique uniforms play at the carnage whereby / the americans achieved identity,” the alien reveals that on his own planet they indeed do study American history in its origins:

                                                        we too recall
    that struggle as enterprise of suffering and
    faith uniquely theirs

    But what has happened in the vulgar modern era to the noble independence celebrated on the Fourth? With mockery the alien sees its debasement into a craven nationalism:

                                                         blonde miss teen age
    america waving from a red white and blue flower
    float as the goddess of liberty                            a divided
    people seeking reassurance from a past few under
    stand and many scorn

    “A past [that] few understand and many scorn”: in these high-minded words the alien exhibits his own superior wisdom as he judges American ignorance and political decline. And hearing contemporary skeptics dismiss the Fourth of July parade (“why should we sanction / old hypocrisies”), the alien returns to his “native” moralizing and irritated scorn. Yet his anxiety exhibits itself afresh as the revered word “Counselors” breaks into pieces at a line-end:

                                          The Counse
    lors would silence them

    a decadent people The Counselors believe         i
    do not find them decadent     a refutation not
    permitted me

    The Counselors, we begin to understand, do not countenance objections to their views. The alien’s irrepressible mixed feelings about the americans throw him into a violent mixed diction as he ends up siding with the Counselors’ stereotypes of raw crude “earthlings”:

                   but for all their knowledge
    power and inventiveness not yet more than raw
    crude neophytes like earthlings everywhere

    With the subsiding of his unresolved responses to the Fourth of July, the alien wonders how his report on america will strike The Counselors. Since he is, himself, delighted by the ingenuity of his multiple disguises, he reminds himself sotto voce to induce approval in The Counselors by describing his stratagems. But even while reassuring himself that The Counselors will admire his powers, he still worries about their eventual estimation of his work. Hoping to curry favor, he describes his spy-costumes in a cascade of nouns and idioms (learned, we feel, rather on the street than from the bland tapes of his language-lab):

    though i have easily passed for an american   in
    bankers grey afro and dashiki long hair and jeans
    hard hat yarmulke mini skirt               describe in some
    detail for the amusement of The Counselors   and
    though my skill in mimicry is impeccable        as
    indeed The Counselors are aware        some thing
    eludes me      some constant amid the variables
    defies analysis and imitation               will i be judged
    incompetent

    In his next, most analytical moment, the extraterrestrial rises to the philosophical diction natural to his culture — a discourse technologically supreme, wholly rational, but emotionally repressed. The minor role of america in the cosmic scheme of things (“an iota in our galaxy”) is evident to him, but he is disturbed by its problematic existence as a conceptually insoluble entity, resistant — in its mobile lability of science and fantasy, logic, and imagination — to the analytic reason that is the pride of his civilization. He sighs in frustration:

    america           as much a problem in metaphysics as
    it is a nation earthly entity  an iota in our
    galaxy           an organism that changes even as i
    examine it     fact and fantasy never twice the
    same     so many variables

    As the spy ponders the unintelligibility of america, its antagonism to all he has valued, he realizes that he is in physical danger from its natives: already his presence has been rumored in the newspapers. While the papers laugh at those “believing” in the existence of “humanoids,” the “humanoids” in their spaceship laugh back at the scoffing newspapers. Quiet in his withdrawal from the company of his “crew,” the alien reflects on all he has seen and heard: the gaudy Fourth of July parade, blonde miss teen-age america, the suspicious “earth-man” in the tavern, the street-riot between citizens and “sentinels,” the awful decibels of both celebration and violence, the confluence in the streets of dashikis and yarmulkes. Lost in his memories, the alien, tensely frustrated, cannot define what the americans are: he knows only that the american personality confounds his own schooled, careful, sexless, logical self. He cannot, now, return unthinkingly to his own sterile planet, submit to The Counselors’ rules, and censor his speech. Once home, he will ponder the “variegations” of his past journey — his adroit disguises of body, skin-color, gender, and manner of speech — but for all his wide-ranging observation, he will remain forever unable to solve the “quiddity” — the “thisness” — of this paradoxical population, this exuberant and savage rebel-tribe.

    Hayden’s science-fiction is doubly dystopian. His spacemen are like Swift’s whinnying Houyhnhnms, inhuman, chilly, fastidious, rational; and their representative courier flinches at the americans’ untidiness, their boasting, their costumed mimicry of the carnage of 1776, their cruelty, their childish “floats,” their veneration of the “Goddess of Liberty” in the person of a teen-ager in a toga, their incoherent “metaphysics,” their elusive essence.

    Behind the agitated monologue of the visiting “humanoid” lies the implied story of his former life: he was born, he was schooled, he was reprimanded for any excess of act or emotion, he was indistinguishable from others of his tribe. Passionless, he needed no human relations (family, wife, children); he worshipped “technologists,” and excelled in scientific observation, memory, and analysis. Posted to another planet to spy on the brash new tribe of “earthlings,” he is disposed at first to dismiss their childish “civilization,” but eventually, as he moves among them, he discovers in their “variegated” pigments and “various” behaviors much that he has lacked in his artificially rational former life. And what will his future be? He will be sadder, and wiser, forever alienated from his compliant fellow-citizens, unable to convey to them the extravagance of emotion and action, free from punitive supervision, that the americans, for all their faults, possess.

    Hayden made room in his poem for his extraterrestrial’s implied past and presumably alienated future to sharpen the contrast of the two cultures, the governed rational and the unbridled free. Both are insufficient, both are incomplete. The rational and disciplined one sees the unbridled one as ungovernable; the unbridled one would see the alien’s authoritarian Counselors as intolerable. Neither culture is really admirable. The chief difference between them is that one is subjugated, the other free (in both virtue and vulgarity). The free culture has no stable government; its people are unruly, as likely to sponsor a riot as a parade. The governed culture has the dark stability of its euphemized “counseling” — coercive, repressive, severe, implacable.

    Hayden invented from scratch the unusual sensibility and the “faulty” English of the alien, his innocence as to punctuation and spelling, his nervousness intermittently betrayed by his words’ falling into pieces (not syllables), his complacent moral judgments, his intellectual scorn of the “earthlings” who have gotten to the moon but no further, his horror at the sheer noise of the american streets in parades and riots — all the while showing his opinions being put into question by that elusive “something” for which he has no words. It is, of course, freedom, both in creation and in destruction.

    We can, if we choose, read this conflict of cultures as embodying on the one side technologically schooled and hierarchically socialized America and on the other side that supercilious America’s view of African-American life. There is something to that reading, but not everything. Hayden repudiated the narration of victimhood as the chief resource of a minority writer, just as he repudiated despair at the racial division of his America. His “God-consciousness” (as he named it) led him to an unshakeable conviction of human brotherhood and enabled him ultimately to join his wife Erma’s church, the Baha’i, which exists without a hierarchical structure and affirms belief in the unity of all humankind.

    And yet Hayden had, by his own acknowledgment, periods of profound depression as well as periods of strenuous belief that relations between the races could not only improve but become harmonious. He incurred the wrath of the Black Power movement in the 1970s because of his conviction that the literature of organized protest movements tended toward propaganda, not art. Nor could he bring himself to refuse Emersonian symbolism in favor of literal statement.

    When an interviewer asked him why he wrote poetry, he said — disarmingly and wittily — because he liked it better than prose. He thought “confessional” poetry too naked to attain universality. He never stopped revising his poems in the direction of greater concision, greater symbolic power, and greater objectivity. Famous for his powerful sequences of African-American history — “Middle Passage,” “John Brown” — he is justly remembered in most anthologies for the inexpressibly moving “Those Winter Sundays,” an elegy for his laborer foster-father. “Sundays too my father got up early,” it begins, with all the emphasis on the accented “too” — he “got up early” as a kindness to the sleeping family in the cold house, making “banked fires blaze.” “No one ever thanked him”: that is the line of the poem that nobody can ever forget.

    Once Hayden learned to read — by himself, at three — he read intensely and passionately in the major British and American poets. One can see him, over a lifetime, experimenting with nearly all poetic genres: nature lyrics, elegies, sequences, allegories, ballads. When he looked to African American predecessors, he saw some of them writing in dialect, others creating new folk ballads, still others choosing the high language of the canonical English lyric. He would learn from them, but equally from Whitman, Crane, and Auden (who taught Hayden at Michigan). Just as Elizabeth Bishop would not allow her poems to appear in single-gender anthologies because she took herself to be an American poet, not a “female poet,” so Hayden always believed himself to be an American poet among other American poets. For him, the democracy of literature could not countenance partisan hostilities, nor could the brotherhood of human beings conceive of exclusions within the company of artists.

    Born in Detroit in 1913 and named Asa Sheffey by his birth-parents, the poet was given away, but not abandoned, by his mother when she moved to find work. He was raised (but never adopted) by a neighborhood family, the Haydens, and subsequently went by the name Robert Hayden. He came to feel that his foster-family meant well by him; his father did not obstruct his intellectual desires, and saved to help him through college, but it was a teacher, a librarian, and a social worker (assigned to the Haydens when they were on welfare) who saw something unusual in him and encouraged him. In his prose, he was candid about his difficulties in school; with his thick glasses, his poor sight, and his love of poetry, he was called “nigger, Four-Eyes, sissy.” In view of the violent racial divisions of American life, which he experienced from childhood with unavoidable pain, he thought that an artist had to cultivate a strict objectivity in social observation. He supported himself all his life by teaching. For twenty years he remained at Fisk (teaching fifteen hours a week, a taxing load for a conscientious teacher), and thereafter he closed his career at the University of Michigan. In 1976, the Bicentennial Year, he was appointed Consultant in Poetry to the Library of Congress (a congratulatory post now renamed, more accurately, Poet Laureate). The final triumph of Hayden’s personal and impersonal objectivity was “[American Journal],” composed in 1976 as the Phi Beta Kappa poem for the University of Michigan and placed as the final work in his Collected Poems. You can hear Hayden read it in his quiet and musical voice on a tape he made for the Library of Congress in 1978, two years before he died, early, at 66, of cancer.

    Lolita Now

    After almost three-quarters of a century, how are we now to think about Lolita? It may well be the most commented-on novel written in English in the past hundred years, alongside Joyce’s Ulysses. In the case of Ulysses, the imperative for commentary is chiefly a consequence of the invitation to exegesis generated by that novel’s dense network of allusions and the multiple complexities of its structure. In fact, Alfred Appel, Jr., in the introduction to his splendid Annotated Lolita, has observed certain affinities between Lolita and Ulysses in the centrality of parody for both novels, in their resourceful deployment of popular culture, and, of course, in their shared elaborate mobilization of literary allusions. Nabokov, we should recall, was a great admirer of Ulysses, and Lolita has its own formal intricacies, which have been duly explicated by much apt criticism ever since its initial American publication in 1958.

    Yet the more obvious reason why Lolita has elicited so much commentary through the years lies in the moral questions raised by its subject. The crudest notes of the discussion were first struck by readers who imagined that the author must be a pervert and that the novel he wrote was altogether a sordid thing. In more sophisticated guise, some conservative critics, such as Norman Podhoretz, have contended that Lolita may corrupt morals and must be approached with caution by right-thinking people. Inevitably, the novel has also been excoriated by the feminist Left. In her diffuse but influential article “Men Explain Lolita to Me,” Rebecca Solnit seems to classify Lolita (her meaning is a bit opaque) as one of the books that “are instructions on why women are dirt or hardly exist at all except as accessories.”

    Serious considerations of the novel have properly dismissed all such views, and, indeed, many of the earliest critics recognized it as a literary achievement of the first order of originality (but not Nabokov’s erstwhile friend Edmund Wilson, who thought it regrettable). Indeed, powerful and persuasive arguments have been made for the moral character of the book, and these need not be repeated here.  

    What may be at issue for readers of Lolita in the twenty-first century is how to regard the book in an age when our culture has become so conscious of the sexual exploitation of children and of women in general, young or otherwise. This is, of course, a social problem that is alarmingly widespread and deserving of urgent reform, but it must be said that the public exposure of certain especially egregious cases has led much of the public to hair-trigger responses to any activity that is even obliquely related to such appalling exploitation. It is a sign of our confused and simplified and sanctimonious times that Dan Franklin, the editor-in-chief of the esteemed London publishing house Jonathan Cape, has declared that he would not publish Lolita if it were submitted to him now. His judgment stems from an acute nervousness about how thirty-year-olds on his company’s acquisition team would respond if he proposed publication, as he himself has said.

    Is the new awareness of sexual harassment likely to make it altogether uncomfortable to read the first-person narrative of a middle-aged male who repeatedly, extravagantly, and at times brutally commits carnal acts with a pubescent girl who is quite helpless to free herself from him? Novelists, of course, have not infrequently chosen to write books about deviant, criminal, or murderous characters — Humbert Humbert is all three — but the sexual exploitation of a child surely touches a raw nerve, especially now. (One highly intelligent reader, recently reading Lolita for the first time, told me that he could see it was a brilliant novel but found it difficult to stick with it because of the subject.)

    I would like to suggest that the way Humbert’s story is constructed anticipates this sort of discomfort, in a sense even aligning itself with the discomfort. Devoted as he was to the supreme importance of art, Nabokov had been concerned since his Russian novels with the phenomenon of the perverted artist, the person who uses a distorted version of the aesthetic shaping of reality to inflict suffering on others. Humbert Humbert is only his most extreme representation of such distortion. The perversion of the artistic impulse is a vital subject for Nabokov precisely because art matters so much to him.

    The first thing that should be noted about the treatment of this subject in Lolita is that Humbert Humbert clearly regards himself as a monster, repeatedly emphasizing his own monstrosity. This goes along with the fact that he is insane, as he frankly admits, and that he has been several times institutionalized in asylums. Humbert’s assertions of his own moral repulsiveness abound in the novel. “I am,” he says of himself early in his story, as a boarder in the Haze home, “like one of those pale inflated spiders you see in old gardens. Sitting in the middle of a luminous web and giving jerks to this or that strand.” With Lolita tantalizingly sitting in his lap on the Haze davenport, he invokes a familiar fairy tale that here will have no happy ending as he wriggles in order “to improve the secret system of tactile correspondence between beast and beauty — between my gagged, bursting beast and the beauty of her dimpled body in its innocent cotton frock.” Humbert’s framing of this allusion altogether reduces the man to his imperious sexual member. And as we shall see from other citations, he has a clear awareness that his absconding with Lolita is bound to have dire consequences for both. 

    When he finally consummates his lust for Lolita, he declares that it was she who seduced him, not an altogether improbable claim given her sexual precociousness, but she on her part says, fearing that he has torn her internally — though it is unclear whether she might be merely joking — that she ought to report him to the police for rape. At least in a moral sense as well as in the statutory one, this could be quite right. The year-long frenzy of sexual gratification with a sometimes reluctantly submissive, sometimes resistant, twelve-year-old has its particularly sordid moments beyond its intrinsic sordidness, as when Humbert insists on sex when Lolita is running a high fever or in his repeatedly bribing her with magazines and treats to make herself available to his insatiable desire. Humbert’s admission of all this repeated abuse culminates near the end of the novel in his often cited recognition, as he watches school children at play, that he has deprived Lolita of her childhood. But a summarizing assessment of what he has perpetrated in the throes of his obsession occurs earlier, as he and Lolita head back east in his car:

    We had been everywhere. We had really seen nothing. And I catch myself thinking today that our long journey had only defiled with a sinuous trail of slime the lovely, truthful, dreamy, enormous country that by then, in retrospect, was no more than a collection of dog-eared maps, ruined tour books, old tires, and her sobs in the night—every night, every night—the moment I feigned sleep. 

    Here the defiling of America and the defiling of Lolita are virtually interchangeable. This self-revelatory moment, coming at the end of a chapter, is very telling in two ways — first the invocation of slime, cognate with the earlier image of the spider, to indicate the repulsiveness of this sexual odyssey, and then, at the end of the little catalogue of the detritus of the journey, interwoven with it and constituting Humbert’s first report of this wrenching fact, Lolita’s sobbing through it all, night after night.

    If Lolita were nothing but this, it would merely be a riveting and also unappetizing representation of a sexually obsessed madman. Yet what is enacted in the novel is more complicated and more interestingly ambiguous. In the afterword that Nabokov wrote to Lolita in 1956 to accompany the American publication of excerpts in The Anchor Review, he offers a curious origin for the idea of the novel. When he was laid up with illness in Paris in 1940, he came across a newspaper story about a caged ape in the Jardin des Plantes that had been given charcoal and paper and produced a sketch of the bars enclosing it. (One thinks of Rilke’s famous poem about the panther in the Jardin des Plantes: “It seemed to him there were a thousand bars / and behind those bars no world.”) The ape inspired Nabokov to write a Russian story with a plot roughly like that of Lolita, but, unhappy with the piece, he destroyed it.

    What does an ape in a cage drawing his prison have to do with Lolita? The obvious answer is that Humbert Humbert’s predicament is of a man hopelessly imprisoned by his obsession. The narrative he produces is the representation of his prison, which is not an enclosure of vertical bars but rather an alluring and also vulnerable girl whom he has desperately fixed as the object of his desire. This transformation of a cage into a sexual obsession has a double effect: Lolita as its object is repeatedly celebrated in radiant prose as a thing of beauty, and the reader is led to perceive Humbert not only with horror but also with a qualified kind of sympathy, as a man hideously trapped in his own impulses that inflict grave harm on someone he comes to love and that in the end destroy him. It is relevant in this connection that the Russian story Nabokov discarded ended with the suicide of its perverted protagonist. The central paradox of Lolita, and one of the effects that makes it a great novel and not just the story of a psychopath, is that one simultaneously recoils from its narrator and is drawn into both the anguish and the lyric exuberance of his point of view.

    Especially in regard to the second of these contradictory responses, the extraordinary style of the novel surely takes the book well beyond the fictional case-study of a madman. Nabokov himself characterized the book as his love affair with the English language, and there are few other novels since Joyce that deploy its resources with such pyrotechnic virtuosity. In the famous first paragraph, which is a spectacular prose poem, Humbert ends by saying, “You can always count on a murderer for a fancy prose style.” Humbert, with his inventor standing firmly behind him, is wonderfully having it both ways: the extravagance of the musical prose might push to the brink of excess, and Humbert is perfectly aware of this, yet the prose is glorious and is surely a part of the reader’s enjoyment of this troubling story. This is the narrative of a man repeatedly doing something morally ugly conveyed in language that is often quite beautiful. The contradiction between subject and style poses a certain moral dilemma for readers, who may well relish the novel and at the same time feel uneasy about the delight they take in it. Perhaps that double-take was part of Nabokov’s intention.

    For a characteristic instance of this tricky balancing act, let us return briefly to Humbert on the davenport in the Haze home with Lolita, who is evidently unaware of his sexual excitement, sitting in his lap. As he approaches climax, deliberately prolonging the pleasure, he says, in a phrase that shrewdly defines his relationship with the pre-pubescent girl, “Lolita had been safely solipsized.” He continues in his habitual extravagant style:

    The implied sun pulsated in the supplied poplars; we were fantastically and divinely alone; I watched her, rosy, gold-dusted, beyond the veil of my controlled delight, unaware of it, alien to it, and the sun was on her lips, and her lips were apparently still forming the words of the Carmen-barman ditty that no longer reached my consciousness. Everything was now ready. The nerves of pleasure had been laid bare….I was above the tribulations of ridicule, beyond the possibilities of retribution. In my self-made seraglio, I was a radiant and robust Turk, postponing the moment of actually enjoying the youngest and frailest of his slaves. 

    This entire scene is the most explicitly sexual moment in the novel — after this, Nabokov pointedly refrains from explicit representations of sex — but it is also something rather different. The murderer’s fancy prose is exquisitely orchestrated in a virtually musical sense, the passage beginning with a spectacular set of alliterations that also incorporates a rhyme: “The implied sun pulsated in the supplied poplars.” The sun is “implied” probably because Humbert, totally focused on Lolita and his pleasure, is not directly observing the sun and the “supplied” poplars on which it is shining, though he does notice the sunlight on her lips. Beyond that detail, Lolita’s presence, radiant for Humbert, is evoked only in the brief phrase “rosy, gold-dusted” because Humbert is completely concentrated on his own sexual excitement.

    The verbal pyrotechnics of the kind one sees here, which are abundantly deployed throughout the novel, are surely a source of delight for readers, perhaps even eliciting a certain sense of admiration for Humbert’s “sensibility” or his inventiveness, though the acts he performs trigger moral revulsion. The novel’s perverted protagonist is manifestly a man of high culture — and, at the same time, following the precedent established by Joyce, avidly attentive as well to popular culture — and so this passage, like so many others in the book, spins a web of allusions in its very representation of sexual arousal. The invocation of Carmen, one of several in the novel, probably refers to Mérimée’s novella rather than to the opera based on it, as Alfred Appel, Jr. plausibly suggests, thus conjuring up from fiction a young and sexually alluring woman, here appearing in a silly ditty. Humbert as a Turk in his seraglio, depicted in still another alliterative chain (“In my self-made seraglio, I was a radiant and robust Turk”) taps into an old cliché of Western culture in which the Orient is figured as a theater of exotic sexual license. Leopold Bloom plays more than once with this same Orientalist notion.

    Again, I think that the articulation of Humbert’s fantasy produces a double effect. A reader may enjoy the exuberance of his inventiveness, but surely what the fantasy reveals about his intentions is repugnant. What is especially telling is the phrase “enjoying the youngest and frailest of his slaves.” Presumably, this compliant or helpless victim of the Turk Humbert’s lasciviousness is almost or actually a child, and the fact that she is the “frailest” of the female slaves in the seraglio betrays his awareness of Lolita’s vulnerability, an aspect of her that may well pique his twisted desire. What I have characterized as the balancing act of Nabokov’s prose in this novel is abundantly evident here.  

    I would like to offer a final example of the odd allure created by Humbert’s writing, a passage in which the sheer literariness of the writing is especially prominent. It is Humbert’s first sighting of Lolita, peering at him over her dark glasses as she sunbathes on the patio. Her appearance will present to Humbert, or so he claims, the very image of Annabel Leigh, his first love met on the Riviera when both were still pre-teens, and then forever lost to him through an early death:

    It was the same child—the same frail, honey-hued shoulders, the same silky supple bare back, the same chestnut head of hair. A polka-dotted black kerchief tied around her chest hid from my aging ape eyes, but not from the gaze of young memory, the juvenile breasts I had fondled one immortal day. And, as if I were the fairy-tale nurse of some little princess (lost, kidnapped, discovered in gypsy rags through which her nakedness smiled at the king and his hounds), I recognized the tiny dark-brown mole on her side. With awe and delight (the king crying for joy, the trumpets blaring, the nurse drunk) I saw again her lovely indrawn abdomen where my southbound mouth had briefly paused; and those puerile hips on which I had kissed the crenulated imprint left by the hem of her shorts—that last mad immortal day behind the “Roches Roses.” The twenty-five years I had lived since then tapered to a palpitating point, and vanished.

    The idea of a formative experience in early life imprinting itself so indelibly on the psyche that the person becomes its lifelong captive is, as quite a few commentators have noted, Nabokov’s mockery of the Freudian notion of the causation of sexual pathology by childhood trauma, a notion he famously despised. Given that it plays an altogether determinative role in Humbert’s perversion, one must conclude that the “psychology” of the novel, based as it is on a parody of Freud, can scarcely be regarded as realistic. 

    It is, instead, a central instance in which playfulness is paramount in this representation of a sexual deviant, an unanticipated conjunction of manner and subject that may compel us to reconsider how to think about Humbert Humbert. In one respect, he is a powerful fictional representation of a disturbed person that one can readily relate to troubling manifestations of this kind of disturbance in the real world; in another respect, he is a kind of pawn in a wild literary game. One should note that the caged ape in the Jardin des Plantes breaks through the surface here in Humbert’s self-denigrating characterization of his own “aging ape eyes.” He proceeds to embark on the fantasy of the little princess kidnapped by gypsies — the introduction of gypsies ties in with the allusions to Carmen, who is a gypsy — comically casting himself, a figure of uncomfortably excessive maleness, as the nurse of the vanished infant.

    The story of the kidnapped child rediscovered in adulthood through the recognition of a birthmark is very old, originating in the Greek romances of Late Antiquity and continuing to lead a literary life in the Early Modern period and beyond. Fielding, for example, employs it in Joseph Andrews, birthmark and all, with the kidnappers there identified as gypsies, a European fantasy about them in that era. Nabokov, then, is playing not only with Freud but also with the contrivance of an old tale told many times since the Greeks. What may properly be described as the high jinks of Humbert’s consciousness, however tormented he often may be, is on display as he quickly switches roles from nurse to king, clearly the child’s father, crying for joy over her discovery. The fact that the nurse is imagined to be drunk at this moment is a wildly extraneous and incongruous detail, Humbert indulging in a riot of the imagination as he recreates this old story for his self-explanatory purposes.

    In regard to his function as a narrator of the novel, it should be kept in mind that Humbert speaks in two distinct and intertwined modes: his language reflects an obsessive and, indeed, deranged mind, as in the excessive doubled insistence on “immortal” in this passage; and it also deploys the extravagant resources of Nabokov, shrewd and witty observer and master stylist. One might note here the lovely precision of the adjective in “the crenulated imprint” and the wit of “my southbound mouth” to refer to the ultimate sexual destination toward which the mouth is traveling. The two twelve-year-olds on the Riviera, it seems, were going a step beyond ordinary pre-adolescent fooling around. The wonderful concluding sentence goes on to strike a distinctively Nabokovian note. It is strongly reminiscent of at least a couple of sentences in Speak, Memory, a book cast in its initial version not long before the composition of Lolita. The literary and, one could also say, stylistic recapture of the past was an urgent undertaking for Nabokov, splendidly achieved in Speak, Memory, and in Lolita the intellectual joke of a “Freudian” childhood experience becomes also, at least at this moment, an emotionally fraught and joyous realization of the past returned in all its luminous presence.

    This concert of surprising and vividly inventive effects in the passage, and elsewhere in the novel as well, leads me to propose an aspect of the readerly experience that one would certainly not expect in the narrative of a sexual abuser of young girls: for all the moral dubiety of the protagonist’s story, Lolita is a pleasure to read, and anyone who denies this is likely to be suffering from terminal moralism or bad taste. In this important regard, we should consider the essential role of parody in this novel, because parody is also not something generally associated with the fictional portrayal of psychopaths.

    Parody, of course, is pervasive in Nabokov’s novels. What its presence necessarily implies is that we must see the novel not as a direct representation of reality — a word, we should keep in mind, that for Nabokov must always be wrapped in scare quotes — but rather as a response to the world outside literature entirely mediated by literature, which is to say, both the novelist’s own literary contrivances and the variegated background of literary tradition on which he chooses to draw. As critics and scholars through the decades have abundantly shown, Nabokov constantly calls attention to the status of his fiction as literary artifice, executing what the Russian Formalists of the early twentieth century referred to as “laying bare the device.” Yet the double edge of this procedure as he practices it may be a little hard to get a handle on. Invitation to a Beheading and Bend Sinister are ostentatiously self-reflexive novels, but they are also serious engagements with the horrors of totalitarianism, whose potential for the wholesale extirpation of humanity was all too evident during the years when they were composed. Much the same is true of the totalitarian state fantasized by Kinbote in Pale Fire. Nabokov’s early novel The Defense abundantly calls attention to its own artifices, as we would expect, but it is also a wrenching representation of a genius trapped in the world of chess that is the vehicle of his genius. One could extend this Nabokovian catalogue of grave human predicaments, historical or personal, confronted through the medium of self-reflexive fiction.

    In Lolita, then, we get the probing portrait of a sexual deviant who kidnaps a girl-child and inflicts great harm on her that is conveyed through a novel which reminds us of its status as an invented fiction and plays quite exuberantly with literary tradition. Parody, again, is ubiquitous. It begins on the first page of the novel with the quotation from Poe’s “Annabel Lee,” a poem that lends the name Annabel Leigh to Humbert’s first love. Is she, after all, a “real” character in a novel or a kind of personified citation, Humbert living out the role of the male speaker in Poe’s poem? Allusions to Poe’s poem are scattered through the novel. The “winged seraphs” of the poem flit across these pages. Here is one especially telling instance: “I should have known (by the signs made to me by something in Lolita — the real child or some haggard angel behind her back) that nothing but pain and horror would result from the expected rapture. Oh, winged gentlemen of the jury!”

    The parodies and satiric references in the novel include Mérimée, A. E. Housman, T. S. Eliot, Arthur Conan Doyle, Pierre de Ronsard, and many other writers. The elaborate development of Clare Quilty as Humbert’s doppelgänger harks back to Dostoevsky’s The Double, the work of a writer whom Nabokov despised, as well as to Poe’s story “William Wilson.” Parody is also deployed generically, as in the old romance story of the kidnapped child discovered through a birthmark, or in the desperate, farcical physical battle between Quilty and Humbert, about which he himself observes, “elderly readers will surely recall at this point the obligatory scene in the Westerns of their childhood.” All these allusions and parodic elements have a paradoxical effect. Humbert is an appallingly twisted figure repeatedly operating in a literary landscape evoked through his own rich background in culture high and low. In the climactic scene with Quilty, we do not cease to see him as a violently jealous lover seething with rage against the man who has stolen his beloved girl from him, but the scene, with its plethora of parodic literary and cinematic references, is also hilarious: fun and horror are interfused and unmediated. The aesthetic does not usurp the ethical, but the ethical is made to co-exist with the aesthetic, and in this way the reader is made to read complexly, and is never let off the hook.

    Nabokov approaches two things with the utmost seriousness: the despicable act of sexually exploiting a child and the instrument of art through which the moral issue is represented. For all of the fun and games of his play with artifice, strong and moving emotions are expressed, as in the great moment near the end, when Humbert discovers the now pregnant Lolita with “her adult, rope-veined narrow hands and her goose-flesh white arms, and her shallow ears, and her unkempt armpits” (which earlier were called “pristine” when he watched her at tennis), and he can assert, “I loved her more than anything I had ever seen or imagined on earth, or hoped for anywhere else.” 

    The defining dimension of art in Lolita must be kept in mind. Parody and the overlapping practice of allusion are essential to the adventure of the novel at the same time that they point again and again to its status as a work of literature. Allusion itself is intrinsic to the dynamic of most literature: you would scarcely think of writing a story or a novel or a sonnet or an epic if you had no familiarity with other such works, and allusion, through which you insert your own writing in the body of its predecessors, remaking them and often challenging them as you invoke them, is a recurrent method for the creation of new literature. Parody may be thought of as a special category of allusion, usually in a comic and critical vein. These twin processes in Lolita constitute an implicit affirmation of the artfulness of the novel, of the pervasive operation in it of literary art. That art is of course manifested in the spectacular prose that Nabokov creates for his deranged narrator, at times deliberately over the top in keeping with his derangement but very often brilliantly original, witty, finely lyrical, and on occasion quite affecting. Here is a moment when Humbert introduces circus performance as a metaphor for art — the same trope will recur in Ada — that suggests how artistic skill can convey the plight of a pathetic and unseemly character, which is precisely what his author has done for him: “We all admire the spangled acrobat with classic grace meticulously walking his tight rope in the talcum light; but how much rarer art there is in the sagging rope expert wearing scareclothes and impersonating a grotesque drunk!”

    Lolita is the most troubling and touching representation of a morally grotesque figure in the fiction of the last century. At the very end of his painful story, in his prison cell, his death imminent, Humbert affirms that he has used his fleeting time to make Lolita “live in the minds of later generations.” He then goes on to proclaim these grand concluding lines: “I am thinking of aurochs and angels, the secret of durable pigments, prophetic sonnets, the refuge of art. And this is the only immortality you and I may share, my Lolita.” The very last word of the novel, as Alfred Appel has observed, is the same as the first, affirming a kind of architectonic unity for the novel as a whole. The reference in “aurochs” to the cave paintings of early man and in “angels, the secret of durable pigments” to Renaissance art set this narrative in the grand tradition of art going all the way back to prehistory, much of it still enduring. There is a certain ambiguity as to who is speaking here at the end. Of course, it has to be Humbert, reflecting on what turns out to be in the end the truly beloved human subject of his story as he senses his own end approaching. Yet his voice merges with Nabokov’s in the proclamation of the perdurable power of art.

    Humbert Humbert is not Vladimir Nabokov: the point is worth emphasizing in our cultural circumstances. And the real identification of the novelist with his protagonist is not in regard to Humbert’s perversion, as some readers of the book have misguidedly imagined, but in the celebration of art as a fixative of beauty and feeling, anguish and love — as a fixative of humanity. It is this, finally, that lifts Lolita above the currents of shifting attitudes toward sexual exploitation or toward sex itself. The novel is obviously not a case study in perversion, as the highly parodic foreword by the fictional psychologist John Ray, Jr. would have it. It is also something more than a riveting fictional portrait of a repellently disturbed person. A murderer may have a fancy prose style, but in this instance the prose style turns out to be both arresting and evocative, at moments sublime, leading us to experience through the moral murk of the narrator a great love story that seeks to join the company of the cave paintings of Lascaux and the sublime angels of Giotto and Raphael, and nothing less.

    The Scandal of Thirteentherism

    Amendment XIII
    Section 1.
    Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.
    Section 2.
    Congress shall have power to enforce this article by appropriate legislation.

    In our age of roiling discontent, liberalism and its historical achievements are under assault from all sides. For the past four years, Donald Trump had little use for truth, science, progress, mutual respect among races and identities — all the liberal ideals embodied in the founding documents and embedded in the history of American politics. Despite overseeing the military that long ago defeated the Confederacy, Trump made the Lost Cause his own, becoming the protector of Confederate monuments and place names, and this support has gained him the appreciation of white nationalists and other “very fine people” like the ones who marched in Charlottesville. Trump had little use for the colorblind state that liberals associate with the Party of Lincoln.

    Even with Trump out of the Oval Office, Trumpism continues to be the perfect ideological provocation for those on the other side now questioning America’s central political tradition. It sets the mood for their revisionism. At war with classical liberalism and “neo-liberalism” alike, the progressives are busy rewriting American history. They want a past that reflects their dim view of the American record and justifies certain policies to address racial grievances. American history, they now instruct, is dominated by topics that liberals allegedly marginalized, including settler colonialism, slavery, white supremacy, whiteness, and peoples of color. The editor of the eminent American Historical Review writes that he aims to “decolonize” American history. Ibram X. Kendi’s book Stamped from the Beginning described racism as our very origins. Reducing four hundred years of black history to victimhood, the New York Times’ 1619 Project echoed this sentiment. Racism explains slavery, which in turn explains the American Revolution and much else worth knowing about American history. Internal conflicts among whites — based on religion, ethnicity, or class — hardly explain anything, and there is certainly nothing exceptional about America.

    Rather than claiming their own version of the liberal tradition articulated in the Declaration of Independence, the Reconstruction Amendments, the promise of the New Deal, and the Civil Rights Acts of the 1960s, the progressives play up the failures and the betrayals of previous generations of liberals, even as they are suspicious or grudging about the Biden victory. Unwittingly taking their cue from the Nation of Islam, they view American liberalism itself as a species of white supremacy, national in scope and operation: in their view, white supremacy is not an aberrant tradition rooted in the American South, as most twentieth-century liberals saw it. They feel little solidarity with American liberals, except those they have dubbed radical and incorporated into what they call the “black radical tradition,” especially Frederick Douglass, Ida B. Wells, and Martin Luther King, Jr. Like some of the activists in the street, they would topple Jefferson, Lincoln, and Grant along with the Confederate generals. They see liberalism, past and present, as a huge obstacle to the remaking of America into what amounts to a fully integrated society with a social welfare state for all.

    One of the pillars of American liberalism under assault is the Thirteenth Amendment. Many Americans now believe that slavery never ended — not despite but because of the amendment that fulfilled the promise of the Emancipation Proclamation. In the words of Bryan Stevenson, the head of the Equal Justice Initiative turned historian, slavery never ceased; it merely “evolved.” In his thinking and that of other Thirteenthers, it was the great amendment of 1865 that led to the re-enslavement of black people and mass incarceration. The key to understanding its “evolution” is the exception clause in the amendment, which ended slavery and involuntary servitude “except as a punishment for crime.” Under cover of those words, the Thirteenthers claim, ownership of slaves shifted from individuals to the state, even as the Thirteenth Amendment gave the American people, especially its newly freed people, the false impression that America had ended slavery once and for all. Some Thirteenthers do not simply believe that the amendment led to mass incarceration; they also hold that the loophole represented a diabolical scheme concocted by whites as a race. What all Thirteenthers share is the belief that the loophole created a seamless history of black slavery from the seventeenth century until today.

    When a person of Stevenson’s commitment and stature gives such a dim appraisal of the efficacy of an amendment signed by Abraham Lincoln, attention must be paid. In his crusade to link mass incarceration to the Thirteenth Amendment, he is not alone. A wide array of historians, cultural studies scholars, activists, and artists have endorsed this view in full or in part, including Henry Louis Gates, Jr., Kimberlé Crenshaw, Khalil Muhammad, Alex Lichtenstein, Kanye West, and Ava DuVernay. Whatever chance this interpretation had of burning out on its own disappeared when DuVernay’s documentary 13th took the nation by storm in 2016. It is now taking root in the nation’s schools: after watching DuVernay’s film, my students believed that the convict lease system (about which more below) re-enslaved most blacks. They were shocked to learn that the percentage was much less than one percent.

    An idea born in the 1960s has become a popular and pseudo-scholarly belief that many want to use as a basis for making public policy. Not many have gone as far as Kanye West, who — with all the erudition at his disposal — has called for the repeal of the Thirteenth Amendment. Most Thirteenthers aim for an amendment to close the loophole. Their objective is to put an end to mass incarceration, which is a fine objective. But the key to ending it, they suggest, lies in removing its supposed economic justification — black prison slavery.

    Thirteentherism is best viewed as another episode in a long tradition of using history as a weapon in a political struggle. At times, the distinction between historical truth and propaganda gets lost. Yet in keeping with our era, bad history and worse social science have replaced truth as the intellectual underpinning for a great deal of thinking about social change. Rather than making the incontrovertible case that mass incarceration is an inherent evil, Thirteenthers seek to hitch their cause to the moral opprobrium that already exists against chattel slavery. They have little use for differences and distinctions, and simply wish to call incarceration slavery. Never mind that Americans of African descent have always held historical truth as sacrosanct, believing that the dispelling of falsehoods is the proper foundation for black people’s progress. Thirteentherism breaks with that black historical tradition of truth telling, hoping to end convict slavery and in the process misrepresenting some of the most momentous changes in American history.

    The intellectual origins of Thirteentherism lie in the intellectual ferment of the 1960s. Prisoners commonly described themselves as slaves, whether on the prison plantations in the South or the workshops in other regions, since they all worked for little or nothing. It took an epiphany by Lee Wood, a prisoner in the California system, to link the Thirteenth Amendment to his condition. As part of a radical reading group, Wood read the amendment aloud to his comrades, and the loophole — “except as punishment for crime whereof the party shall have been duly convicted” — suddenly appeared to explain his plight. Few would do more to spread the idea. Once he had served his time, Wood dedicated himself to ending this “slavery,” founding the Coalition Against Prison Slavery (CAPS). He became well known for spreading literature on the role of the Thirteenth Amendment, and with funding from the Quakers he published, along with his wife, a short volume tracing the history of prisoners as slaves.

    Wood’s idea of removing the loophole from the Thirteenth Amendment gained some traction in prison activist circles by the mid-1970s. He was not so much interested in ending imprisonment as he was in ending the exploitation of prisoner labor. Not only did CAPS receive funding from the Quakers; Wood also got the American Friends Service Committee to endorse his idea of removing the exception clause from the Constitution. From CAPS, the idea spread. In 1977, the New Afrikan Prisoners Association in Illinois petitioned the United Nations: “We protest the 13th amendment which legalizes slavery…” In 1980, William Coppola, a prisoner in Texas, cited the amendment as proof that slavery was alive and well in America. According to increasing numbers of prisoners, the Thirteenth Amendment had done them dirt. Not only did it not end slavery, it created more of it.

    By the 1990s, the intellectual influence of prisoner advocacy spilled over into academic circles. Before joining the professoriate, Alex Lichtenstein worked on behalf of prisoners, then followed his interest into scholarship. He published a history of convict leasing, called Twice the Work of Free Labor. Notable mostly for its interpretation that the system contributed greatly to the industrialization of the South, the book promoted the Thirteenther view of the amendment to end slavery: “Ironically, this [convict lease] system emerged immediately on the heels of the passage of the Thirteenth Amendment to the Constitution, which intended to abolish bondage but permitted involuntary servitude solely as a punishment for crime.” He named one chapter, “New South Slavery,” and another, “Except as a Punishment for Crime.”

    Thirteentherism gained most of its academic visibility and activist credibility through the writing of Angela Davis, who embodies the continuity between the prison activism of the 1960s and the modern prison abolition movement, which seeks to make prisons obsolete here and abroad. Her academic labors made cultural studies a central venue for the study of “the carceral state.” Reminiscent of W. E. B. Du Bois, who laid the seed for whiteness studies with a passing comment about how whites benefited from black oppression, Davis wrote an essay on Frederick Douglass’s failure to oppose convict leasing and other forms of labor oppression. Of the amendment’s clause, she wrote: “That exception would render penal servitude constitutional — from 1865 to the present day.” As if this would have been impossible without the clause, she went on to say, “That black human beings might continue to be enslaved under the auspices of southern systems of justice (and that this might set a precedent for imprisonment outside of the South) seems not to have occurred to Douglass and other abolitionist leaders.” After her essay, the Thirteenth Amendment’s loophole became intellectually important.

    Wood, Lichtenstein, and Davis see a constitutional power in the Thirteenth Amendment to establish convict or prisoner slavery, yet they know that the various British colonies and American states had exercised legal authority to create systems of convict slavery. They often carry on as if the amendment was meant for blacks only — the original post-Civil War black code, if you will. But before and after the founding of the United States, convicts had been forced to labor against their will without recompense. During the colonial era, more than 50,000 white convicts were given the most extreme Hobson’s choice: an indenture (contract) to slave for a term of years in British North America or to be put to death for their crime. They were often sold to work for masters at the auction blocks where Africans were sold, and both types of slaves, convict and chattel, were known to run away together.

    The American Revolution ended the importation of white convicts as slave labor, but the new sovereign states all put those deemed criminal, regardless of racial designation, to work without compensation in one form or another. In the new penitentiaries some worked directly under the supervision of the state, others worked at the prisons under the control of lessees, and others still off site. By the end of the Civil War, the power of colonies and then states to inflict involuntary servitude or slavery for a term on whites and others as convicts had existed for over two hundred and fifty years — a period longer than the age of chattel slavery. Of those who see a white conspiracy to re-enslave blacks as convicts, an obvious question must be asked: why would Congress need to create a special constitutional amendment for blacks to make convict slaves of them? The colonies and states had done that very thing to whites for centuries. The exception clause merely recognized the existing police power of the states.

    The history of white convict slavery notwithstanding, Thirteenthers often treat the amendment as a federal black code that applied uniquely to the freed people and blacks in general. Among many others, Lichtenstein and Davis suggest as much when they imply that something special could have been done in the language of the amendment to prevent the criminalization and re-enslavement of the freed people. In their account, the language as it stands empowered the Southern planters, and they point to a Southerner or two who read the amendment as a veritable black code for the treatment of freed people. For an alternative that could have made things different, Thirteenthers point to Senator Charles Sumner’s attempt to offer a different version of the amendment that outlawed slavery and made no mention of crime and punishment. Many believe that his amendment without the exception clause would have changed history. 

    Yet Sumner simply wanted an amendment that clearly embodied the abolitionists’ belief that blacks and whites would be free and equal under the law. Removing the exception clause would have ended chattel slavery, but it would have left convict slavery — for blacks and whites alike — in place. Criminalization and imprisonment are fully compatible with Sumner’s desired wording of equality under the law. Racism would still have adversely impacted blacks at the hands of the states, as it had plagued antebellum free blacks, North and South. Indeed, something more than an amendment touting equality under the law was needed.

    The Republican-dominated Congress was interested in ending chattel slavery, nothing more, nothing less. They decided on the language that had been an effective chattel-slavery killer since 1787. The exception clause had become part of American federal law when Congress passed the Northwest Ordinance in 1787. Congress prohibited chattel slavery in the territories ceded to the federal government — except for those slaves found guilty of crimes, who could be subject to “involuntary servitude or slavery.” Thomas Jefferson, who most likely drafted the provision, wanted to end the expansion of chattel slavery. Congress required the exception clause as part of every constitution submitted by territories to enter the union as a free state. Over time, Jefferson’s proviso, as Sumner called it, ended chattel slavery wherever it was enshrined in a state constitution. It also made clear that Congress was not usurping a new state’s police power to punish criminals in a manner consistent with the original thirteen states — some of which, as colonies, included enslavement.

    Apparently few Republicans, including Sumner, understood that the exception clause allowed for a term of slavery for a conviction. The potential difference between Sumner’s equality under the law proposal and the loophole version became clear immediately. In early 1866, the United States Army quashed the enforcement of a black code in Virginia that allowed freed people who did not sign a contract to be sold as slaves for a term. In November 1866, a judge in Anne Arundel County, Maryland, sentenced three black people to be sold for a term of service to local planters. The decision alarmed Sumner and other Republicans. The judge, in effect, was seeking to apply the old free person of color laws. The sentences were never carried out and the judge did not ultimately face prosecution. Yet Maryland was soon forced to remove its discriminatory laws. The loophole for any form of chattel slavery, even for a crime and even for a term, was closed. The Thirteenth Amendment was emphatically not a black code.

    To abolish convict slavery without compromising the principle of equality under the law, the Republican-dominated Congress would have had to pass a version of the Thirteenth Amendment, or an additional one, that explicitly forbade convict slavery, not just chattel slavery. The new fundamental law would have done for whites what Thirteenthers wish it had done for blacks. It would have reduced the states’ police power to decide the appropriate punishment and pushed the costs for prisons entirely onto state taxpayers. In her impressive book The Crisis of Imprisonment, Rebecca McLennan laments the failure of the Framers to end prison labor. She points out the ubiquitous tensions within states between, on the one hand, barring unfree, unpaid prisoner labor from competition with free labor and, on the other, meeting taxpayers’ need to defray the cost of a penal system.

    A vote to end convict slavery in the Thirteenth Amendment likely would have divided Congress and ultimately the nation. Northern and border states would probably have been unanimously opposed to an additional section to the Thirteenth Amendment that usurped state power and left them with an expense. Even the version that retained the state’s power to use prisoners as involuntary or slave labor did not have universal support among Northern and border states; Delaware and Kentucky did not ratify until much later as it was. Texas and Mississippi held out. It was impossible to reach the necessary twenty-seven states to ratify the amendment without two of the rebellious states on board. With greater opposition from Northern and border states, a southern state movement to unite against the Thirteenth Amendment might have succeeded — but such an outcome was unimaginable. The political appetite to end convict labor, however it was defined, did not exist.

    For the conspiracy theorists among the Thirteenthers, this insistence upon the limits of the politically possible will simply be received as further proof of an alliance between Northern and Southern whites. In their thinking the loophole is not happenstance, but a plan that allowed whites to catch and re-enslave black people. As early as 1977, the New Afrikan Prisoners Association in Illinois, in a petition to the United Nations against the Thirteenth Amendment, wrote, “It was never the intention of the rulers of the u.s. to ‘abolish’ slavery.”

    The other elements of the Thirteenthers’ re-enslavement plot are the black codes and the convict lease system. Most professional historians, including Thirteenthers such as Lichtenstein, know two things about the black codes that the amateurs ignore: first, that the aim of the black codes was to push blacks back onto the plantations, not into jails or prisons; and second, that the black codes lived and died before the rise of convict leasing as a system. The Civil Rights Act of 1866, various court decisions, and the Fourteenth Amendment eliminated them. In most states, the convict lease system started years after the black codes had been outlawed. Just as Southerners did not need a loophole to create the convict lease system, they did not need black codes to discriminate against black people and to convict them of crimes. In the Thirteenthers’ narrative, the exception clause and the black codes are best understood as narrative devices to enhance the effect of their propaganda.

    Although historians of convict leasing now argue that it served the industrializing New South, not the cotton, tobacco, or rice planters, Thirteenthers often cannot shake the image of their imaginary black codes being used to send the re-enslaved ex-slaves back to their former masters on the plantation. No less a figure than Henry Louis Gates, Jr. has produced a short video in which he argues that convict leasing and the black codes were part of a “labor system that took shape in the late nineteenth century [and] developed coercive means to ensure that cotton remained king.” The convict lease system is the indispensable element in the Thirteenthers’ narrative, and every effort is made to play up its size, its duration, and its profitability. They use percentages to show that the prison population shifted from white to black in a decade or so after the end of chattel slavery. And they emphasize the growth of the black prison population, how quickly it doubled. 

    In both cases, no effort is made to explain how a system largely closed to blacks in the antebellum years would show dramatic annual increases without much change in the size of the prison population. And little attention is paid to the size of the system throughout its duration. Instead the impression is given that re-enslavement captured a huge percentage of the black population. I repeat: the historical truth is that it captured less than one percent.

    The focus on the late nineteenth century gives a false image of incarceration then and now. In Georgia, where we have the best numbers, about one third of one percent of the black population was imprisoned for most of the convict lease era. In 2017, by contrast, 1.4 percent of the black Georgia population was in state prisons. That is more than four times the rate of the convict lease era. This tracks well with the nation as a whole in the era of mass incarceration, when at its peak, in 2006, 1.5 percent of the black population found itself in the prison system.

    The small, brutish system of convict lease proved to be shorter in duration than the Thirteenthers suggest. They point out that Alabama’s system existed until 1928, but rarely, if ever, do they note that it was an outlier. In the 1890s, Virginia, Tennessee, South Carolina, North Carolina, and Mississippi ended theirs. By 1913, only Florida and Alabama were engaged in leasing. DuVernay’s film gives the impression that convict leasing and lynching caused the Great Migration out of the South, making blacks “refugees.” Yet before the start of World War I convict leasing was already a moribund institution, barely a shadow of the monster it had been, and lynchings were in decline. Ironically, the white supremacist governments that brought the nation the illiberal institutions of state-mandated segregation and black disenfranchisement ended the system most associated with chattel slavery. Moreover, they put out of business the only profitable penal system in American history.

    While trading on images of the Southern prison structure — convict leasing, the chain gangs, the prison plantations — Thirteenthers ignore the form of convict slavery that engulfed most prisoners in America from emancipation forward. In the North and the West, the prisons predominated through most of the nineteenth and twentieth centuries, but in the Thirteenther narratives they simply do not appear, because the amendment is treated as a federal black code, enslaving only blacks, regardless of work life. In non-Southern prisons, leasing out prisons and prisoners stopped in the late nineteenth century, and production under prison control for state use became the system. With northern migration, blacks found their way into them. Undoubtedly, racism resulted in harsher treatment, but it did not Southernize the prison regimes as the Thirteenthers suggest. Convict leasing, road chain gangs, and prison plantations did not appear. Racism abounded, but it was hardly new in the West or the North.

    Together the Northern and Western prisons dwarfed the Southern system in size and scale. Before the rise of mass incarceration, roughly a third of all black prisoners were serving time in them. By that time prisons were on the brink of rebellion, but the nature of prison life was different. Despite the inherent repression in all prison life, black convicts in the North and West found time, like their white counterparts, to pursue self-improvement. Malcolm X and many others like him became autodidacts with the assistance of prison libraries. Some received more formal education through vocational programs. If Northern and Western prisons produced white writers, they also produced black ones such as Chester Himes and Eldridge Cleaver. It was in federal prison that Angelo Herndon wrote his autobiography Let Me Live.

    To maximize the propaganda value of black men in conditions reminiscent of chattel slavery, the Thirteenther narrative ignores the growth of black incarceration outside the South, hinting that Southern ways moved north. Yet the rise of penitentiaries in the South, along with the decline of the road-building chain gangs, suggests that the Southern penal system increasingly became more like the rest of the country.

    Having used the black codes and convict leasing to create the impression that the Thirteenth Amendment had subjected black people to massive, profitable, and brutal re-enslavement, Thirteenthers continue their discussion into the true age of mass incarceration, from the 1960s forward, as if nothing of substance had changed since 1865. Little thought is given to the inclusion of Hispanic bodies among the slaves, and white prisoners remain merely unfortunate by-products caught in the nets of a system that was designed to enslave blacks. The image presented is that of the state raking in profits from selling black labor to Fortune 500 corporations, consuming the fruits of black labor in prison industries and the various and sundry centuries-old plantations in the South. The truth is that most “convict slaves” are actually idle, and that the state and federal governments make revenues but never profits. All this seems wholly lost in the conversation.

    And as serious scholars know, the origins of the expensive and unprofitable system of mass incarceration are to be located in the changes of the 1960s, not the 1860s. Thirteenthers have little use for the works of scholars such as James Forman, Jr. and Elizabeth Hinton, who see mass incarceration arising largely from party politics and political choices made by politicians and communities, including African Americans. They do not take seriously scholars such as Ruthie Gilmore, who argues for examining the political economy — not narrow politics or the pursuit of revenue from prison labor — to explain the rise of mass incarceration. These are traditional scholarly debates, and so they lack a grand narrative of slavery or an explosive Jim Crow metaphor. They are not useful for propagandists.

    Having penetrated the academy, popular culture, social media, and the classrooms, Thirteentherism has also become a basis for social activism and policymaking. Lee Wood’s old CAPS agenda of ending prison slavery by removing the loophole from all American constitutions has been taken up by many. An increasing number of activists believe that by removing the “profit motive” from mass incarceration, locking up millions of people would lose its rationale.

    Nationwide prison strikes have become almost annual occurrences. In 2016, the promotion and release of DuVernay’s film overlapped neatly with a nationwide prison strike to end the abuse of prison labor. The strike originated in Alabama; the prisoners leading it invoked the role of the Thirteenth Amendment in making them slaves and protested against their being forced to work with little or no remuneration. The strike involved more than twenty thousand prisoners in twenty-four prisons. In 2018, coinciding with the anniversary of the uprising at Attica, prisoners in seventeen states struck again and made ending prison slavery one of their ten demands. “The Thirteenth Amendment didn’t abolish slavery,” said the strike’s spokeswoman, Amani Sawari. “It wrote slavery into the Constitution. There’s a general knowledge that the Thirteenth Amendment abolished slavery, but if you read it, there’s an exception clause in the abolishing of it. That’s contradictory — that something would be abolished and there would be an exception to that.”

    Beyond the prison strikes of recent years, there has been ongoing pressure from activists to sever the purported link between constitutions, state and federal, and the use of convict slavery. Most of the calls from prison activists and reformers are for the country to “amend” the federal Constitution to end all forms of slavery. On August 19, 2017, for instance, the Millions for Prisoners March on Washington, DC proclaimed that “We DEMAND the 13th amendment ENSLAVEMENT CLAUSE of the United States Constitution be amended to abolish LEGALIZED slavery in America.”

    The activism on the ground had some impact on presidential politics in the recent election, but not much. Among the major candidates, only Bernie Sanders invoked the amendment. In making a case against the continuation of private prisons, he argued falsely and inexplicably that they had their origins in “chattel slavery.” After the Civil War, he held, “prison privatization expanded rapidly when the 13th Amendment, which outlawed slavery but continued to permit unpaid penal labor, was ratified… Due to an extreme shortage of labor caused by the emancipation of slaves, former Confederate states exploited the legalization of penal labor by incarcerating newly freed black people.” To his credit, Joe Biden, despite a record of stoking the growth of mass incarceration while in Congress, did not pander or traffic in this nonsense. As he sought to reverse his record and come out for the reduction of incarceration rates, he did not invoke the Thirteenth Amendment in his policy statements.

    At the state level, however, the situation has been different, and in the long run might bear fruit nationally. Given how deeply rooted the link between the Thirteenth Amendment and convict slavery has become in African American social thought, state-level politicians are responding to activists’ calls to end the loophole. In various states, efforts are being made to remove the language. In Colorado, activists laboring under that assumption pressed for constitutional change and achieved the removal of the exception clause. In 2016, they succeeded in placing on the ballot “Amendment T,” which, they believed, would have prohibited the state from using prisoners as laborers without their consent. Despite a lack of opposition, the amendment failed by two percentage points because of its confusing language. Two years later a similar amendment passed, but a strange thing happened along the way — no one, not even its advocates, believed that the new amendment, despite its removal of the exception clause, would prohibit prisoners from being forced to work. By the time the bill was put before the people of Colorado, it became clear, as Vox reported, that the removal of the clause would not end virtually uncompensated labor. This was not a reform, but a gesture; and too often reformist energy is squandered on gestural politics.

    Even in the wake of Colorado’s cosmetic change to its social contract, the movement to purify all state constitutions has not declined but rather increased. Policymakers and activists from a number of states (Utah, Colorado, Nebraska, South Carolina, New Jersey) have banded together recently to form “a national coalition fighting to abolish constitutional slavery and involuntary servitude in all forms.” During the 2020 election, the red states of Utah and Nebraska revised their constitutions to eliminate the exception clauses with complete bipartisan support. These victories are largely symbolic because the revisions seemed to interpret slavery as chattel slavery or as involuntary labor performed for private enterprises, not the state. Utah’s Department of Corrections will continue to require prisoners to perform work within the prison and to volunteer for other prison-labor opportunities, including with private industries. In Nebraska, State Senator Justin Wayne introduced a constitutional amendment to remove the exception clause in the state constitution. He assured voters that prisoners were paid a nominal amount for their labor. For many prison reform advocates, that nominal amount represented nothing less than convict slavery.

    The only state initiative thus far that has had the potential to end convict slavery or any form of involuntary servitude is the one recommended by policymakers in New Jersey. (This initiative did not get on the ballot in the last election.) Because New Jersey was one of the original thirteen colonies, its constitution never carried Jefferson’s proviso, which was imposed on territories brought into the union as anti-chattel slavery states. With convict slavery stretching back to its early colonial history, New Jersey would be breaking not with the Thirteenth Amendment but with its most deeply ingrained tradition.

    Tying the abolition of convict slavery to the Thirteenth Amendment implies that the institution has shallow roots. Moved by the myth of Thirteenthism, however, lawmakers are adding language to the constitution rather than subtracting it in order to uproot an ancient practice: “No person shall be held in slavery or involuntary servitude in this State, including as a penalty or a punishment for a crime.” As the language on the ballot that will be presented to the voters of New Jersey explains, “This amendment would prohibit forcing an inmate to work as a penalty for a crime, even if they are paid. This amendment would not prohibit inmates from working voluntarily.” And as Democrat Ronald Rice, one of the amendment’s sponsors, put it, “We must set right the treatment of prisoners in our prison system and guarantee that no one is unwillingly forced to perform work, whether they are being compensated two dollars or not. Our justice system continues to tarnish our nations [sic] principles but this amendment would set New Jersey on the right path to finally ending indentured servitude in our state once and for all.”

    Only a new amendment to the Constitution of the United States could end convict slavery everywhere, and now, thanks to the bad history of the Thirteenther movement, such a legislative effort exists. With the support of the Congressional Black Caucus, the constitutional amendment introduced by then-Representative Cedric Richmond (who now works in the Biden White House) reads: “Neither slavery nor involuntary servitude may be imposed as a punishment for a crime.” Here is the language that has been calling out to those opposed to convict slavery from the time of Jefferson’s proviso. If it makes it out of the House, it will certainly die in the Republican Senate, though Senator Jeff Merkley of Oregon has expressed agreement with the Thirteenther argument. More than likely, the Thirteenther amendment will become a perennial legislative offering, like the late Congressman John Conyers’ reparations bill. 

    Born of the use of the Thirteenth as propaganda, the proposed Twenty-Eighth Amendment will ultimately rise or fall on its proponents’ ability to win on the merits. Rather than trying to persuade Americans that mass incarceration is an inherent and expensive evil, which is an indisputable proposition, Thirteenthers have sought to trade on America’s moral distaste for chattel slavery, pretending that convict slavery was its offspring. When the false association is stripped away, the proposed amendment will call for Congress and then three-fourths of the states to vote for millions of prisoners to shift from being mostly to completely idle with taxpayers footing the cost. It would not have won in 1865 and it is unlikely to do so now.

    And there is a larger issue, a different integrity, at stake here. The Thirteenther use of history as propaganda to achieve a political end marks a break with the tradition of black history. From the antebellum period forward, black historians, professional and amateur, have believed that historical falsehoods justified black oppression and that the truth would therefore be an ally in the movement for racial justice and equality. By distorting the history of the Thirteenth Amendment and by denying one of black people’s greatest triumphs in American history — the destruction of chattel slavery — this generation has sought to emancipate itself by diminishing its ancestors’ prized accomplishment. It has also sought to free itself from culpability for a system that all Americans, including blacks, had a part in making. The legion of black intellectuals who have conflated convict labor and chattel slavery have reached the limits of false persuasion. History as propaganda works better to rationalize the status quo than to usher in change. Rejecting the historical meaning of the Thirteenth Amendment is not an avenue to progress.

    Theseus

    A young king, swashbuckling, expensively schooled
    in rhetoric and swordplay, with your gold-threaded tunic and plumed
    helmet fitted over your patrician nose:
    so you tossed bandits off cliffs and captured a bull—what do you know
    about war? Labor is for peasants, labor pains
    for women. But you waded among the suppurating dead
    on the fields of Thebes and broke the pollution law
    by washing corpses with your own royal hands.
    “Which bodies are mine?” I thought, as Bill
    Arrowsmith paced back and forth holding out his hands—
    “With his own hands!” he kept saying. “Defiled!”
    With his own hands he offered us glasses of dark red wine.
    We perched along his couch, on his armchairs,
    taking notes. We had not yet touched our dead.
    Our labors were just beginning, mainly
    in library stacks and the pages of dictionaries.
    “Sophrosyne,” Bill barked, “The virtue of moderation,”
    with his round, sun-browned, wrinkled satyr’s face
    and black eyes flashing immoderately
    just a few years before he toppled, alone
    in his kitchen, his heart ceasing its labors and his corpse
    becoming the labor of someone else’s hands.

    The Flood

    —when angels fell out of the bookcase along with old
    newspapers, torn road maps from decades past, and a prize edition
    of the Très Riches Heures du Duc de Berry: suddenly

    the catalogue tumbled. The painting, the show, Peter Blume’s
    Recollection of the Flood, the studio where I slept
    as a child those nights when moonlight fingered

    the looming canvases, the forest of easels, the jug of brushes like a spray
    of pussy willow boughs—all surged. In Peter’s dream
    the restorers stand on scaffolding to paint

    the frescoed shapes between lines the flood has spared:
    and won’t some massive wave of oil
    and shit always storm a city’s heart? Restore, restore—

    there on the ghostly grid the angels dance
    holding hands in a two-dimensional ballet
    of bliss, taking on substance with each cautious dab

    to whirl with wings spread over the very rich hours
    of what we’ve lost. For they are sleeping
    on the bench at the foot of the scaffold, the refugees—

    the exhausted woman clutching her purse, a scrawny girl
    collapsed in her lap, the huddled, bony old man,
    bald head in his hand. And everything they’ve saved

    lies at their feet in a quilt bundle, or stuffed in a box
    tied with twine, or in that suitcase, desperately genteel.
    Only the boy is awake. The artist stands

    apart. Holds in his hands a sketch we cannot see.
    Blonde curls, like Peter’s. Remembering, perhaps,
    Cossacks, the flight from Russia, the ship, the Brooklyn

    tenement where he learned to draw.
    A jug of brushes stands on the windowsill.
    The angels keep twirling. I hear, beyond the door,

    the growl of mountain streams all dragoning down.

    “Dead Flowers”

    If you hurt yourself before
    someone else hurts you, is that
    homeopathic? Watch me prick

    poison into my skin, sign
    my name in pain. Watch me miss
    the appointment, cancel the call. Watch me

    gulp smoke and receive a certificate
    of enlightenment between
    the smeared egg-yolk horizon to the west

    and the bone-white eastern sky:
    the emperor appoints
    me to the Poetry Bureau and I

    declare myself Queen of the Underground.
    On the back road, the turkey vulture
    plucked the guts from the squashed squirrel,

    then flapped up to the dead
    branch of the shagbark hickory
    to examine us examining

    the carcass. O sacerdotal bird
    with your crimson scalp and glossy vestments, teach
    us to translate the spasm, the cry, the dis-

    integrating flesh, the regret.
    What can be made of all this
    grief. Over the butter-

    yellow, humming, feather-grassed midday meadow
    skim the shadows of vultures: ghostly, six-foot
    wingspan, V, swiftest signature, turning death into speed.

    Burning the Bed

    Carefully you balanced the old mattress
    against the box spring to create a teepee on that frozen December patch
    behind the house, carefully

    you stacked cardboard in the hollow and touched the match
    to corners till flame crawled along the edges
    in a rosy smudge before shooting

    twenty-five feet into darkening air. Fire gilded each
    looming, shadowed tree, gilded our faces as we stood with shovel and broom
    to smack down sparks.  So much

    love going up in smoke. It stung
    our eyes, our lungs. Pagodas, terraces, domes, boudoirs
    flared, shivered, and crumpled

    as the light caved in, privacies curled to ash-wisp, towers
    toppled, where once we’d warmed each limb,
    fired each nerve, ignited

    each surprise. And now at dusk, our faces reddened in heat
    so artfully lit, we needed all that past, I thought,
    to face the night. 

    Balanchine’s Plot

    The great choreographers have all been more than dancemakers, none more so than George Balanchine. He was in truth one of the supreme dramatists of the theater, but he specialized in plotless ballets with no named characters or written scenarios, and so this aspect of his genius has gone largely unexamined. Instead, everyone accepts the notion — it has become the greatest platitude about him — that he was the most musical of choreographers, a claim that, for all his musical virtues, should be qualified in several respects. Even at this late date, there is much about Balanchine that we still need to understand. He belongs in the small august company of modern artists who shattered the distinction between abstraction and representation. His work renders such categories useless.

    Balanchine’s dance creations often eliminate ingredients that others regarded as the quintessence of theater. The performers of his works are verbally and vocally silent. Facial expressions and other surface aspects of acting are played down. In many of his works, costumes are reduced to an elegant minimum: leotards and tights, “practice clothes,” often only in black and white; or simple monochrome dresses or skirts. In particular, he pared away layers of the social persona of his dancers, so that on his stage they become corporeal emblems of spirit. Liebeslieder Walzer, for example, his ballet from 1960, has two parts. In the first part, the four women wear ballgowns and heeled shoes; in the second, they dance on point and in Romantic tutus. “In the first act, it’s the real people that are dancing,” Balanchine told Bernard Taper. “In the second act, it’s their souls.”

    Serenade, one of his supreme creations, made in 1934 to Tchaikovsky’s Serenade for Strings, is a masterpiece for many reasons. No ballet is more rewatchable. (If you don’t know it, there are at least two complete versions on YouTube.) Several of its configurations and sequences are among the most brilliantly constructed in all choreography. Pure dance is shot through with threads of narrative, suggesting fate and chance, love and loss, death and transcendence. It consists almost as much of rapturous running as it does of formal ballet steps. Classicism meets romanticism meets modernism: it is all here. The opening image is justly celebrated, a latticed tableau of seventeen women who, in unison, enact a nine-point ritual like a religious ceremony. At its start, they are extending arms as if shielding their eyes from the light; at its end, their feet, legs, torsos and arms are turned out, open to the light like flowers in full bloom. This has often been interpreted as transforming them from women into dancers. Taking Balanchine’s point about Liebeslieder, we might go further and say that the opening ritual of Serenade transforms them into souls.

    Serenade also has an important place in history as the first work that Balanchine conceived and completed after moving to the United States of America. A serial reviser of his own work, he kept adjusting it for more than forty years. Only around 1950 did it begin to settle into the form we know now, with its women in dresses ending just above the ankle. (The nineteenth-century Romantic look of those dresses is now definitively a part of Serenade: it remains a shock to see photographs and film fragments from the ballet’s first sixteen years, with the women’s attire revealing knees and even whole thighs. Still, if you see the silent film clips of performances by the Ballet Russe de Monte Carlo in 1940 and 1944, you can immediately and affectionately recognize most of their material as Serenade.) For more than fifty years, Serenade has been danced by non-Balanchine companies around the world; in the last decade alone, beloved by dancers and audiences, it has been performed from Hong Kong to Seattle, from Auckland to Salt Lake City.

    Even so, for musical purists it is unsatisfactory. Tchaikovsky’s Serenade for Strings, composed in 1880, was a score in which this notoriously self-critical composer took immediate and lasting pride: he conducted it many times, not only in Russia but in many other countries too. He made it in cyclical form: his opening movement, called Piece in the Form of a Sonatina, opens and closes with powerful series of descending marcato scales, while the final movement returns to descending scales with a jaunty Russian theme. Throughout the work, the composer plays with musical effects as if he had the alchemist’s stone — taking the weight off those descending scales by changes of orchestration in the first movement; reversing them in the climbing legato scales of the third movement, the Elegy; and returning at the end of the fourth movement to the work’s opening scales, only to accelerate and show how closely they are related to the Russian theme. Tchaikovsky was deeply proud of his status as the most internationally successful Russian composer of all: by naming the final movement Tema Russo he reminds us that, if he had any extra-musical agenda in his Serenade for Strings, it was to win renown for the music of his nation.

    Yet Balanchine made Serenade only to Tchaikovsky’s first three movements. He was following the precedent of Eros, a ballet by Michel Fokine in 1916 which Balanchine had known in Russia, and which likewise omitted the final “Russian Theme.” (Although Balanchine remarked at the end of his life that he had not much liked Fokine’s ballet, he took several other ideas from it for Serenade.) A devotee of Tchaikovsky’s music, he may have omitted the concluding Russian Theme in 1934 merely because his new American students did not yet have the speed and the brilliance that in his view the Russian Theme would require; later in the decade he sketched the Russian Theme with Annabelle Lyon, one of his original 1934 group, but was unable to stage it. He added the Russian Theme in 1940, by which time the youngest of his original students, Marie-Jeanne, had acquired the virtuosity he wanted to create its leading role. But not as a finale, as in the musical score: instead Balanchine inserted it between the second and third movements, thus erasing one of Tchaikovsky’s most magical transitions, beginning the fourth movement with the same quiet high notes that ended the third.

    How curious: Tchaikovsky ended his Serenade with a high-energy and dance-friendly finale, but Balanchine preferred to close his Serenade with the Elegy, which seldom sounds like dance music. His reason, I think, was dramatic: by ending his ballet with Tchaikovsky’s elegiac penultimate movement, he found a way to conclude the work with a passage into the sublime. A number of Balanchine’s ballets end with the leading character departing for a new world. This is one of them.

    Balanchine’s Serenade is quite as marvelous a work as Tchaikovsky’s score. No, it is even more marvelous. Yet it is not a faithful rendition of Tchaikovsky’s original. Instead the choreographer gave it its own enthralling musical existence. Balanchine took the liberty of revising this score — as he did with scores by several composers, but with none so much as Tchaikovsky — because he was impelled by a dramatic vision. If, as I say, Serenade is the most rewatchable ballet ever made, it is because, from first to last, the work is an exercise in theatrical drama. Its narrative is mysterious but undeniable. The work is an abundant kaleidoscope of changing patterns, images, encounters, communities; a tapestry of stories that movingly suggest fate, love, loss, death, transcendence, and the group’s support for the individual. It is also an object-lesson in ambiguity and metamorphosis.

    When Balanchine arrived in the United States in late 1933, at the invitation of Lincoln Kirstein, he was not particularly associated with pure-dance works. In Western Europe, between 1925 and 1933, he had staged Ravel’s L’Enfant et les Sortilèges, Stravinsky’s Apollo, Prokofiev’s Prodigal Son, the Brecht-Weill Seven Deadly Sins, and other highly singular narratives. Once in New York, he abounded in ideas for new ballets, many of which Kirstein reported in his diary. The projects of which he told Kirstein — sometimes he developed them for days or months — include versions of the myths of Diana and Actaeon, Medea, and Orpheus, an idea of his own named The Kingdom Without a King, a new production of The Sleeping Beauty, Uncle Tom’s Cabin (Virgil Thomson was to compose the score), Brahms’ Variations and Fugue on a Theme by Handel, Schumann’s Andante and Variations, a ballet of waltzes starting with those of Joseph Lanner, and The Master Dancers, a Balanchine idea based upon the story of a dance competition. (Most of these ideas were never fulfilled, though some were probably inklings of dances that Balanchine choreographed much later.)

    Kirstein’s entry for May 6, 1935 gives us a vivid glimpse of the dramatically imaginative workings of Balanchine’s mind in this note about a Medea ballet that never saw the light of day:

    Bal. thought of a new ending for Medea: Her dead body is executed by the troops: told me a story or idea for another pantomime: a court-room where the condemned is faced by a three headed judge. She is two in one like AnnaAnna: As evidence, objects like Hauptmann’s ladder are brought in — The whole crime is reconstructed. She is declared guilty, though innocent… Bal said it shd be like Dostoevski. 

    Anna-Anna had been the London title of The Seven Deadly Sins, in which the dancing Anna and the singing Anna express different aspects of the same person. Balanchine never lost this flair for radically reconceiving old radical stories. In that work and others, he was addressing different layers of being, in much the same way that D.H. Lawrence, when writing The Rainbow, explained to Edward Garnett how it differed from his earlier Sons and Lovers:

    You mustn’t look in my novel for the old stable ego — of the character. There is another ego, according to whose action the individual is unrecognizable, and passes through, as it were, allotropic states which it needs a deeper sense than any we’ve been used to exercise, to discover are states of the same single radically unchanged element. (Like as diamond and coal are the same pure single element of carbon. The ordinary novel would trace the history of the diamond — but I say ‘Diamond, what! This is carbon.’ And my diamond might be coal or soot, and my theme is carbon.) 

    The Balanchine ballets that seem to be reflections solely of their music — the specialty of the long American phase of his career, especially from 1940 onward — do not dispense with narrative. Not at all. They supply multiple narratives or fragmented versions of a single narrative. In one of his last masterpieces, Robert Schumann’s “Davidsbündlertänze,” in 1980, four male-female couples express diverse aspects of Schumann and his relationship with Clara, his wife and muse. As in Liebeslieder Walzer, Balanchine introduced the women in heeled shoes but then brought them back onstage on point, as if setting their spirits free. The complication of having four Roberts and four Claras suggests the tragic splintering of the composer’s tormented and echoing mind: not Anna-Anna but Robert-Robert-Robert-Robert alone with Clara-Clara-Clara-Clara. This is multiple personality syndrome at its most poetic.  

    There are ballets in which Balanchine moves from showing his dancers’ bodies to showing their souls without employing any change of costume or footwear. The outer movements of Stravinsky Violin Concerto, from 1972, the Toccata and the Capriccio, are festive, with four leading dancers (two women, two men) each joined by a team of four supporting dancers. The mood is largely ebullient. But then Balanchine brings the concerto’s two-part centerpiece, Aria I and Aria II, indoors, as it were, as if he were taking us into a marital bedroom for scenes of painfully raw, almost Strindbergian, intimacy. Different male-female couples dance each Aria, though Balanchine may have seen them as different facets of the same marriage.

    The woman of Aria I is amazingly and assertively unorthodox, constantly changing shape, using the man’s support to be even less conventional. In the most memorable image, she does bizarre acrobatics, bending back to place her hands on the floor and then turning herself inside-out and outside-in, fluently flipping through convex/concave/convex shapes in alternation. The duet is an unresolved struggle, not so far from the marital strife of Who’s Afraid of Virginia Woolf? The woman of Aria II, much needier, is more subtly demanding. Stravinsky’s music has a repeated chord that sounds like a sudden shriek. Here the woman strikes an X pose, balanced precariously on the points of both feet with legs and arms outstretched. She is both confrontational and insecure: it may be the most passive-aggressive moment in all Balanchine, as if the wife is demanding his support. As he goes to her, her knees buckle inwards; when he catches them before she crumples further, it seems as if she has mastered the way to pull him back to her assistance. It works. He plays the protective husband that she needs him to be; she is the grateful wife. There are moments of touching harmony between them — one when he shows her a panoramic view with his arm over her shoulder, another when he covers her eyes and gently pulls her head back. The tension between his control and her passivity is part of the scene’s poignancy.

    Serenade, too, tells multiple stories, or gives us dramatic situations that we are free to interpret in many ways. Balanchine’s narrative skill is such that few observers follow this ballet without tracing some element of plot in it somewhere. This is a ballet about the many and the one: about how a series of individual women emerge from the larger ensemble, sometimes in smaller groups and occasionally with men, but recurrently supported by the corps. Over the years — as with no other ballet — Balanchine amused himself with the redistribution of roles: there may have been as many as nine soloists in the 1930s performances, but in the 1940s he gave most of the largest sequences to a single ballerina. (Perhaps he privately thought of it as one woman; in his late years he told Karin von Aroldingen that the work could be called Ballerina.) Yet there are moments when we see more women than one: there are tiny solo roles of great brevity, and in most productions all the women have been dressed identically. Again and again Balanchine makes us ask, Who is this? What is happening to her? At times the answer scarcely matters; at others it matters greatly.

    After a string of quasi-narrative situations, the final Elegy has always seemed the most suggestive of plot. It is profoundly moving because of the story it seems to tell. At its start, one woman is lying on the floor as if abandoned, bereft, or even dead. A man is led to her by another, fate-like, woman, who keeps his eyes and chest covered with her hands until he reaches his destination. The woman on the floor is the first person he sees; he is the first person she sees. Balanchine presented the charged moment of their eyes meeting, with the man and woman framing each other’s faces with their arms, as a quotation from Canova’s extraordinary sculpture Psyche Awakened by Cupid’s Kiss, to which he drew the attention of some dancers. Although this Canova quotation was itself derived from Fokine’s Eros, it must have gratified Balanchine that the three chief versions of the statue are to be found in the main museums of the three chief cities of his career: St Petersburg’s Hermitage, Paris’ Louvre, New York’s Metropolitan Museum of Art.

    What follows between the figures — we might also call them the principal characters — seems like love. But just as Diana of Wales once observed that “there were three of us in this marriage,” so this love is shadowed by the constant presence of the female fate figure known in Balanchine circles as the Dark Angel. Other women pass through, one of them lingering for a while. (The dancer Colleen Neary told me that Balanchine once jokingly likened these three women to the man’s wife, his mistress, and his lover. And added, “Story of my life!”) An unhappy ending ensues. With startlingly swift force, the man lowers his “wife” to the floor. The Dark Angel stands aloof, averting her eyes from this tragic parting. She then returns as the agent of fate, beating her arms like mighty wings; once again she covers his eyes and chest with her hands; and she leads him offstage as if continuing the same diagonal paths by which they entered.

    All this is a powerful re-telling of the myth of Orpheus and Eurydice — a myth to which Balanchine returned between 1930 and 1980, using music by Gluck, Offenbach, and Stravinsky, and to which he made many autobiographical connections. In the ancient myth, Orpheus, artist and husband, loses Eurydice when she is bitten by a snake. He is permitted to enter the realm of the dead — the Elysian Fields, the realm of the blessed — and to lead her back to life on condition he does not look at her until they both have reached the ground above. At the last moment, however, he looks back, and loses her forever. The Elegy in Serenade prolongs and suspends the bittersweet moment of their eyes’ climactic meeting.

    Unlike Balanchine’s other treatments of the Orpheus story, this one leaves us with Eurydice, the dead Eurydice whom Orpheus has lost a second time. She is left by him on the floor exactly where she had been when he found her. When she rises, she parts her hands before her eyes, as if to ask if it were all a dream. Just at this point, in the confusion of her awakening, she is joined by a sisterhood: a small cortège of the women who have characterized the whole ballet. In grief, she embraces one of them — known as “the Mother” — before kneeling and opening her arms and head to the sky, in a gesture of utmost resignation and acceptance. As the ballet ends, she is carried off like a human icon, by three men, while her sisters and her “mother” flank her. She opens her arms and face to the skies in a backbend as the curtain falls, entering a new plane of existence. It takes several viewings before you realize that when she opens her arms and her head this way to the heavens, she is repeating what all seventeen women did in the ballet’s opening sequence. One reading of Serenade, therefore, is that all of its dramatic narrative is set in the Elysian Fields. Those dancers we see at the beginning are ghosts consecrating themselves, as if saying their vows.

    It is revealing that the eyewitness accounts of the first day’s rehearsal of Serenade differ: not contradicting one another, but concentrating on different facets. Kirstein wrote in his diary:

    Work started on our first ballet at an evening ‘rehearsal class.’ Balanchine said his brain was blank and bid me pray for him. He lined up all the girls and slowly commenced to compose, as he said — ‘a hymn to ward off the sun.’ He tried two dancers, first in bare feet, then in toe shoes. Gestures of arms and hands already seemed to indicate his special quality.

    For Balanchine, looking back in the 1950s and 1960s, the compositional issue had been the fortuitous presence of seventeen women. Probably he knew anyway from the music that he wanted them to start with a slow arm ritual — but how do you take this unwieldy prime number, seventeen, and arrange it in space? His brilliantly geometric solution to this arithmetical problem was the diagonally latticed formation, two diamond shapes conjoined. These obviate the usual vertical lines of ballet corps patterns. Each woman commands space like a soloist, with genuine parity. Never mind the Elysian Fields of the dead: this pattern has often seemed like an image of American democracy (which may have seemed Elysian to Balanchine after his experience in Russia and Europe between 1918 and 1933).

    And we have a third source for that rehearsal. Ruthanna Boris — one of those seventeen young women, who stayed in Balanchine’s orbit for many years, dancing the foremost roles in Serenade for the Ballet Russe de Monte Carlo in 1944, and choreographing for New York City Ballet in 1951 — wrote an undated memoir in which she recalled that Balanchine — announcing that “we will make some steps!” — then spoke to the seventeen young women about his life in Russia (“it was revolution, bullets in street”) and his move to Europe.

    Little by little his talking became more and more like a report — less conversational, more charged with feelings of anger and distress: “In Germany there is an awful man – terrible, awful man! He looks like me only he has moustache – he is very bad man— he has moustache — I do not have moustache — I am not bad man — I am not awful man!”… It seemed to me he was tasting his words and trying to get past them. To the best of my memory no one knew what he was talking about. We were adolescent and young ballet dancers, mostly American, mostly aware of the dance world, unaware of governmental affairs in the world beyond it….

    Look again at that opening tableau: Balanchine choreographed here as if he too had the alchemist’s stone, transmogrifying the Nazi salute in space until it became a quasi-religious vow.  

    The ritual that follows is similarly an exercise in metamorphosis, every staccato pause on the way taking the dancers further away from politics and danger toward a great openness to experience. In 1927, Paul Valéry had written, in The Soul and the Dance, that dance was “the pure act of metamorphosis,” and no ballet by Balanchine better illustrates the idea than Serenade. The opening upper-body ritual has no logic in terms of moment-by-moment meaning, but it shows us change in action (and then leads to ballet’s logic of turning out the limbs and torso from the body’s center). Balanchine was a practicing Christian, and I like to think the start of Serenade comes close to Paul’s famous words in the first letter to the Corinthians:

    Behold, I shew you a mystery. We shall not all sleep, but we shall all be changed. In a moment, in the twinkling of an eye, at the last trump: for the trumpet shall sound, and the dead shall be raised incorruptible, and we shall be changed. For this corruptible must put on incorruption, and this mortal must put on immortality, then shall be brought to pass the saying that is written, Death is swallowed up in victory.

    Balanchine has started his ballet with what could easily be an ending. But this dance prolegomenon abounds in thematic material. Even after hundreds of viewings, we keep noticing how myriad details of what follows — the bringing of a wrist towards the forehead, the sideways pointing of foot and leg, the arching back of the neck — were all introduced here, in the beginning, as a prophecy of the ending.

    He took pride in relating how he incorporated rehearsal accidents into this ballet. One day, a girl fell over; he put that into the ballet. Another day, another girl arrived late; he put that in, too. The incidents began to look like a story. Balanchine never worked quite that way again. What was it about Serenade that made him so receptive to chance moments of non-dance? Perhaps he could do so because he could see how those two girls were images of Eurydice. He adjusted the “girl who falls” so that she spins on the spot as if losing control before collapsing to the floor, like Eurydice at the moment of death; and in early performances (particularly in a film of a Ballet Russe de Monte Carlo performance in 1940) he then presented her supine body as if it was a corpse in its coffin. Likewise the latecomer may simply be Eurydice taking her place among the heavenly dance choir in Elysium. Who can tell now whether these Orphic fancies are truly what Balanchine had in mind?

    Certainly Balanchine had hidden imagery that he seldom disclosed. His protégé John Clifford was surprised when Balanchine, during a fierce argument about the fit of movement to music, said that his choreography of the second movement in Symphony in C, the high-classical pure-dance work that he created in 1947 to Bizet’s score of that name, was “the dance of the moon… The grands jetés where she gets carried back and forth at one point are supposed to be the moon crossing the sky.” This was not an image that Balanchine had ever given his dancers; but many readers of I Remember Balanchine, which contains the interview in which Clifford recounts this anecdote, have dutifully written of the moon crossing the sky in that sequence. (I still don’t see a moon in those lifts, though I enjoy watching both the moon and Symphony in C.) 

    Similarly, in 1979, Balanchine coached the dancer Jean-Pierre Frohlich in the first pas de trois of Agon, his masterpiece of 1957. Performed in black and white leotards, tights, and T-shirts to a commissioned Stravinsky score, this work has often seemed a peak of pure-dance radical invention, infusing classicism with a new high-density and “plotless” modernity that moved dance far away from drama and role-playing. Yet Frohlich has recalled that Balanchine explained his role as “the court jester.” For me, this made immediate sense: it did not change my understanding of the work as a whole, but it helped me to define one aspect of its character.

    It matters to notice just how Balanchine tells his stories. The interesting thing about the young woman who falls to the floor in Serenade is not the way she falls but the entrance of the corps. Fifteen young women march in on point in five different rows, like radii toward her, their focal point. Yet they do not rush to console or to help her. In one of the strangest images in all dance theater, they coalesce around her in the shape of a Greek theater, whereupon they simply do staccato arm exercises. Has one dancer fallen? Then the dance will continue with the corps.

    We can also interpret them as another facet of the Elysian sorority around Eurydice. Such a view, however, does not quite explain their formality and their impersonal behavior. Serenade may contain fragments of myths, but it is about a larger process than any myth: the constant subordination of the dancer to the dance. So what happens next? The fallen woman, the dead Eurydice, promptly picks herself up and dances the most difficult jumps in the ballet so far. She explodes in the air only to pounce precisely back down onto the music’s beat.

    As for the episode with the latecomer, what’s dazzling is that her colleagues have all just resumed the ballet’s opening tableau. Sixteen of them stand again just as they did in the beginning, yet they look quite different now: their statuesque immobility is in total contrast to the quietly informal way in which she, the missing seventeenth, traces her way through their ranks. (“Drama is contrast,” said Merce Cunningham.) Just as she takes her place to join them in the ballet’s opening ritual, Balanchine hurls two other masterstrokes. The other sixteen dancers softly turn into profile, beginning slowly to depart, as if leaving her to her destiny. And a man enters, walking toward her with the same inevitability with which they are walking away. Again Balanchine is the master of geometry: the man’s path is a straight diagonal, the corps’ path is a straight horizontal, but both his advent and their exit are focused on her, this innocent latecomer who sees none of them. Even if you do not imagine Orpheus coming to rescue Eurydice from the realm of the dead, you cannot miss how mysteriously fateful this strange scene is. Balanchine fits it perfectly to the final bars of the Sonatina, so that we reach the music’s end in complete suspense.

    Another of the strangest features of Serenade is that it abounds in echoes. The “mother” at the end of the Elegy enters from the same corner and along the same diagonal as another woman did in the Sonatina. Five women in the Sonatina dance in a chain that prepares us for five different women who form a chain at the start of the Russian Theme. The man who enters along the long diagonal at the end of the Sonatina prepares us for the other man who enters at the start of the Elegy (Orpheus I and II). The woman who falls in the Sonatina is echoed by one — added in 1940 — who tumbles more spectacularly at the end of the Russian Theme. The mysterious kingdom of Serenade is a land of second chances. And so too, for Balanchine, was America. A serious case of tuberculosis in 1932-1933 had rendered him unable to work for a year. Lincoln Kirstein, after inviting him to America in 1933, kept hearing from Balanchine’s ballet friends that he had a poor life expectancy. Balanchine, left with only one functioning lung, later told a friend, “You know, I am really dead man.” But he lived in his new-found-land for almost fifty years, prodigiously prolific until a few months before his death.

    Balanchine liked to envisage himself meeting his composers in the next life. When he died, I wrote an elegiac essay in which I gleefully imagined the scene with all of them waiting by the elevator door to greet him as he arrived and gushing appreciatively about the fabulous things that he had made from their scores. Yet prolonged acquaintance with his ballets now makes me imagine a different scenario. Gluck: “Okay, that’s a beautiful pas de deux he made to the Blessed Spirit music in my Orphée et Eurydice, but surely he could have seen that I meant it as the middle section of a da capo structure! It has to be A-B-A, but he cuts the return to A.” Tchaikovsky, who has quite a list of complaints, begins: “When I wrote my Third Symphony, I took a deliberate risk by giving it five movements. But he cut out the first movement in his Diamonds and made it just another four-movement symphony! Also I never wanted the Siloti edition of my second piano concerto — it tidies up all the irregularities in which I was changing concerto form! His Serenade I will forgive; it’s not my Serenade, but, yes, it is just as beautiful, I can see that now. But why re-order my Mozartiana? And why all that tinkering with my Nutcracker? A genius, yes, but an impossible one.” And so on.

    Among all the dead composers impatiently awaiting Balanchine in paradise, I long most to overhear Stravinsky. “George and I were good friends for over forty years — and yet, the very year after my death, he makes all those ballets to my concert music as if I were writing plays about men and women! He uses my music for plots! And my blood boils about what he did to the Divertimento from Le Baiser de la fée. He cuts some of it, he interpolates another bit from elsewhere in Baiser, it’s really quite fraudulent. All right, what he created was quite beautiful — and it is so amazingly dramatic — no way is this a divertimento!” The composer’s aggrieved ghost would be right. Balanchine’s “Baiser” Divertimento is a misnomer. It is too profound for that name.

    Stravinsky composed the complete ballet Le Baiser de la fée in 1928. It is his re-telling of Hans Christian Andersen’s story The Ice Maiden as if the protagonist were Tchaikovsky, whose music is employed throughout in a modernist and neo-Romantic collage. The story chillingly illustrates Graham Greene’s point that “there is a splinter of ice in the heart of a writer.” The ballet’s hero is singled out in infancy by the Fairy, who distinguishes him from other mortals by planting a kiss on his brow: a vision of the muse at her most heartless. He becomes engaged to a girl, but the Fairy, often disguised but sometimes revealing herself with terrifying clarity, keeps parting them. The ballet ends with him helplessly following the Fairy into her icy realm while his fiancée is left alone in desolation. 

    Balanchine first staged this complete Baiser in New York in 1937, at the Metropolitan Opera. For some fifteen years, he kept it in the repertory of the successive companies to which he was attached (the American Ballet, Ballets Russes de Monte Carlo, New York City Ballet) until, in the early 1950s, he finally dropped it. But Stravinsky had arranged a concert suite of the ballet’s music, Divertimento from Le Baiser de la fée, in 1934, and Balanchine turned to it in 1972, as he created a flood of new ballets in celebration of Stravinsky (who had died the year before). Oddly for a Stravinsky Festival, Balanchine made major structural changes to this score. (Just to call it Suite from “Le Baiser de la fée” would have been more accurate.)

    In particular, as the dance scholar Stephanie Jordan first noted in 2003, he introduced, from elsewhere in the complete ballet, a dance for which he created the most poetically dramatic male solo of his career. The music depicts how the Fairy’s irresistible spell begins to infect the hero. The 1972 solo, beginning with great elegance and formal charm, is an accumulating soliloquy, in which the hero’s conflicting energies and self-contradictory aspirations pour forth with uncanny seamlessness. With no histrionics, he seems both inspired and tormented, changing speed and direction in one dance paradox after another. He pivots on his own axis as if keeling over; he jumps forward while arching back; he punctuates a briskly advancing diagonal with sudden slow turns that gesture upwards and away; he softly circuits the stage with jumps that arrive in slowly searching gestures. It is a completely classical statement within a classical pas de deux, and yet it turns the drama around: it tells us that this hero is no longer the fiancé he was.

    In 1974, Balanchine tinkered some more with this already remarkable non-divertimento Divertimento. He now added music from the ballet’s finale, in which Stravinsky makes a heartbreaking arrangement of Tchaikovsky’s famous song “None But the Lonely Heart.” Now, however, Balanchine omitted the Fairy that Stravinsky had signified in this music. The only two leading characters in his drama are the hero and his fiancée, who, though trying to embrace, are repeatedly interrupted by an impersonal line of women corps dancers. Yet this cruel interruption makes less impact than the way the man and the woman now part, evidently forever, as if accepting separation as destiny. They retreat on separate paths that depict them both as figures of tragic isolation. Both walk with their torsos backward, unable to see where they are going. Slowly they zigzag their ways into ever greater distance from each other, without resistance. Man and woman are sundered, the ballet suggests, not by an external figure of fate but by their own internal impulses, which are just as inexorable. It’s as if, in A Doll’s House, Nora and Helmer had agreed to end their marriage without anyone slamming the door at the end. This ballet begins as a divertimento but ends as a tragic psychodrama; and the progression from plotlessness to plot, from the delight of form to the heartbreak of alienation, proceeds in an unbroken sequence.

    To praise Balanchine as the most musical of dancemakers is to persist in a cliché that misunderstands the full magnitude of his achievement. There are technical aspects of music — melody and harmony, in particular — of which his contemporary Frederick Ashton sometimes found more in the same scores than Balanchine did. Yet this does not make Ashton, an artist dear to me, the greater artist. It is better to see Balanchine as an incomparable exponent of Director’s Theater. His musicality was of a far more interventionist kind than has generally been admitted. He was not just the grateful servant of his scores; he imposed his own vision on his music, which was often an intensely dramatic vision, a vision of humans in the fullness of their relations, and where necessary he tweaked his scores to fulfill it. In the vast majority of his ballets, music and dance work in brilliant counterpoint, different voices that combine to dig deep into our imaginations and our nervous systems. Ear and eye collaborate closely and uncannily in a genre of dance theater that, even now, takes us where we had not been before.

    Naming Names

    Fiorello La Guardia was a great mayor of New York — he even has an airport named after him — but he made some boneheaded errors. Some years after the Sixth Avenue El in Manhattan was razed, La Guardia and the city council decided to rehabilitate the neighborhoods around the thoroughfare, which had become run down from hosting the elevated train. And so, in October 1945, they officially rebranded Sixth Avenue as Avenue of the Americas.

    City planners must have found the cosmopolitan-sounding name exciting. New York City was emerging as the global capital, on the cusp of the American Century: home to the new United Nations and soaring International Style skyscrapers, a hub of commerce, a dynamo of artistic creativity. But this act of renaming by fiat, against the grain of public opinion, failed spectacularly. A survey ten years later found that, by a margin of 8 to 1, New Yorkers still called the street Sixth Avenue. “You tell someone anything but ‘Sixth Avenue,’” a salesman explained to the New York Times, “and he’ll get lost.” Generations of visitors have noticed signs that still say “Avenue of the Americas” and wondered fleetingly about its genesis and meaning, but for anyone to say it out loud today would clearly mark him as a rube.

    Names change for many reasons. While designing Washington, DC in the late eighteenth century, Pierre L’Enfant renamed the local Goose Creek after Rome’s Tiber River. It was a bid for grandeur that earned him mainly ridicule. After Franklin Roosevelt was elected president, Interior Secretary Harold Ickes saw fit to cleanse federal public works of association with the most unpopular man in America, making the Hoover Dam into the Boulder Dam. With independence in 1980, Rhodesia ditched its hated eponym to become Zimbabwe, and its capital, Salisbury, became Harare. When it fell to North Vietnamese forces in 1975, Saigon was reintroduced as Ho Chi Minh City, however propagandistic the appellation still sounds. On Christmas Eve, 1963, Idlewild Airport became JFK. In 2000, Beaver College, tired of the jokes, chose to call itself Arcadia. (Et in Beaver ego.) Even old New York was once New Amsterdam.

    Like the misbegotten Avenue of the Americas moniker, though, new names do not always stick. Who but a travel agent calls National Airport “Reagan”? Where besides its website is the New York Public Library known as “the Schwarzman Building”? In 2017, the Tappan Zee Bridge formally became the Mario M. Cuomo Bridge, thanks to its namesake’s son, but everyone still calls it the Tappan Zee. (Few knew that for the thirteen years prior it had been named for former New York governor Malcolm Wilson; in fact, few knew that someone called Malcolm Wilson had been governor.) Everyone also still calls the Robert F. Kennedy Bridge the Triborough and the Ed Koch Bridge the Queensboro.

    Political events prompt changes, too. When in 1917 German aggression forced the United States into World War I, atlases were summarily revised. Potsdam, Missouri became Pershing. Brandenburg, Texas, became Old Glory. Berlin, Georgia became Lens — but after the war, with the rush to rehabilitate Germany, it reverted to Berlin. (During the next world war this Berlin declined to change its name again, though 250 miles to the northwest Berlin, Alabama rechristened itself Sardis.) In 1924, the Bolsheviks saddled splendid St. Petersburg with the chilling sobriquet Leningrad — “after the man who brought us seventy years of misery,” as tour-bus guides tell their passengers. Only with Communism’s demise could city residents reclaim their old appellation.

    The revision — and re-revision — of place names is thus a common enterprise. But how and why those in control choose to re-label streets, cities, schools, parks, bridges, airports, dams, and other institutions has always been a strange, unsystematic process — subject to changing social norms, political fashions, historical revisionism, interest-group pressure, the prerogatives of power, consistent inconsistency, and human folly. The current craze for a new public nomenclature, in other words, is far from the straightforward morality play it is often made out to be. How we think about it and how we go about it deserve more deliberation than those questions have received.

    Today’s nomenclature battles mostly turn on a specific set of questions: about race and the historical treatment of non-white peoples. Every day, in the United States and abroad, new demands arise to scrub places, institutions, and events of the designations of men and women who were once considered heroes but whose complicity (real or alleged) in racist thoughts or deeds is now said to make them unworthy of civic recognition. Not only Confederate generals, upholders of slavery, and European imperialists are having their time in the barrel. So too are figures with complex and even admirable legacies, as diverse as Christopher Columbus and George Washington, Andrew Jackson and Woodrow Wilson, Junipero Serra and Charles Darwin, David Hume and Margaret Sanger — even, although it sounds like parody, Mohandas K. Gandhi.

    What has led us to set so many august and estimable figures, along with the more flagrantly reprehensible ones, on the chopping block? It helps to look at the criteria being invoked for effacement. To be sure, advocates of renaming seldom set forth any clear, careful, and consistent set of principles. Typically, the arguments are ad hoc, each one anchored in some statement, belief, political stance, or action of the indicted individual, the wrongness of which is presumed to be self-evident. But occasionally over the years, governmental committees, university panels, or other bodies have gamely tried to articulate some criteria. Their language is telling.

    One body that recently made plain its standards for naming was a Washington, D.C. mayoral “working group” with the ungainly label “DCFACES.” (An ungainly name is an inauspicious quality in a body seeking to retitle streets and buildings.) That acronym stands for the equally ungainly “District of Columbia Facilities and Commemorative Expressions.” In the summer of 2020, DCFACES released a report declaring that any historical figure would be “disqualified” from adorning a public building or space in Washington, DC if he or she had participated in “slavery, systemic racism, mistreatment of, or actions that suppressed equality for, persons of color, women and LGBTQ communities.” These rules resulted, among other absurdities, in a call to re-label Washington’s Franklin School (which now serves as a museum) because Benjamin Franklin, though a magnificent patriot, politician, democrat, diplomat, writer, thinker, inventor, publisher, and abolitionist, also owned two slaves, whom he eventually freed.

    Here is how the report’s executive summary presents the rules:

    IMPERATIVES

    Commemoration on a District of Columbia asset is a high honor reserved for esteemed persons with a legacy that merits recognition. The DCFACES Working Group assessed the legacy of District namesakes, with consideration to the following factors: 

    1. Participation in slavery — did research and evidence find a history of enslaving other humans or otherwise supporting the institution of slavery.

    2. Involvement in systemic racism — did research and evidence find the namesake serving as an author of policy, legislation or actions that suppressed persons of color and women.

    3. Support for oppression — did research and evidence find the namesake endorsed and participated in the oppression of persons of color and/or women.

    4. Involvement in supremacist agenda — did research and evidence suggest that the namesake was a member of any supremacist organization. 

    5. Violation of District human rights laws — did research and evidence find the namesake committed a violation of the DC Human Rights Act, in whole or part, including discrimination against protected traits such as age, religion, sexual orientation, gender identity, and national origin.

    Several difficulties with this formulation are immediately apparent. For starters, the list is at once too broad and too narrow. It is too broad because phrases such as “support for oppression” are so vague and subjective that they could implicate any number of actions that might be defensible or explicable. It is also too broad because it implies that a single violation is altogether disqualifying, so that someone like Hugo Black or Robert Byrd (both of whom joined the Ku Klux Klan as young men, only to repudiate their actions and go on to distinguished careers) can never be honored.

    At the same time, the lens is also too narrow. Its single-minded focus on sins relating to race and sex (and, in one instance, other “protected traits”) in no way begins to capture the rich assortment of human depravity. A robber baron who was untainted by racist bias but subjected his workers to harsh labor would seem to pass muster in the capital. So would a Supreme Court justice with a clean record on race who curtailed freedom of speech and due process. Dishonesty, duplicity, and cowardice are nowhere mentioned as disqualifying. Neither are lawlessness, corruption, cruelty, greed, contempt for democracy, any of the seven deadly sins, or, indeed, scores of other disreputable traits any of us might easily list.

    The Washington mayoral working group was not the first body to set down naming rules focused on racism and other forms of identity-based discrimination. In fact, committees have propounded such frameworks for a long time. In 2016, the University of Oregon, in considering the fate of two buildings, adopted seven criteria that largely dealt with offenses “against an individual or group based on race, gender, religion, immigration status, sexual identity, or political affiliation.” (The Oregon list, to its drafters’ credit, also contained some nuance, adding the phrase “taking into consideration the mores of the era in which he or she lived” and making room for “redemptive action” that the individual might have engaged in.) In 1997, the New Orleans school board proscribed naming schools after “former slave owners or others who did not respect equal opportunity for all.” Few objected when this policy was invoked to exchange the name of P.G.T. Beauregard on a junior high school for that of Thurgood Marshall. More controversial, though, was the elimination of George Washington’s name from an elementary school, no matter how worthy his replacement appeared to be. (He was Charles Richard Drew, a black surgeon who helped end the army’s practice of segregating blood by race.) So the battles now being waged in city councils and university senates, though intensified by the recent racial ferment, long predate the latest protests or even the Black Lives Matter movement of 2014.

    Like so many skirmishes in our culture wars, these go back to the 1960s. That era’s historic campaigns for racial and sexual equality; the widespread criticisms of government policy, starting but not ending with the Vietnam War; the deepening skepticism toward political, military, and religious authority; the blurring of boundaries between public and private; the exposure of criminality in high places; the demise of artistic standards of excellence — all these elements conspired to render quaint, if not untenable, old forms of patriotism and hero worship. Debunking thrived. Not just in the counterculture, but also in the academy, there took hold what the historian Paul M. Kennedy called “anti-nationalistic” sentiment: arguments (or mere assumptions expressed via attitude and tone) that treated the nation’s past and previous generations’ values and beliefs with disapproval, disdain, or even a conviction, as Kennedy wrote, that they “should be discarded from … national life.” Growing up in the 1970s and after, Generations X, Y, and Z were never taught to passively revere the Founding Fathers or to celebrate uncritically the American experiment. On the contrary, we were steeped in dissidence, iconoclasm, suspicion, and wisecracks. At its best, this new adversarial sensibility instilled a healthy distrust of official propaganda and independence of mind. At its worst, it fostered cynicism and birthed a propaganda of its own.

    The thorniest questions of the 1960s stemmed from the challenge, thrown down by the civil rights movement, for America to live up to its rhetoric of equality. “Get in and stay in the streets of every city, every village, and hamlet of this nation,” the 23-year-old John Lewis said at the March on Washington in 1963, “until true freedom comes, until the revolution of 1776 is complete.” With uneven resolve, Americans devoted to human equality have striven to meet the challenge. And this effort has included, crucially, rethinking the past. To highlight and learn about our nation’s history of racial exclusion and discrimination is among the noblest goals we can have in our public discourse, because it is the intellectual and cultural condition of justice: we will not be able to achieve equality without understanding the deep roots of inequality in our society. 

    By the 1990s American society had become an irreversibly multicultural one. WASP values, assumptions, priorities, and interpretations of the past could no longer dominate. “We Are All Multiculturalists Now,” declared the title of a somewhat unexpected book by Nathan Glazer in 1996. But with that watershed, Glazer noted, it became necessary to pose a new set of queries (which Americans had indeed been asking for some time): “What monuments are we to raise (or raze), what holidays are we to celebrate, how are we to name our schools and our streets?”

    Probably no group of historical actors has been subject to as much contentious debate as the secessionists who founded the Confederate States of America. Yet by the third decade of the twenty-first century, there was not much of a debate left about their virtues. Arguments for their valor already seem hopelessly antiquated. Partial defenses of Robert E. Lee, of the sort that David Brooks earnestly mounted in the New York Times just five years ago, now induce cringes. (“As a family man, he was surprisingly relaxed and affectionate… He loved having his kids jump into bed with him and tickle his feet.”) Were the Times to publish a piece like Brooks’ in the current environment, the whole masthead would be frog-marched out of the building under armed guard.

    The public, or some of it, has now learned that Southerners imposed most of their Lost Cause nomenclature, iconography, and narratives not in innocent tribute to gallant soldiers, but as part of a rearguard racist project of forging and upholding Jim Crow. This new awareness — along with the political agitation of the last decade — has altered how many Americans think about a military base honoring Braxton Bragg or a park memorializing Nathan Bedford Forrest. The Lincoln scholar Harold Holzer confessed last year that statues and place names which “I long regarded as quaint were in fact installed to validate white supremacy, celebrate traitors to democracy, and remind black and brown people to stay ‘in their place.’” It became increasingly incongruous, if not bizarre, to see in a redoubt of suburban liberalism such as Arlington, Virginia, a boulevard evoking the Confederacy’s leading general.

    Still, as the protests in Charlottesville in 2017 showed, Lee retains his champions. Plying his demagoguery that August, Donald Trump — at the same press conference at which he defended the Charlottesville firebrands — warned that if Lee were to be scrubbed from public commemoration, George Washington (“a slave owner”) and Thomas Jefferson (“a major slave owner”) would be next. “You have to ask yourself, where does it stop?” To this slippery-slope argument, many have given a sensible and convincing answer: Lee, Jefferson Davis, Stonewall Jackson, and the others were traitors to their country; Washington, Jefferson, and the founders were not. Removing the former from streets and schools while retaining the latter admits no contradiction. As far back as 1988, Wilbur Zelinsky, in his fascinating history Nation into State, remarked that “as the military commander of an anti-statist cause, there is no logical place for Lee in the national pantheon alongside Washington, Franklin, and others of their ilk,” explaining that Lee entered the pantheon (or stood just outside its gates) only “as an archetypal martyr — the steadfast, chivalrous, sorrowful, compassionate leader of a losing cause.”

    Yet the distinction between traitors and patriots, while perfectly valid so far as it goes, does not answer the big questions. It does not address, for example, whether every last venue commemorating a Confederate must be taken down. Yes, let us lose the Confederate flags and Confederate statuary, and change the place names that keep alive the Lost Cause. But would it be acceptable to keep a handful, for considered reasons? Doing so would show that we know that our history includes the bad along with the good, as all human history does; and it would remind us that our predecessors at times were not able to tell the bad from the good. It would remind us that our country was once riven to the core by a struggle over evil and inculcate sympathy for the difficulty, and the cost, of the struggle. It might also deflate a presentist arrogance that tempts us to think that our current-day appraisals of the past, fired off in the heat of a fight, are unerring and for the ages.

    The distinction between traitors and patriots also fails to address the larger and more humane question of whether there is a way, notwithstanding the hateful cause for which the Confederates fought, to extend some dignity to their descendants who renounce the ideology of the Old South but wish to honor forebears who died by gun or blade. In the right context, and without minimizing those forebears’ attachment to an evil institution, this goal should, I think, be achievable. At the Gettysburg battlefield, monuments to Southern regiments stand arrayed opposite those to Northern troops, but in no way does a walk through the austere, beautiful environs suggest an exculpation or a whitewash. To erase any possible doubt, a professionally designed and intelligently curated museum nearby spells out the war’s history, including the centrality of slavery, in cold detail.

    And the distinction between traitors and loyalists is insufficient for yet another reason, too: it speaks only to the period of the Civil War. Outright traitors are a small, discrete subset of those who have come under fire in the recent controversies; the nomenclature wars span much wider terrain. Identifying secession as grounds for censure is fine, but it provides no limiting principle to help us think through, in other circumstances, whose names should and should not remain. It says nothing about Theodore Roosevelt, Winston Churchill, John Muir, Kit Carson, Louis Agassiz, Henry Kissinger, Voltaire, or anyone else.

    Most regrettably, the distinction does not persuade everyone. In addition to the Lost Cause devotees, some on the left likewise deny the distinction. We saw New Orleans retitle George Washington Elementary School back in 1997. When Trump cited Washington in his press conference in 2017, he was unknowingly describing something that had already happened. Could it be that he recalled the campaign at the University of Missouri in 2015 to defenestrate Jefferson, whom students, apparently knowing little about his quasi-marriage to Sally Hemings, excoriated as a “rapist”? Even if Trump was ignorant of these precedents, as seems probable, he must have felt some vindication when protesters in 2020 targeted Abraham Lincoln, Ulysses S. Grant, Frederick Douglass (!), and other assorted foes of slavery. Trump and these leftwing activists agree that the current renaming rage should not “stop” with traitors to the Union. They share a fanatical logic.

    Few participants in the nomenclature wars have reckoned seriously with this slippery-slope problem. The Yale University officials who renamed Calhoun College because its eponym flew the banner of race slavery were well aware that Elihu Yale earned his fortune at a powerful British trading company that trafficked in African slaves. But Yale remains Yale, for now. Similar contradictions abound. Are we to make a hierarchy of hypocrisies? If Woodrow Wilson’s name is to be stripped from Princeton University’s policy school because he advanced segregation in the federal bureaucracy, by what logic should that of Franklin Roosevelt, who presided over the wartime Japanese internment, remain on American schools? If the geneticist James Watson’s name is scratched from his research institution’s graduate program because he believed that racial IQ differences are genetic, why should that of Henry Ford — America’s most influential anti-Semite, who published the Protocols of the Elders of Zion in his Dearborn Independent — remain on the Ford Motor Company or the Ford Foundation? In what moral universe is Andrew Jackson’s name erased from the Democratic Party’s “Jefferson-Jackson” dinners, but Donald Trump’s remains on a big blue sign near the 79th Street off-ramp on the West Side Highway? How can the District of Columbia go after Benjamin Franklin and Francis Scott Key but not Ronald Reagan, whose name adorns the “international trade center” downtown? It is not a close contest as to who made life worse for the city’s black residents.

    The problem with the contemporary raft of name alterations is not that historical or commemorative judgments, once made, cannot be revised. Change happens. It may have been silly for the Obama administration to rechristen Mt. McKinley “Denali,” but it was not Stalinist. The real problem (or one problem, at any rate) is that no rhyme or reason underwrites today’s renaming program. Like the social media campaigns to punish random innocents who haphazardly stumble into an unmarked political minefield, the campaign of renaming follows no considered set of principles. It simply targets whoever wanders into its sights.

    If we wish to impose some coherence on the Great Renaming Project, a good first step would be to create a process of education and deliberation. Our debates about history generally unfold in a climate of abysmal ignorance. How much is really known about the men and women whose historical standing is now being challenged? What matters most about their legacies? Were they creatures of their age or was their error perfectly evident even in their own time? What harm is perpetuated by the presence of their name on a street sign or archway? The answers are rarely straightforward.

    In many public debates, the participants know little about what the men and women under scrutiny did. In April 2016, a Princeton undergraduate and stringer for the New York Times wrote incorrectly in the paper of record that Woodrow Wilson “admired” the Ku Klux Klan. The next day the paper ran a letter correcting the error, noting, among other facts, that in his History of the American People Wilson called the Klan “lawless,” “reckless” and “malicious”; but just two weeks later another stringer, one year out of Yale, parroted the same mistake. That even Ivy-educated youngsters got things so wrong should not be surprising. The undergraduates I teach tend to know about Andrew Jackson’s role in Indian Removal, and that he owned slaves. But most know little of his role in expanding American democracy beyond the elite circles of its early days. Millions of young people read in Howard Zinn’s A People’s History of the United States about the horrors that Columbus inflicted on the Arawaks of the Caribbean. But Zinn was rebutting the heroic narratives of historians like Samuel Eliot Morison, whose Columbus biography won a Pulitzer Prize in 1943. How many students read Morison anymore? How many have a basis for understanding why so many places in North America bear Columbus’ imprint in the first place? Were all those places consecrated to genocidal conquest? Without efforts to educate the young — and the public in general — about the full nature of these contested figures, the good and the bad, the inexorable complexities of human thought and action, these debates will devolve into a simplistic crossfire of talking points.

    On occasion, mayors, university presidents, and other officials have recognized that a process of education and deliberation is necessary before arriving at a verdict on a controversial topic. In 2015, Princeton University came under renewed pressure to address the racism of Woodrow Wilson, who was not only America’s twenty-eighth president but a Princeton graduate, professor, and, eventually, a transformational president of the college. At issue was whether to take his name off the university’s policy school, a residential dorm, and other campus institutions (professorships, scholarships, book awards, etc.). Desiring a process that was democratic and deliberative, the president of the university, Christopher Eisgruber, convened a committee. Multiracial and multigenerational in composition, it included members of the board of trustees, Wilson experts, higher education leaders, and social-justice advocates. It solicited the views of students, faculty, staff, and alumni. Historians wrote long, thoughtful, well-researched letters weighing the merits of the case. Some 635 community members submitted comments through a dedicated website (only a minority of whom favored eliminating Wilson’s name).

    The committee weighed the evidence, which included the record not just of Wilson’s deplorable racism but also of his undeniable achievements. Although many students today know little about Wilson besides the racism — which, we must be clear, went beyond private prejudice and led him to support Cabinet secretaries Albert Burleson and William McAdoo in segregating their departments — he was for a century considered one of America’s very best presidents. Wilbur Zelinsky, in his meticulous study, called Wilson “one of four presidents since Lincoln whom some would consider national heroes” (the others being the Roosevelts and John F. Kennedy). Wilson could claim in his day to have enacted more significant progressive legislation than any president before him; since then, only Franklin Roosevelt and Lyndon Johnson have surpassed him. Wilson also built upon Theodore Roosevelt’s vision of a strong presidency to turn the White House into the seat of activism, the engine of social reform, that it has been ever since. Nor was Wilson successful just domestically. He was a historic foreign-policy president, too, and a winner of the Nobel Peace Prize. After exhausting all bids for peace with Germany, he reluctantly led America into World War I, which proved decisive in defeating Teutonic militarism, and he pointed the way toward a more democratic and peaceful international order — though, crippled by a stroke and his own arrogance, he tragically failed to persuade the Senate to approve American membership in the League of Nations, leaving that body all too ineffectual in the critical decades ahead.

    The Princeton committee’s fair-minded report was adopted by the Board of Trustees in April 2016. It recommended keeping Wilson’s name on the buildings. But Eisgruber and the board of trustees simultaneously promised that campus plaques and markings would henceforth provide frank accounts of Wilson’s career and beliefs, including his racism. More important, the university would, it said, take bold steps in other aspects of campus life to address the underlying grievance: that many black Princetonians do not feel they are treated as equal members of the campus community. And there the matter rested, until 2020. Following the Memorial Day killing of George Floyd by a Minneapolis policeman, protests erupted nationwide calling for police reform and other forms of racial justice — including, once again, the reconsideration of names. This time Eisgruber launched no deliberative process, appointed no diverse committee, solicited no external input, convened no searching conversation. He simply declared that the Board of Trustees had “reconsidered” its verdict of a few years before. His high-handed decree, more than the ultimate decision, violated the principles on which a university ought to run. For Eisgruber, it also gave rise to some new headaches: in what can only be seen as an epic troll, Trump’s Department of Education opened an investigation into whether Princeton’s confession of rampant racism meant it had been lying in the past when it denied engaging in racial discrimination.

    Curiously, at the same time as Princeton banished Wilson, Yale University also performed a banishment — this one with regard to John C. Calhoun, whose name graced one of its residential colleges. But there were crucial differences between the two cases. Although Calhoun has been recognized as a statesman, grouped with Henry Clay and Daniel Webster as the “Great Triumvirate” of senators who held the nation together in the fractious antebellum years, he is a far less admirable figure than Wilson. He made his reputation as a prominent defender of slavery and a theorist of the nullification doctrine that elevated states’ rights over federal authority — a doctrine that later provided a rationale for Southern secession. But beyond the huge political differences between Wilson and Calhoun are the differences in the processes that Princeton and Yale pursued. Princeton jettisoned a deliberative decision to implement an autocratic one. Yale did something like the reverse.

    Following the Charleston massacre of 2015, the president of Yale, Peter Salovey, told his campus that Yale would grapple with its own racist past, including its posture toward Calhoun. Then, the following spring, he declared that after much reflection on his part — but no formal, community-wide decision-making process — Calhoun would remain. Salovey contended, not implausibly, that it was valuable to retain “this salient reminder of the stain of slavery and our participation in it.” To get rid of Calhoun’s name would be to take the easy way out. At the same time, Salovey also announced (in a ham-handed effort to balance the decision with one he expected students and faculty would like) that one of Yale’s two new residential colleges would be named for Pauli Murray, a brilliant, influential, underappreciated midcentury civil rights lawyer who was black and, for good measure, a lesbian.

    Students and faculty rebelled. Salovey backtracked. He now organized a committee, chaired by law and history professor John Fabian Witt, to tackle the naming question systematically. Wisely, however, Salovey charged the committee only with developing principles for renaming; the specific verdict on Calhoun would come later, decided by still another committee, after the principles were set. To some, the whole business seemed like a sham: it was unlikely that after vowing to take up a question a second time he would affirm the same result. Still, the exercise of formulating principles — in the tradition of a storied Yale committee that the great historian C. Vann Woodward led in the 1970s to inscribe principles for free speech on campus — was worthy, and Salovey populated the Witt committee with faculty experts on history, race, and commemoration. Even more than the Princeton report, the Witt Committee’s final document was judicious and well-reasoned. When, in 2017, Yale finally dropped Calhoun’s name from the residential college, no one could accuse the university of having done so rashly.

     

    Deliberation by committee, with democratic input, may be necessary to ensure an informed outcome on a controversial subject, but as the example of DCFACES shows, it is not always sufficient. Setting forth good principles is also essential. One mistake that the Washington group made was in asking whom to disqualify from recognition, rather than who might qualify. Historians know that the categories of heroism and villainy are of limited value. Everyone is “problematic.” And as Bryan Stevenson likes to say, each of us is more than the worst thing we have ever done.

    Thus if we begin with the premise that certain views or deeds are simply disqualifying, we have trouble grasping the foolishness of targeting Gandhi (for his anti-black racism), Albert Schweitzer (for his racist and colonialist views), or Martin Luther King, Jr. (for his philandering and plagiarism). In any case, how can we insist that racism automatically denies a historical actor a place in the pantheon when the new reigning assumption — the new gospel — is that everyone is (at least) a little bit racist? We all have prejudices and blind spots; we all succumb to stereotyping and “implicit bias.” By this logic, we are all disqualified, and there is no one left to bestow a name on the local library.

    A more fruitful approach is the one the Witt Committee at Yale chose: to ask about the “principal legacies” of the person under consideration, the “lasting effects that cause a namesake to be remembered.” We honor Wilson for his presidential leadership and vision of international peace. He is recognized not for his racism but in spite of it. We honor Margaret Sanger as an advocate of reproductive and sexual freedom, not for her support of eugenics but in spite of it. Churchill was above all a defender of freedom against fascism, and the context in which he earned his renown matters. Of the recent efforts to blackball him, one Twitter wag remarked, “If you think Churchill was a racist, wait until you hear about the other guy.” Not everything a person does or says is of equal significance, and people with ugly opinions can do great things, not least because they may also hold noble opinions.

    Principal legacies can evolve. They undergo revision as people or groups who once had little say in forging any scholarly or public consensus participate in determining those legacies. It may well be that by now Andrew Jackson is known as much for the Trail of Tears as for expanding democracy, and perhaps that is appropriate. Arthur M. Schlesinger, Jr., made no mention of Indian Removal in his classic The Age of Jackson in 1945, but by 1989 he had come to agree that the omission — common to Jackson scholars of the 1940s — was “shameful.” But as the Witt Committee noted, our understandings of someone’s legacies “do not change on any single person’s or group’s whim; altering the interpretation of a historical figure is not something that can be done easily.” For all that Americans have learned about Thomas Jefferson’s racial views and his slaveholding in recent decades, his principal legacies — among them writing the Declaration of Independence, articulating enduring principles of rights and freedom, steering a young country through intense political conflict as president — remain unassailable. We will have to learn to live with all of him.

    The Witt Committee also asked whether the criticisms made of a historical figure were widely shared in his or her own time — or if they are a latter-day imposition of our own values. The difference is not trivial. As late as 2012, when Barack Obama finally endorsed gay marriage, most Democrats still opposed the practice. But norms and attitudes evolved. Today most Democrats think gay marriage unremarkable, and the Supreme Court has deemed it a constitutional right. It might be fair to condemn someone who in 2020 seeks to overturn the court’s decision, but it would be perverse to label everyone who had been skeptical of gay marriage ten years ago a homophobe or a bigot. Historians must judge people by the values, standards, and prevailing opinions of their times, not our own. No doubt we, too, will one day wish to be judged that way. Yet the pervasive impulse these days to moralize, to turn analytical questions into moral ones, has also made us all into parochial inquisitors.

    It is also worth asking what harm is truly caused by retaining someone’s name, especially if the person’s sins are obscure or incidental to his reputation. Many buildings and streets commemorate people who are largely forgotten, making it hard to claim that their passing presence in our lives does damage. A federal court barred Alabama’s Judge Roy Moore from placing a giant marble Ten Commandments monument in the state judicial building, but the phrase “In God We Trust” is allowed on coins because in that context it is considered anodyne and secular — wallpaper or background noise — without meaningful religious content. By analogy, the preponderance of place names hardly evokes any associations at all. They are decorations, mere words. The State University of New York at Buffalo removed Millard Fillmore’s name from a campus hall because Fillmore signed the Fugitive Slave Act. But it is doubtful that Fillmore’s surname on the edifice had ever caused much offense, for the simple reason that almost no one knows anything about Millard Fillmore.

    Then, too, as Peter Salovey initially suggested about Calhoun, a person’s name can sometimes be a useful and educational reminder of a shameful time or practice in our past. In 2016, Harvard Law School convened a committee to reconsider its seal, which depicted three sheaves of wheat and came from the family crest of Isaac Royall, a Massachusetts slaveowner and early benefactor of the school. While the committee voted to retire the seal, historian and law professor Annette Gordon-Reed and one law student dissented, arguing that keeping the seal would serve “to keep alive the memory of the people whose labor gave Isaac Royall the resources to purchase the land whose sale helped found Harvard Law School.” Historical memory is always a mixed bag — if, that is, we wish to remember as much as we can about how we came to be who we are. Sometimes, a concern for history is precisely what warns us not to hide inconvenient or unpleasant pieces of the past.

    Often context can serve the purposes of promoting antiracism or other noble principles better than erasure. Museums and other forms of public history are experiencing a golden age. Historic sites that once lacked any significant information for tourists are being redesigned to satisfy the hungriest scholar. Plaques, panels, touch-screen information banks, and other displays can educate visitors about the faults and failings — as well as the virtues — of the men and women whose names appear on their buildings and streets. Addition — more information, more explanation, more context — may teach us more than subtraction. But even here, there are limits. A recent show at the National Gallery of Degas’ opera and ballet pictures did not mention that he was a virulent anti-Semite. Should we care? If the museum had “contextualized” the tutus with a wall caption about Captain Dreyfus, the information would not have been false, but it would have been irrelevant, and in its setting quite strange. We don’t need asterisks everywhere.

    Above all, renaming should be carried out in a spirit of humility. The coming and going of names over the decades might inspire in some a Jacobin presumptuousness about how easy it is to remake the world. But what it should more properly induce is a frisson of uncertainty about how correct and authoritative our newly dispensed verdicts about the past truly are. “We readily spot the outgrown motives and circumstances that shaped past historians’ views,” writes the geographer David Lowenthal; “we remain blind to present conditions that only our successors will be able to detect and correct.” Public debates and deliberation about how to name our institutions, how to evaluate historical figures, and how to commemorate the past are an essential part of any democratic nation’s intellectual life and political evolution. Our understandings of our history must be refreshed from time to time with challenges — frequently rooted in deeply held political passions — to widely held and hardened beliefs. There are always more standpoints than the ones we already possess. Yet passions are an unreliable guide in deriving historical understanding or arriving at lasting moral judgments. In light of the amply demonstrated human capacity for overreach and error, there is wisdom in treading lightly. Bias is everywhere, even in the enemies of bias. Nobody is pure.

    The Student

    He acts it as life before he apprehends it as truth.
    RALPH WALDO EMERSON

    Entering an unfamiliar classroom for the first time, met by a cacophony of greetings, shuffles, and the flutter of unsettled nerves, a student experiences a particular strain of vertigo — a kind of thrownness. Unbalanced, she glances about, wondering if her new peers are already friends, if they know or care more about the subject than she does, if the professor will command attention or beg for it. She wonders also about the subject — how it will stretch or resist or entice her; and what personal qualities, as well as intellectual qualities, she ought to bring to her studenthood. She must wait for an internal order to develop, and for the nerves to slow gently into a new rhythm. The experience catapults her from the grooves of ordinary life. She has the sensation of a swift transit.

    That is what learning is meant to do. The developments that will occur in that homely but exotic room over those few months ought to confuse, not confirm, her. Each time she enters the classroom she must again try to recapture the vertigo and recover the instability — to distance herself from herself. She cannot learn, or learn well, if she conceives of that place and those hours as a sphere in which to calcify who she already is. Alienation is essential to study. The classroom is a community of the alienated. Genuine learning demands courage and adventure. The room must be a realm apart, a space with a strange energy and a different gravity — a foreign country, populated by real and imagined strangers. Discomfort is its air.

    The comfort of one’s own couch, then, is a bad place to set up school. And so the question arises: Is remote learning possible? Is the setting of study a matter of indifference to the activity of study? The question was relevant before Covid19 bleakly introduced the age of Zoom. In the United States over the past fifteen years, enrollment in online courses has more than quadrupled. This trend, the success of which was meteoric, was a response to the equally monumental and endlessly mounting cost of college for the average student. In America, higher education now costs students thirteen times what it did forty years ago, and that price has swelled while state funding for public universities has decreased. As tuition has risen, returns on investment have dropped. The pioneers of MOOCs — “massive open online courses,” for those born too late to remember the old country in which they required introduction — explained that this disconnection is due to the uselessness of traditional curricula for the contemporary workforce and the “revolution in work.” All this reading and writing, all this training in thought — all this humanistic exploration — seemed impractical, and practicality has increasingly become the standard of judgment. If not for a job, then for what? And so they developed cheaper, skills-based models. Those models are online, a “convenience” which proclaims that the classroom, like the libraries cluttering university campuses, is redundant, and even archaic.

    For years now, Coursera has offered a fully online master’s degree from the University of Pennsylvania in computer and information technology for one-third of the cost of the on-campus version. MIT boasts a supply chain management degree which begins with an online segment on edX (a global non-profit founded in 2012 by Harvard and MIT). Similarly, Arizona State University’s Global Freshman Academy kicks off with a virtual first year. In both the Arizona State and MIT programs students complete the initial leg of their degree online and then are invited to apply for the on-campus portion at a fraction of its usual price. edX, like most similar platforms, considers education the process through which students are armed with tools to earn money. From its website: “[we are] transforming traditional education, removing the barriers of cost, location and access…. [our students are] learners at every stage, whether entering the job market, changing fields, seeking a promotion or exploring new interests.” It tells us “edX is where you go to learn.” A professionalized application of the term, to be sure; but because of the overwhelming success and reach of these platforms, they have largely succeeded in redefining “learning” and “education.”

    Anant Agarwal, the founder of edX, called 2012 “the year of disruption” for higher education. Disruption indeed, and on what a scale! In its first year, edX had 370,000 students. Coursera, founded in January of 2012, reached over 1.7 million students within just a few months, and in the same stretch of time formed partnerships with thirty-three of the most elite institutions in higher education, including Princeton, Brown, Columbia, and Duke. Often when people talk about education now, they mean education as edX defines it. And when they talk about it in the years of the pandemic, they may be referring to the only pedagogical means possible. Imagine life in lockdown without the internet! And yet one must ask, in this field as in many other fields of contemporary life, at what price convenience?

    Of course, a certain kind of learning can be done online. Knowledge comes in many types and has many purposes and brings many satisfactions, and many people will find that better jobs and better lives will result from the acquisition of what can be obtained digitally. These are not trivial considerations. But the technological expansion of educational resources may also come with a significant cost. The critique of the digitalization of life is not Luddism. It is the only responsible way to reap the benefits of digitalization, and it is an intellectual duty now that there is no going back. It would be foolish not to utilize the new technological opportunities, except when we utilize them foolishly. So what, exactly, can a screen capture and transmit, and what can it not capture and transmit? If we are serious about the supreme value of education for the individual and society, we must not passively acquiesce in every online excitement or remain worshippers in the church of “disruption.” It sounds almost silly to say, and yet many realms of contemporary life often ignore this truth: there are significant things that numbers cannot measure.

    One way to evaluate the new technology is by the old purpose. If the new technology cannot serve the old purpose, and if we continue to believe in the old purpose, then the new technology must be judged by its limits. Learning has a long history, at all its levels. We know a lot about it. And by the standard of what we know about it, we have reason to ask whether digital learning is, strictly speaking, learning at all. Perhaps, owing to the constraints it imposes upon the student and the teacher, it is something else entirely: perhaps it is merely training, the communication of useful information, which may be structurally similar to learning, but which has the opposite mental and spiritual effect.

    That there is a difference between learning and training, between meanings and skills, has been noticed before, in a variety of traditions and eras. Here is an ancient example. The distinction is alluded to in the opening chapter of Pirkei Avot, or Ethics of the Fathers, a tractate of the Jewish legal text known as the Mishna. This particular tractate has no laws; it is an anthology of rabbinical wisdoms. Here it is twice stated “aseh lecha rav,” or “make for yourself a teacher” — in the sixth article of the first chapter, “make for yourself a teacher, acquire for yourself a friend, and give every person the benefit of the doubt”; and again, ten articles later, “make for yourself a teacher, and avoid confusion, and do not become accustomed to estimating tithes.” That the imperative appears twice indicates — I am reasoning here in the old Talmudic way — that each instance must refer to a different type of authority.

    In both cases the word “rav” is used. This is the traditional term for the figure to whom one turns for legal rulings, and also for the teacher with whom one studies. The same person can serve both functions and traverse the distance between the two roles. Less arcanely, think of a professor who offers expert insight to a journalist before meeting with a student about her doctoral thesis: she could have been discussing the same subject in both places, but her tonal shift, and the change in the scholarly level of her intervention, would be considerable. In both roles she wields authority, but while speaking to the journalist her authority is meant to be the final word, whereas with her student it ought to stimulate curiosity and conversation.

    The rav who is discussed in the latter dictum has the sort of authority that obliterates doubt. This figure gives rulings, and dispositive answers to practical questions, and the listener takes note and acts accordingly. The students of this rav are not provoked, they are steadied. They have heard the stabilizing certainties of an expert — no thought is required of them, just trust and a willingness to follow instructions. This authority is different in kind from the first sort of rav, the one mentioned just before the “friend” and just after an article that treats of relations between wives and husbands. Extrapolating from this sequence — husband-wife; student-teacher; friend-friend — the rabbis establish that, after family, this sort of teacher-student relationship is the most intimate form of companionship, more intimate even than friendship.

    In both these articles the same unexpected verb is used: one must make a teacher. There are, predictably, centuries of argument in the Jewish tradition over exactly what this making means. Teachers are not found, they are made; and not only are they made, but they are made together with people other than themselves — by their students. Note that “make” is immediately distinguished from “acquire” (“make for yourself a teacher, acquire for yourself a friend”). “Acquire” intimates that a friend comes readymade, as it were — prepared for friendship. Both partners decide independently to commence the friendship. But a teacher cannot be a teacher unless he (the Mishna assumed that all students and teachers were male) is made into one by a student. This is a more radical and obscure observation than that one becomes a teacher only through teaching, by means of practice. It is well known that no textbook or graduate study can inculcate the peculiar sensitivities that a teacher must develop: that only the work of teaching does that. But the Mishna makes a stranger and more stringent demand upon the teacher: he owes his status to a collaboration. His pedagogical certification derives from a personal relationship with the individual who comes to him for knowledge. Closeness and trust, intimacy and vulnerability: these are the terms of teacher-making.

    These conditions are not optional but obligatory: the article also establishes that it is the student’s duty to make a teacher. One must not simply wait for a teacher to turn up, and one must not try to learn alone. Maimonides, whose reading of the ancient injunction is echoed by subsequent commentators, strikingly declared that the student must secure a teacher even if the teacher is not intellectually superior to the student. Not your equal or your better; just your interlocutor. This is an extraordinary refutation of our commonplace assumptions about pedagogical qualifications. This ideal of study is not hierarchical, it is dialogical. (The Jewish tradition has plenty of hierarchical reverence for teachers in other places.) Dialogical study is always superior to solitary study. In a significant sense, solitary study is oxymoronic.

    If a teacher does not have to be smarter than his student, then cleverness and even erudition are not the most important qualities in the setting of study, or in the classroom. What matters most, it seems, is that it be a human encounter, an exchange of intellectual electricity. Maimonides’ notion has humbling implications for both teachers and students. Clearly, it humanizes the teacher, whom we may otherwise be tempted to cast as an infallible sage. In this scenario of study, the teacher, too, is vulnerable. And it also reminds the young and the bright that precocity is beside the point: in the classroom, obtaining knowledge and understanding not yet acquired is the overriding objective. One must not come to class eager to glitter. A student who is mesmerized by her own rhythms and insights will not grasp the subject and enter its new world, which is what study is. Better to be empty and attentive than clever and ahead. Learning is travel. “When you travel,” Elizabeth Hardwick observed, “your first discovery is that you do not exist.”

    All of which is to say that education, I mean of the deepest questions and themes, is first and foremost an experience.

    The difference between the first aseh lecha rav and the second is the difference between training, which transmits a practical skill, and learning. Skills make one useful; they provide the security of a straightforward purpose. The goal of training is problem-solving; and since life is full of solvable problems, two cheers for training. But not all of our problems are of the solvable, or easily solvable, or obviously and familiarly solvable, kind. Problems of meaning do not have technical or replicable solutions. Learning, therefore, is the opposite of training. It is a different sort of preparation for a different sort of difficulty. Learning acclimates students to the looming awareness that life is not governed by simple laws clearly stated. It is messy, murky, essentially contested, often mysterious. In the realm of meaning, neatness is not natural. (Though there have been philosophers who have thought otherwise.)

    It is certainly possible for trainees to train in the spirit of study — for example, through the rigors and drudgeries of a legal education, a law student can be stimulated by the philosophical implications of her casebooks. It is also possible for disciples to study in the spirit of a trainee: to master the weeds and memorize the footnotes. This is Casaubonism, or humanism degraded, robbed of its soul — in sum, humanism minus doubt. True study does not obliterate doubt. The longer one spends inside a new world, the more acutely one recognizes that there are facets of it that can never be wholly penetrated. And the deeper into the world one goes, the more exasperating and incontrovertible that truth becomes. Moreover, the eventual comparison of another world with our own is itself one of the classical sources of doubt. Authority in a field does not confer certainty, as the greatest scholars know.

    It is impossible to become comfortable in an alien world without a guide — it is impossible to learn without a teacher. Even Emerson, the learner par excellence, whose enchanted mind thrived in unbalanced confusion and ecstatic chaos, had teachers whom he imitated, revered, differed with, and finally abandoned — but only after having been transformed. Emerson, to be sure, was a genius — but again, a teacher does not have to be smarter than her students. She simply has to have knowledge that they do not have, and a willingness to deliberate together. The distance between what a teacher knows and what a student knows will always be considerably smaller than the distance between what a teacher knows and what it is possible to know. No matter how many books and manuscripts and archives a scholar discovers and masters, there will always be secrets unknown, always someone who knows something the expert does not (even if this other person knows less than she does). And so the amount of information she has mastered will never be as essential to a learner as the attitude she has towards what is strange. It is the development of this attitude, an acquired openness, that all learners have in common.

    The objective of study is not self-expression. A genuine student must quiet her own rhythm in order to focus intensely on the rhythms of an alien system — another person, another religion, another civilization — they all have their own rhythm. Still, quieting one’s own is not the same as forgetting it. A student is not a blank slate; she brings her experiences with her to the classroom; it is after all her own mind, her own self, that she is cultivating by means of study. But she does not hold them at the forefront of her mind while she works. She must never find herself more interesting than what she studies. If she captivates herself, she is captive to herself. She is self-shackled. Instead she must strain to allow her subject to set the pace of study. If she is to understand thoughts that are not her own and lives that are not her own, the question that she must ask is how they are different from her, not how they are the same.

    The exploration of what is alien is not always exciting. In some stages of study it will almost certainly be tedious. Everything worth understanding demands discipline. There will be drills: amo, amas, amat, amamus, amatis, amant, flashcards, charts, red pens. These drills are not stimulating, but serious intellectual stimulation is impossible without them. They are the humanist’s training — training-for-learning, training that is only preparatory, that makes the student fit for the transit to a different and non-utilitarian plane. Drills are not learning, the way stretching is not running, but try running without stretching. 

    The result of this training for learning is a ready mind, a mind primed for and open to the unfamiliar and the alien. These monotonous exercises are the scaffolding that will hold and support the new universe into which the student ventures. Openness is finally the greatest quality of the learner. A student who is constantly comparing an alien grammar to the grammar to which she is accustomed will never experience the tingly mental reorganization particular to thinking in and about a new vocabulary. This openness is a peculiar kind of emptiness: it is rigorous emptiness, well-equipped and well-appointed, a tensed readiness to be filled in. It withholds judgment only so as to judge more correctly later, which is especially necessary when studying ideas or figures for which the student lacks natural sympathy. After all, the only negative evaluation that has intellectual integrity is an evaluation made after an intimate understanding has been developed — in the way, for example, that Isaiah Berlin for decades dedicated himself to the study of his intellectual opposites.

    Why is this capacity useful? The question is often asked. It is a reasonable question, insofar as people deserve to be given reasons for humanistic exertions, but it is also a crass question, because it makes utility paramount. Answers have been given to the question on its own grounds: that the study of art, history, and philosophy can make the difference between brilliant lawyers, politicians, and doctors and ordinary ones, because the more professionals know about human existence, the wiser they will be when their professional activities may require a gloss of wisdom. All this is true and familiar: these are the apologias that adorn the welcome catalogs of liberal arts departments. These practical rationales for humanistic study are further proof of the infiltration and triumph of edX’s flattened “education.” The defense of learning in the terms of training, the justification of the humanities in economic and vocational terms: this is the hemlock that the humanities (and the arts more generally, starved for funds) now serve and swallow. Recall the English majors now flourishing at McKinsey. No, learning for its own sake is the only justification that treats the subject on its own terms — and so learning for its own sake is the only sake there is. In that spirit we may gladly acknowledge the social and personal “utility” of humanistic pursuits, as it is presented by writers and historians and philosophers, since it will inevitably inform and enrich the lives of students and teachers. Anyway, spiritually speaking, the enrichment of human life is useful.

    The obsession with outcomes is hard to resist in an outcomes-based culture. It may penetrate the most impractical of pursuits. In her admirable book Lost in Thought, Zena Hitz, a tutor at St. John’s College, bears witness to one iteration of this phenomenon: “[as a professor] my focus shifted — without my noticing — to the outcomes of my work rather than the work itself. I had lost much of the ability to think freely and openly on a topic, concerned lest I lose my hard-won position in the academic social hierarchy.” Her lament brings to mind Nietzsche’s strictures about the professionalization of philosophy. “It is probable,” he wrote in 1874 in On the Use and Abuse of History for Life, “that [a professionalized philosopher] will attain cleverness, but he will never attain wisdom. He compromises, calculates, and accommodates himself to the facts.” He conducts research in order to publish, which he does in order to maintain a reputation for publishing, which he does in order to keep his job. The wonder and the vertigo disappear from his work.

    Pardon the unreconstructed idealism, but there are higher reasons.

    “What do you think about translation?”

    She asked me that question a few months after we met. In that time I had developed a familiarity with the cadence of her thoughts, so different from mine, gentle and complicated, and always swaying, studying, interpreting. This ruminative cadence was the first thing I noticed about her. I knew she would introduce me to a new rhythm, a different pace of thought. My pace unnerved her: it was too fast and forward; she got spooked. Slow down, slow down. It was difficult for me to slow down. I wanted to learn it from her. Too early, and incessantly, I would ask her the questions that occupied me because I wanted to hear them played back at her tempo. It would transform them, make them strange, open them up. Even the words we both use we do not use in the same way. She has cultivated her own relationships with language.

    “What do you mean?”

    (It was an act of generosity that she answered me instead of concluding that I wouldn’t be able to understand her, and then withdrawing from me. That is a particularly bitter kind of rejection. Once, years ago, a man pulled back from me and muttered, “No, no, I shouldn’t have tried to tell you.” I remember where I was standing when he said that.)

    “I mean — well, if you’re in love with someone and he’s asked you to explain a thought that you’ve had, or a fear or anxiety or any example of the many sorts of things that are specific to you, but you know he can’t understand it because it’s the kind of thought he wouldn’t have or even have imagined was possible (not because he’s stupid or self-centered, but because it just isn’t within his framework), you have to translate it for him. Is that bad? If he can’t understand it, does that mean he can’t understand me? That he can’t really love me if translation is necessary? … I suppose it’s all a question of degree.” (It was so characteristic that she added that last thought, a signature suffix.)

    Her trust reminds me of an exchange I had with a writer who asked me whether her use of esoteric language, of arcane foreign words, in an essay that she had written made it incomprehensible to uninitiated readers. I reread it and responded: Many of the terms you used felt foreign, like the language of an alien tradition or an exotic religion. I like that feeling. For the duration of your essay I could develop an acquaintance with the rhythms of the tradition of which you are an emissary. It is the rhythm that would have been lost in translation. You were right to be uncompromising about a taste of the original. Since you didn’t define those words, which would have ruptured or mangled their melody, their verbal music remained intact, even if I couldn’t explain in my own language exactly what you were saying. If someone who has never danced asks you what the sensation of dancing is like, the best you can do is show them. I trusted that you would compose your essay in such a way that it would eventually allow me to understand your meaning, and I was grateful that you trusted me to savor what I did not yet understand. You worry about uninitiated readers, but your essay is their initiation, and initiation is education.

    But books are not people. Isn’t reading a form of remote learning, too? Isn’t a page somewhat like a screen — a blank surface for language to occupy?

    Emerson was a radical reader. Ravenously he sucked the souls of writers out of their books. His great biographer Robert Richardson marveled that “it sometimes seems as though no book published from 1820 until his death evaded his attention completely.” On its face, Emerson’s bookishness is odd given that he worshiped activity and had contempt for “meek young men grow[ing] up in libraries.” But Emerson’s reading was charged, active. It offered entry to a symposium out of time. Reading works of genius, he wrote, one “converses with truths that have always been spoken in the world and becomes conscious of a closer sympathy with Zeno and Arrian, than with persons in the house.” A relentless thirst for the nectar of intellectual companionship informs Emerson’s writing. This is what permitted him to read the way he read. He was able to coax what he sought from the pages of a book because of the enthusiasm (his holy word) that charged his entire approach to living. Wrestling with intellectual and spiritual possibilities in conversation with others was a familiar exercise for Emerson. He took this method, this experience, this dialogical energy, to his books, which he believed were as sure a portal as a classroom.

    Yet he never mistook a book for a person, or recommended reading as an adequate substitute for teaching, lecturing, conversing — for the experiential dimension of study. (“Books are for the scholar’s idle times. When he can read God directly, the hour is too precious to be wasted in other men’s transcripts of their readings.”) But if a book is an example of remote humanistic study, what are we to say of digital remoteness? The text or the image is there on the screen, and so is the tiny apparition of the talking teacher, hovering above it. Ideas in some form may certainly be imparted. But is this the full transit to another world that constitutes the fulfillment of humanistic education? Isn’t it rather the case that the screen leaves one where one began? That it is a buffer, a fancy buffer between the student and the world?

    A screen is too familiar to propel a student from her deepest grooves, particularly for a student who has never left her couch. On a screen everything, no matter how vividly presented, is flattened and made less real, and all the realms are compressed and equalized into a comfortable, closable haze. Most importantly, all the Zooming in the world has not established the screen as anything but a simulacrum of human interaction, a dim facsimile of pedagogical experience. One is no more than a partial student when one has no more than a partial teacher, or no teacher at all. Zooming is a stopgap measure that leaves one longing for actual presence, which is the condition of actual learning. It is a lot better than nothing, but nothing must never be the standard.