Another Life

    You like to read biographies of poets

    You rummage through another life

    That sudden shock

    of entering another life’s dark forest

    But you may leave at any moment

    for the street or the park

    or from a balcony at night

    you may gaze at stars

    belonging to no one

    stars that wound like knives

    without a drop of blood

    stars pure and shining

    cruel

    The Fashions in Trauma

    In articles that appeared in the American Journal of Psychiatry in 1969 and 1971, three army psychiatrists boasted that the policy of embedding psychiatrists in Vietnam’s combat units had been a wonderful success. And so the army’s statistics seemed to show. Peter Bourne, who headed the army’s psychiatric research team, announced that henceforth “psychiatric casualties need never again become a major cause of attrition in the United States military in a combat zone.” It was an assertion that provoked fury among many of those who had been sent to fight in the jungles of Southeast Asia. Soon enough, that anger had tangible consequences of a profound sort.

             Bourne was not the first psychiatrist to claim that his profession had the capacity to ward off the effects of war on the troops. As entry into the Second World War loomed, American psychiatry mobilized. It persuaded Washington that, if the government wanted to avoid the epidemic of shell-shock that had disabled so many soldiers in the First World War, it should allow the profession to screen out the psychiatrically vulnerable. In that way, the damage to morale and to the army’s ability to fight could be avoided, as could the waste of resources involved in training soldiers incapable of enduring combat.

             One and three quarter million potential recruits were rejected on psychiatric grounds. It was a triumph of science and forward planning. Except that once combat began, it swiftly became apparent once again that modern industrialized warfare was not ideally suited to the maintenance of mental health. Placed under sufficiently appalling stress, the soldiers of the greatest generation broke down between two and three times as often as those who had fought in the First World War. At war’s end, fifty thousand veterans, not all of whom had actually faced combat, languished in mental hospitals, and another half a million received pensions for psychiatric disabilities. Trauma and psychological stress, it seemed, could cause even the most apparently stable individuals to break down, to become maddened with fear, disgust, and horror.

             The traumatic effects of modern military conflict had, in fact, surfaced many decades before even the mass slaughter and maiming that marked the First World War. In the aftermath of America’s Civil War, the newly emerging subspecialty of neurology expected that much of its practice would consist of treating soldiers with obvious wounds of the brain and the nervous system. But such casualties were far outnumbered by veterans who displayed no obvious physical pathology, but who insisted that they were ill and incapacitated. By the end of the war, eight percent of the Union army alone, some 175,000 men, were found to be suffering from “nostalgia” (usually a synonym for depression or panic), or from a variety of “nervous” complaints extending all the way to outright insanity.

             Nobody thought to suggest that these men suffered from trauma, because in those years the word retained its original meaning. Derived from the ancient Greek τραῦμα, a word related to others meaning twisting, bruising, and piercing, it had entered the English language in the late seventeenth century and was used to refer to physical wounds. Its extension to encompass emotional wounds would not occur till the late 1880s, when the French neurologist Jean-Martin Charcot and then Sigmund Freud began to employ it in a more extended sense to mean psychological responses to deeply distressing events, and to suggest that the origins of the neuroses lay in shattering emotional disturbances.

    Even in contemporary medicine, of course, the original meaning persists in the designation of certain hospitals as “trauma centers,” that is, places that treat physical injuries of sudden onset and severity that require immediate interventions to save life and limb. Though the psychiatric casualties of the two World Wars gave greater cultural salience to the notion of psychological trauma, the word’s primary reference to physical injury still dominated. For example, when the American Psychiatric Association published the first edition of its Diagnostic and Statistical Manual in 1952, psychological trauma was entirely absent from its pages. The only mentions of trauma referred to injuries caused by force or by seizures that were experienced by patients treated with electroshock.

             If the concept of psychological trauma enjoyed greater cultural and professional salience in the following two decades, this largely reflected the fact that psychoanalysis had come to dominate out-patient and academic psychiatry, at least in the United States, and to have an ever-greater influence in American culture. Freudians gave pride of place to the concept of repression and its traumatic effects in accounting for mental disturbances. Trauma in the sense of psychological injury entered common parlance.

    In this changed context, Peter Bourne’s suggestion that traumatic injuries were largely non-existent among the grunts who fought in Vietnam soon came under attack. Embittered veterans who were among the vocal minority that joined the anti-war movement fiercely rejected that pollyannaish narrative. On the contrary, they insisted, their experiences had left them traumatized, and they mobilized to secure official recognition of the damage that the war had done to their mental health. Sleepless nights, nightmares, flashbacks to scenes of sickening horror, unpredictable and uncontrollable episodes of anger, panic attacks, blackouts, emotional numbness, depression — a host of ills was blamed on the war by those who regarded it as an abomination, a crime against humanity in which they had been forced to collaborate.

             Crucially, amid a rising tide of anti-war sentiment, these veterans soon secured allies among the psychiatric establishment, especially two prominent psychoanalysts and psychiatrists, Chaim Shatan of New York University and Robert Jay Lifton of Yale. Shatan and Lifton joined forces to secure an official psychiatric diagnosis that recognized the reality of these men’s suffering. Shatan published a piece in The New York Times in 1972 in which he coined the term “post-Vietnam syndrome.” He and Lifton insisted that its symptoms — the product of psychological damage inflicted on soldiers by a vicious war — could emerge and persist long after the trauma that caused them. Within eighteen months, Shatan and Lifton were campaigning to have the diagnosis recognized as a distinctive form of psychiatric illness by the American Psychiatric Association.

             They chose a propitious moment. For more than a decade, evidence had been accumulating that psychiatrists had a terrible time agreeing about diagnoses. For example, one important study comparing diagnoses of patients in London and New York found that while Americans called sixty-two percent of their patients schizophrenic, doctors in England gave only thirty-four percent that label. On the other hand, while less than five percent of New York patients were diagnosed with depressive psychoses, the corresponding figure in London was twenty-four percent. But studies like these were buried in academic journals and monographs, and drew little attention outside the profession.

    Then a sensational article appeared in Science in January 1973. It recounted an experiment in which a series of pseudo-patients presented themselves at a variety of mental hospitals claiming to hear voices but otherwise acting normally. All were admitted; all but one were diagnosed as schizophrenic, and they were generally institutionalized for weeks before being released as schizophrenics in remission, a devastating diagnosis. Massive media attention to the study, authored by the Stanford social psychologist David Rosenhan, held the profession up to ridicule, and in a panic the American Psychiatric Association formed a task force to revamp its whole diagnostic system and to ensure that such embarrassing findings would not be repeated. Nearly a half century later, a New York investigative journalist, Susannah Cahalan, showed that Rosenhan’s study was one of the great scientific frauds of the twentieth century. But it was a highly successful fraud, one that transformed psychiatry in ways that are still evident today.

             The charge of reworking psychiatry’s diagnostic system to deal with this crisis was entrusted to a Columbia psychiatrist, Robert Spitzer, who had trained as a psychoanalyst but by then had become hostile to its doctrines. He was given almost carte blanche in selecting the members of his task force, in part because the psychoanalysts who then dominated academic and high-status psychiatry regarded the assignment of diagnostic labels as silly and unworthy of their time. He and those he selected were in charge of developing and writing the new edition of the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM, as it was called for short). It was Spitzer whom Shatan and Lifton and their allies in the veteran community targeted, and their efforts were in most respects remarkably successful.

             Publicly, Spitzer and his colleagues on the task force presented themselves as driven by data and science. The claim to scientific objectivity was a pose, an ideological proclamation that was sharply at odds with the way in which the DSM task force actually conducted its business. In reality, Spitzer was a politically savvy operator, and the decisions of his committee rested upon the same sorts of clinical intuition that its members attacked as unscientific when proffered by psychoanalysts. Consensus was created by horse-trading and then ratified by votes of the members. Yet in trying to convince Spitzer to agree with them, Shatan and Lifton faced a difficult problem. The majority of the task force was determined to purge everything in the diagnostic manual that made reference to psychoanalytic ideas and claims. To do so, it had adopted a resolutely atheoretical stance on the origins of the various disorders it recognized as varieties of mental illness, and instead undertook to define them via checklists of symptoms. Creating a new diagnosis of a mental disorder whose origins lay in trauma and its impact would represent a stark exception to the approach that the task force otherwise embraced. Worse yet, the proposed category of post-Vietnam syndrome would entail a particular moral and political stance toward the war, one that was intensely controversial.

             Spitzer, it turned out, was highly sensitive to political pressure. In particular, unlike some of the more obdurate members of the task force, he understood that the dramatic changes the group was preparing had to pass muster with the membership of the American Psychiatric Association, whose ranks were dominated by clinicians, many of them psychoanalysts. Hence he was willing to concede principle in order to make room for the whole range of problems that brought patients into psychiatric waiting rooms, if such a concession proved necessary to gain acceptance of the revised manual. Yet his willingness to embrace the arguments for a trauma-based pathology did not extend to accepting a politically charged label. What materialized, therefore, was a much broader term: post-traumatic stress disorder was born, commonly known by its acronym, PTSD.

    No field tests and no tests of reliability attended the birth of this new disease. In this way it differed from virtually all the other hundreds of diagnoses that made up the new third edition of the manual, commonly referred to as DSM III. The official recognition of PTSD marked a significant broadening of the conditions brought into the ambit of psychiatry, but just how much of an expansion it would create was not immediately apparent. At this stage, the traumatic stress that was seen as precipitating the disorder had to be a major or life-threatening event that would “evoke significant symptoms of distress in almost anyone.” One obvious example was exposure to the horrors of combat, but others were recognized as similarly traumatizing and liable to cause breakdowns. Rape and sexual assault, witnessing one’s parent or child being shot, and being victimized by a major natural disaster were obvious comparisons. Proponents of the new diagnosis argued that for some people, though they were a minority, traumatic experiences such as these produced involuntary, recurrent, and intrusive memories, hypervigilance, emotional numbing or outbreaks of explosive anger, and persistent and deeply troublesome emotions. These in turn often produced reckless or self-destructive behavior, or violence directed at the self or others.

             With America embroiled in foreign wars from the 1990s onwards, the military and the Veterans Administration have of necessity had to grapple with a burgeoning group of veterans claiming disability benefits and treatment for PTSD. By the late 1980s, the quarter-century-long Freudian domination of American psychiatry that had marked the years after 1945 had almost completely faded away, and biology ruled the roost. The psychoanalytic perspective that had animated Shatan and Lifton’s campaign was largely sidelined in the United States, replaced by a search for neurological and biological foundations for PTSD. Many veterans were tempted to embrace the idea that PTSD was a brain disorder because, like comparable myths about the chemical origins of depression, it made the condition “real” and medical, and might thus reduce some of the surrounding stigma. The logic of that explanation suggested to those in charge that PTSD would best be treated with the magic bullets of modern psychopharmacology.

             But, as with the drugs used to treat depression, bipolar disorder, and schizophrenia, those magic bullets, when shot at PTSD, turn out to be not so magical after all. Despite little systematic research to support their use, antidepressants such as Zoloft, Prozac, and Paxil were prescribed to deal with the symptoms of PTSD. Weak evidence in their favor persuaded the FDA to license them in 2002 as a first-line treatment of the disorder.

    A series of subsequent reviews of research on the pharmacological treatment of PTSD, the most recent of which was published by Matthew Hoskins and his colleagues last year, have repeatedly reached similar conclusions. As in the treatment of depression, drugs prove more effective than placebo in treating PTSD, but the differences, while statistically significant, are clinically quite modest, and the drugs fail to help many patients. Around forty percent of patients fail to respond to medication therapy, and of the remainder, the majority get some relief of symptoms, but not complete remission. At best, research to date shows that pharmacological interventions are a useful adjunctive therapy for patients with PTSD, not the gold standard of treatment they were once heralded to be.

             So back to psychotherapy — not Freudian analysis, but the briefer forms of treatment provided by cognitive-behavioral therapy and its analogues. Often these involve prolonged exposure to materials designed to flood patients with memories like the ones that originally produced their trauma, but in safe surroundings. The expectation is that these recollections will, over time, desensitize them, wean them from trauma-induced bad behaviors, and transform the original horrors into something like fiction. Unfortunately, that does not always happen. The Israeli army conducted a controlled study of the technique on soldiers traumatized by the intifada, only to discover an increase in the soldiers’ psychotic symptoms after the flooding. The Cochrane Library, which publishes high-quality independent assessments of all sorts of medical interventions, recently surveyed the available evidence on the efficacy of psychological treatments for patients with PTSD and the substance abuse that commonly accompanies it. The paper’s authors comment on how clinically challenging such cases are, and they conclude that here, too, the benefits are modest, with very high dropout rates in all trials and no evidence that the forms of psychotherapy they examined provided any benefit in the treatment of the substance abuse. Of all these interventions, exposure therapy is the least popular and produces the highest dropout rates. One expert has dubbed it “among the worst possible treatments” for trauma. In sum, though helpful for some, the various forms of psychotherapy are of only limited utility.

             If the proximate cause of the inclusion of PTSD in psychiatry’s diagnostic manual was pressure from some Vietnam veterans, and granting that a substantial fraction of those who have served in the armed forces from 1990 onwards have eventually been diagnosed with PTSD (estimates generally range north of ten percent), still the diagnosis that entered the DSM potentially encompassed a much broader range of precipitating events than combat exposure. As soon became apparent, the inclusion of a trauma diagnosis in the new DSM opened a Pandora’s box. 

    The first manifestation of the potential for diagnostic creep that the new category invited did not take long to surface. Enterprising psychiatrists and psychologists soon claimed to have uncovered a whole new category of victims, those who remembered not too much, but too little. Freudians had long argued that the origins of mental illnesses were murdered memories that refused to stay dead, their repression triggering the patient’s symptoms and persisting till the probing of the analyst made the unconscious conscious. Proponents of the recovered memory syndrome deployed an analogous argument: they claimed that they had found a host of new patients, people (often children) so traumatized that they had buried the memory of the horrors they had experienced, and now suffered from seemingly unrelated mental disturbances. Perhaps the most famous explication of this argument, and certainly the most commercially successful, was The Courage to Heal, the best-seller written by Ellen Bass, a poet, and Laura Davis, a short story writer, in 1988. It sold three quarters of a million copies. It compiled an extraordinary laundry list of symptoms that could be indicative of past abuse. Some examples included feeling powerless or unmotivated; lacking interest in life; neglecting or minimizing one’s talents. Borrowing from Freud’s notion of resistance, the book suggested that even denying that one had been the victim of abuse could be a sign of past abuse. Accounts of abuse, Bass and Davis asserted, were never false, though their acknowledgement was complicated by the repressed memories that the abuse provoked.

    As Bass, Davis, and others emphasized, patients whose early sexual traumas were so powerful that they had suppressed all memory of them could, in the hands of skilled therapists, subsequently recall them in vivid and exacting detail. It was the return of the repressed with a vengeance. And vengeance was soon unleashed. A moral panic ensued, extending across the United States and rapidly creating echoes in Europe. Children, it seemed, were being abused on an extraordinary scale, often by parents (especially fathers), but also by other caretakers, such as the staff of preschools. An early example of mass hysteria was the McMartin preschool scandal, a gothic tale that gripped the media for months, and ruined many innocent lives. 

             Over the course of nearly a decade, beginning in 1983, members of the McMartin family, who ran a daycare center, were accused of hundreds of acts of sexual abuse of the children in their charge. The claims soon extended to allegations of satanic abuse, witchcraft, and orgies in secret tunnels under the school where unspeakable acts were perpetrated. At the hands of their “therapists,” children were induced to confirm even the most bizarre stories, and two of the longest and most extensive trials in California history eventually followed. The total costs of the investigations and trials were estimated at upwards of fifteen million dollars, and they received widespread coverage in the print and broadcast media, most of it highly biased in favor of the accusers and the prosecution. It was a modern-day Salem witch trial, and even the failure to secure any convictions failed to dent the confidence of many in the truth of the allegations. Besides ruining the lives of the accused, the episode left behind hundreds of emotionally damaged children.

             Throughout the 1980s and 1990s, cases of recovered memory proliferated. In Teesside, in northeast England, for instance, over the course of five months two pediatricians diagnosed 121 local children from 57 families as suffering from sexual abuse. Numbers grew to crisis proportions, and hundreds of children were removed from their homes and placed in care. In this instance, however, the police remained highly skeptical of the doctors’ claims, though once more the accusations received massive media coverage. Eventually, the vast scale of the allegations and their growing implausibility led many to conclude that the crisis was manufactured.

    But still reports kept surfacing elsewhere in Britain (as in the United States) of child abuse and satanic rituals, and academic conferences and some feminist writers insisted on the reality of recovered memories and the ubiquity of early childhood abuse. Such beliefs persist in some quarters even now, and are fiercely held, notwithstanding accumulating evidence that discredits the central tenets of recovered memory syndrome and demonstrates the suspect and deeply unsatisfactory quality of the “evidence” once proffered in its support.

             The models of human memory on which the enthusiasts for recovered memory syndrome relied were increasingly undermined as those studying the subject put them to the test. Those who experienced repeated trauma, it turned out, far from forgetting, remembered being abused all too well. In the words of the Harvard psychologist Richard McNally, “The notion that the mind protects itself by repressing or dissociating memories of trauma, rendering them inaccessible to awareness, is a piece of psychiatric folklore devoid of convincing empirical support.” Additionally, as critics examined how therapists had “helped” their clients to recover memories of past abuse, it became increasingly apparent that suggestion and even outright coercion had played vital roles in creating and structuring the patients’ reports. None of this contrary evidence convinced the true believers in recovered memory syndrome. “Trauma,” one of them truculently asserted, “sets up new rules for memory.” Just as recent public opinion polls show that seventeen percent of Americans accept that “a group of Satan-worshipping elites who run a child sex ring are trying to control our politics and media” and are impervious to any evidence to the contrary, those promoting recovered memory syndrome rejected the researchers’ criticisms out of hand.

             By the turn of the century, though, the movement was facing rapid eclipse. Allan Horwitz has suggested that a major factor in its decline was a spate of lawsuits against some of the more prominent therapists, one of whom was forced to pay nearly eleven million dollars in damages in a case brought by one of his patients. The increasingly outlandish recovered memories also invited skepticism. Claims of Satanic rituals, murder of babies, and cannibalism ran aground when not a single such incident could be uncovered. And the media, having once embraced and spread the movement’s stories, grew skeptical and then hostile, following the courageous lead of Dorothy Rabinowitz of the Wall Street Journal, whose reporting damaged the whole notion of repressed memories beyond repair. Within the academy, once in thrall to Freudian ideas, apostates such as Frederick Crews repeatedly and ferociously dismissed Freud’s ideas about the authenticity of repressed memory. (Crews and other renegades regarded Freud as a complete charlatan.) And when insurance companies ceased paying for long-term treatment of the condition, it almost miraculously disappeared from the scene.

             That was not, of course, because child abuse somehow vanished as a precipitant of mental health problems, including major depression, substance abuse, and suicide. On the contrary, to cite just one major example that has proved international in scope and massive in scale, the Roman Catholic Church has had to deal with an avalanche of lawsuits and pay billions of dollars in damages for the sexual assaults of pederastic priests and the decades (if not longer) during which the church hierarchy connived in and covered up these predatory behaviors. Notably, however, the victims of these horrors did not forget the traumas they had experienced. Rather, for years they were intimidated into silence, or were afraid or too ashamed to make public what had happened to them.

    At the height of the moral panic about recovered memory, American psychiatrists had themselves contributed to the epidemic by modifying the definition of PTSD that appeared in the revised edition of the DSM that was issued in 1987. DSM IIIR, as it was called, added a clause to the diagnosis allowing that trauma might not rapidly produce PTSD, but that the disorder might emerge later in those with an “inability to recall an important aspect of the trauma.” A subsequent revision, DSM IV, loosened the diagnostic criteria even further. Before, the trigger had been the experience of extraordinary trauma, far outside the range of normal human experience, something that would “cause distress in almost everyone.” The new edition dropped these criteria. Instead, it pronounced that PTSD could be provoked when “the person experienced, witnessed, or was confronted with events that involved actual or threatened death or serious injury, or involved a threat to the physical integrity of self or others” and felt “intense fear, helplessness, or horror.” That meant that those who witnessed a loosely described traumatic event, even those who saw it on television and experienced an intense emotional reaction to it, could qualify for the diagnosis. With a new emphasis on the subjective response of the individual, the stage was set for claims that all sorts of daily setbacks and experiences could suffice to induce PTSD.

             Marital infidelity, the break-up of a relationship, the sudden and unexpected death of a loved one, unwelcome sexual attention that stopped well short of assault, reading upsetting material in academic classes, hate speech — any number of triggers could prove powerful enough to induce the requisite trauma. Other extensions were more plausibly related to the battlefield trauma that had led to the new diagnosis. First responders — police and firefighters in particular — have come to be seen as especially vulnerable, since their work exposes them to highly stressful and traumatic situations on a regular basis. (The evidence that these encounters actually do heighten susceptibility to PTSD is thin, but the assumption certainly makes claims for compensation easier to establish, given the unavoidable diagnostic dependence on self-reporting.)

             Symptoms might surface promptly, or manifest themselves months or years after the fact, and might change over time, in keeping with the chameleon-like and utterly protean character of the disorder. And since the diagnosis often rested on subjective complaints, and what counted as evidence of PTSD was so variable and capacious, it comes as no surprise that the number of patients given the diagnosis has soared. Given the poor track record of available treatments, that has ominous implications for the fiscal burdens that American society will face. The new edition of psychiatry’s diagnostic manual, DSM-5, published in 2013, did make some effort to tighten PTSD’s definition, so that watching something on television, for example, is no longer seen as a sufficient trigger, and it removed references to one’s emotional response to trauma, but in many ways the genie was already out of the bottle. The loosening of the definition of trauma became a fact of American culture.

             If we look first to the figures for America’s fighting forces, the magnitude of the problems associated with trauma begins to become apparent. Military leaders have routinely been profoundly suspicious of complaints of combat-related trauma. They had and have a disposition to dismiss those who exhibit its symptoms as cowardly malingerers or worse, and sometimes they have acted on those beliefs, as when the British army executed some of its shell-shocked soldiers as deserters. The secondary gains of succumbing to shell shock or combat neurosis or PTSD were only too obvious: escape from the imminent probability of mutilation or death. In the end, though, the preternatural tenacity with which shell-shock victims, for example, clung to their symptoms made simple claims of malingering hard to sustain. All but the most benighted generals (George Patton comes to mind) eventually acknowledged what trauma could do. Yet the problem of distinguishing the genuinely traumatized persisted, and was magnified after the World Wars and their successor conflicts by the pensions, free medical treatment, disability payments from the Social Security Administration, and other benefits a psychiatric diagnosis brought in its train.

    Some statistics indicate how difficult it has become to keep the epidemic of trauma-related disorders within bounds. Between 2003 and 2012 alone, the number of veterans seeking PTSD treatment from the VA grew from 190,000 to more than half a million. In just three years, 2010-2012, the VA spent $8.5 billion on PTSD treatment, and the Department of Defense another $790 million. More than forty percent of the troops who served in Iraq and Afghanistan have already been approved for lifetime disability benefits. Although those numbers include compensation for medical as well as psychiatric disorders, a substantial fraction of the costs of disability is being incurred for mental health treatment. Before 9/11, ten percent of veterans sought assistance for trauma-related disabilities. That fraction has recently risen to thirty-six percent. Perhaps, though it has become heretical to broach the subject, that rise is not unconnected to the very substantial monetary and other benefits the diagnosis brings in its wake. In 2014, for example, as Allan Horwitz points out, “successful claimants with 100 per cent disability received an annual tax-free benefit of $34,884 if they were single and $39,216 if they were married with two children.”

             In the civilian world, new legions of trauma therapists stand ready to help, attending not just to those actively complaining of traumatic disorders, but to those thought to have been exposed to traumatic events. The destruction of the Twin Towers in Manhattan, the widespread damage and destruction associated with Hurricane Katrina, the horrific aftermath of the multitude of school shootings that are now such a routine and appalling feature of American life: natural and man-made disasters are everywhere, and now are accompanied by an influx of trauma counselors, often funded by special appropriations from Congress. Yet those moneys generally go unspent. Only a fraction of the large amounts that Congress provided to treat the traumatic effects of 9/11 was used. Most people turn out to be quite resilient even in the face of these horrors, and even the great majority of those who are initially traumatized seem to recover quite promptly without professional treatment. By contrast, others exhibit symptoms of PTSD and claim the diagnosis in the aftermath of what most would regard as a quite minor stressor.


    This massive broadening of the concept of trauma has had a host of undesirable effects. Perversely, encouraging people to see themselves as victims of trauma pushes them to embrace that identity. In Nick Haslam’s words, we “risk over-sensitivity: defining relatively innocuous phenomena as serious problems that require outside intervention.” Adversity and emotional upset are, after all, inevitable parts of all our lives. To label such misfortunes as trauma and thus to suggest that they may trigger permanent psychological damage is all too likely to prove a self-fulfilling prophecy, provoking people to see themselves as irretrievably damaged, wounded, or disordered. Such diagnostic creep gives those afflicted with the minor challenges of everyday life the same label as those who are forced to cope with catastrophically severe and life-threatening events, thus devaluing the suffering of the latter group. It conflates things that distress, disturb, or upset us with events that crush us, that shatter our lives. 

             There is another danger that arises from the broadening and devaluing of the concept of trauma. Universities proclaim themselves bastions of free speech. Increasingly, however, students have begun to complain about readings, discussions, and course materials that they find distressing. Their claims are amplified by the assertion that these encounters are traumatizing them. Hence the demand for so-called trigger warnings, even though there is evidence that they do not work and may even be harmful. Worse yet are demands that certain readings and viewpoints be banned because some find them injurious or “traumatizing.” Those demands have come mostly from people who call themselves progressive, and it is clear they have had some success in intimidating faculty who want to consider themselves “progressive,” sometimes even securing the termination of those who fail to go along, as in two notorious cases at Yale. But the new fashion in trauma is not limited to the left. Thirty-two states in the more conservative parts of the country have employed similar arguments to restrict the teaching of the racial history of the United States. Students, as the Florida legislation proclaims, should not be made to “feel discomfort, guilt, anguish, or any other form of psychological distress on account of his or her race, color, sex, or national origin.”

             The specter, and even the reality, of censorship (or self-censorship) predicated on the need to avoid distress and trauma is real and immensely troubling. Many universities, institutions that ought to know better, have instead fudged the issue or actively failed to mount a principled defense of what is supposed to be their core value. In 2014, the University of Chicago spoke out vigorously against the climate of intimidation. “Concerns about mutual respect,” the university said, “can never be used as a justification for closing off discussion of ideas, however offensive or disagreeable those ideas may be to some members of our community…. In a word, the University’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.” At last count, eighty-seven other universities have signed this statement. What is remarkable, and disturbing, is the long list of those that have not, a list that includes Harvard, Yale, Stanford, and my own institution, the University of California.

             Traumatology is now a major industry. Estimates are that the market for its wares in 2021 exceeded $1,740 billion, and it is forecast to grow even larger over the next decade. Originally an American-made diagnosis, PTSD has spread world-wide, and aggregating the United States, five major European countries (Great Britain, Germany, France, Italy, and Spain), and Japan, the total number of diagnosed cases of the disorder now exceeds five million a year. As with the statistics for major depression, that number includes under one umbrella a vast spectrum of sufferers. What Chaim Shatan and Robert Jay Lifton thought was a cultural and time-specific disorder brought into being by the impact of an immoral war on American soldiers has become a platitudinous and ever-expanding part of the psychiatric universe. Its continued growth and cultural salience are the more remarkable given the feeble therapeutic weapons that the profession is able to muster against its depredations, and given growing evidence that early interventions often prove unhelpful and even actively harmful. But as Nancy Andreasen, a leading American psychiatrist, commented at the end of the last century, “It is rare to find a psychiatric diagnosis that anyone likes to have, but PTSD seems to be one of them.” If the mid-twentieth century was, in Auden’s words, the Age of Anxiety, perhaps the early twenty-first century is the Age of Trauma.

    Proust and the Mystification of the Jews

             The controversy over whether Proust was in any sense a Jewish writer or, on the contrary, in some way essentially a Jewish writer, began in France only weeks after he was buried. It still persists there. But before we dip into these muddied waters, some clarifications are in order about the contradictory milieu from which he sprang.

             Proust, who was born in 1871 in Auteuil near Paris, was, of course, a half-Jew, though that is not how he defined himself to others. His mother, Jeanne Weil, was the daughter of a wealthy Jewish stockbroker. Her son remained deeply attached to her all his life, and her image, together with that of his actual maternal grandmother, is affectionately inscribed in In Search of Lost Time in the figure of the Narrator’s mother and, to some degree, in that of his grandmother as well. Jeanne’s marriage to Adrien Proust, a perfunctory Catholic, was undertaken for social reasons on her part and for economic reasons on his. He was a highly ambitious physician who would attain considerable professional success, and the substantial dowry that Jeanne Weil brought to the union helped launch him on his career. The son of a grocer, he could not offer her lofty social standing through his background, but his identity as a Catholic gave her and the two sons she would bear him the necessary entrée into French society. A stipulation of their marriage contract was that any children of their union would be baptized as Catholics. Jeanne, however, never contemplated conversion, nor, as far as we know, did her husband attempt to persuade her to convert.

             There appears to have been no great romantic element in the marriage, and as was very common in haut-bourgeois circles at the time, he had mistresses, including one whom, it seems, he shared with his wife’s uncle, Louis Weil. Proust himself was by no means a pious Catholic, though for a brief period around the age of twenty he actually thought about a vocation as a priest. The abiding appeal of Catholicism for him was aesthetic; it also served as a marker of identity. In 1896, in an often quoted letter to his friend Robert de Montesquiou, he stated flatly that “I am Catholic like my father and my brother; on the other hand, my mother is a Jew.” Why this attenuated connection with Jewish origins should enter into the writerly enterprise of a self-affirmed Catholic is by no means evident, but the connection cannot simply be dismissed. To address this question, one must understand something of the ambiguous — or perhaps one should say, amphibian — nature of the Parisian Jewish milieu of which the Weil family was a part.

             A recent book by James McAuley, The House of Fragile Things: Jewish Art Collectors and the Fall of France, happily offers a vivid and detailed account of this milieu. During the nineteenth century, a concentration of vastly wealthy Jewish families gathered in Paris. Many of them came from Germany or from Alsace-Lorraine (Balzac’s Jewish banker speaks with a heavy German accent), though one prominent family, the Camondos, were Sephardic, their origins in Istanbul. To get some idea of the wealth these families possessed, consider that the family patriarch Salomon Camondo was said to be the richest man in the Ottoman Empire. Many of these people remained for a time self-identified Jews, supporting synagogues and communal institutions, marrying and burying as Jews, and mostly wedding within their own social circles. The list of the affluent Jewish families is long: the Camondos, the Rothschilds, the Ephrussis (chronicled by their descendant Edmund de Waal in The Hare with Amber Eyes), the Reinachs, the Cahen d’Anvers, the Weils. Everyone in this milieu aspired to enter the highest echelons of French society.

    Given that aspiration, it is hardly surprising that many of them intermarried, like Jeanne Weil, or converted to Catholicism, as she did not, and sought to put entirely behind them any traces of their Jewish origins, as she also did not. Alas, in the early 1940s the descendants of these families, including those who thought their Catholicism and their high social standing would protect them, were deported and murdered in the Nazi death camps. One must, of course, resist the temptation to say haughtily, “Little did they know…,” an attitude against which Michael André Bernstein argued vigorously in Foregone Conclusions, his indispensable book on thinking about past lives after historical catastrophe. But for a time — and certainly in Proust’s lifetime — it seemed as though the offspring of these wealthy Jewish immigrants to France would continue to flourish splendidly, shining at the heights of French society and culture, whether as Jews or otherwise.

             As McAuley’s fine book richly illustrates, a principal avenue for this flourishing was aesthetic, and this aspect of upper-class French Jewish life has obvious relevance to Proust. Many of these families became great collectors of art. McAuley shows that their tastes tended to focus on paintings, furnishings, and objets d’art from the ancien régime, evidently out of a desire to identify with the traditional and aristocratic elements of French culture. There is an approximate analogy between this drive to collect and that of the wealthy New York families of the Gilded Age — one thinks of the Fricks, who left a legacy in the museum that bears their name — a group that was mainly nouveau riche. But the aestheticism of their French counterparts was more pronounced. In due course, they would leave their collections to national museums or turn their own grand mansions into museums. To get some notion of the sheer opulence of these collections, one has only to visit the Musée Nissim de Camondo, once the family’s Paris residence.

    All this ostentation of wealth in the collections elicited contradictory responses from those who considered themselves Français de souche, authentic native French. As the great Jewish collectors became public benefactors, they were appreciated by some for contributing to French culture. Others, predictably, resented their display of wealth, which in their eyes confirmed the preconception that Jews, inevitably vulgar, were concerned with nothing but conspicuous show. The very taste of their collections was excoriated by some as a violation of true French aesthetic values, an act of cultural subversion.

    Such conflicting views are brilliantly etched in the representation of Jews and of the responses to them in the pages of In Search of Lost Time. This, after all, was the milieu that Marcel Proust knew through his mother. And it should be said that, Catholic though he officially declared himself to be, many of his schoolmates were acculturated secular Jews, mostly unbaptized. Although the young Marcel certainly had no desire to be thought of as a Jew, or even a so-called half-Jew, he could scarcely disengage from the awareness that the Jews were a distinct social constellation in France, encompassing even those who did not see themselves as part of it, and this visceral awareness came to play an important role in his novel.

             Where does this ambiguous status of Jews in France lead one to think about the Jewish dimension, if there is one, of Proust’s writing? If one may judge by the range of responses to his novel from the 1920s to the present, it leads to some strange places. Many of these responses have been documented in detail in an exhaustively researched new book by Antoine Compagnon called Proust du côté juif. (Such exhaustive documentation is perhaps not surprising in a country where doctoral dissertations often run to a thousand pages.) Compagnon is a prominent literary scholar who has written on Montaigne, Baudelaire, literary theory, and, repeatedly, on Proust. In this book, he tracks the published responses to Proust by Jewish writers from the early 1920s and then forward decade by decade. Many of these responses are rather curious. Let me add that Compagnon is party to a current debate on Proust’s Jewishness, to which I shall presently turn.

             Perhaps the oddest thing about the early views of Proust is that a group of young French Zionists, writing in La Revue juive, Menorah, Palestine, and sometimes in other journals, recruited him as an inspiration for their movement. They did not claim that he was actually a Zionist, and the famous — for some, notorious — remark in the introductory chapter of Sodom and Gomorrah — that “the fatal error [of homosexuals] would consist, just as a Zionist movement has been encouraged, in creating a sodomist movement and in rebuilding Sodom” — clearly does not make him a Zionist. The reasoning of those young Jewish intellectuals was along the following lines: Proust was already widely regarded as one of the major French novelists. In his big book, as they wanted to see it, this great writer offered a compelling example, chiefly through the character of Swann, of a staunch Jewish identity that was entirely secular. And this was precisely what they hoped to achieve through Zionism. In this oblique way, they contrived to see him as a Jewish writer, an extraordinary novelist they could claim as their own. The argument hardly requires refutation.

    Another line of argumentation for claiming Proust as a Jewish writer was taken up in these early responses: that it was a matter of heredity. This view was typical of the common tendency to create a mystique about Jews. Frequent comparisons were made in those journals between Proust and Montaigne as purportedly Jewish writers — to be sure, with some pushback in the publications of the era. Montaigne had a grandmother whose maiden name was Lopez; she was a convert to Catholicism, and some have proposed that, despite her outward observance, she remained a Marrano, a crypto-Jew. She was, of course, a forebear two generations removed from the writer, and there is no evidence that she could have influenced him, even if in fact she did remain a secret Jew. Yet these young Zionist writers assumed that she preserved a distinctively Jewish consciousness which somehow percolated down to her grandson. Perhaps it was through “the blood”?

    In this uncertain light, it was claimed that Montaigne’s strikingly innovative mode of writing, with its frankness, its unblinking reflectiveness, and its bold analysis of mores and men, derived from his supposedly Jewish identity. Montaigne, it should be said, remained a loyal if not entirely devout Catholic, taking care to leave instructions that he should be buried according to the Catholic rite. In the famous tower where he wrote, he arranged for a balcony to be built so that he could participate in mass being conducted down below. He had, moreover, nothing like Proust’s intimate relationship with a mother who remained a Jew. Yet he and Proust were said to share a distinctively Jewish ruminative style, even if the prose of one does not read much like the prose of the other. This mode of thinking, sad to say, is the positive mirror-image of the negative idea of heredity — the racist idea — that flourished after 1492 in the Spanish notion of limpieza de sangre, or “purity of blood,” which had to be preserved by Old Christians against invasive aliens. The notion reached its monstrous apotheosis in Nazi Germany — namely, the idea that a few drops of Jewish blood were forever determinative.

             Another method of claiming Proust as a Jewish writer was through a proposed link with traditional Jewish sources, and this has persisted until our day. In the 1920s, it was contended by some that his innovative and often complicated style was “talmudic,” although there is no evidence that he had the slightest acquaintance with the Talmud. The Talmud is an important Jewish text that is exotic and seemingly impenetrable to outsiders, and so it has been tempting for some, as part of a general process of exoticizing the Jews, to attribute a “talmudic” character to any writer having a Jewish heritage. But “talmudic” means much more than dense or allusive. The association of Proust with the Talmud met published resistance in this early period, the objectors rightly countering that Proustian prose, however original, was thoroughly rooted in French literature.

             The “talmudic” argument, with occasional exceptions, has not persisted, but an even stranger one, the association of Proust with another quasi-canonical Jewish text, the Zohar, still flourishes. The seductive appeal of the Zohar as a ground of Jewish writing is clear. If the Talmud is exotic, the Zohar, a mystical text composed in thirteenth-century Castile that is often deeply mystifying, is exotic to the second power. No less a French intellectual eminence than Julia Kristeva, in a study of Proust in 1994, stated with perfect confidence, though without any evidence (as Compagnon notes), that Proust drew on a translation of the Zohar when he was writing his novel. She throws into the mix of sources, moreover, the old contention about the Talmud: “One knows that Jewish tradition” — about which, of course, “one” knows almost nothing — “and especially the talmudic, to which Proust was responsive, proliferates interpretations.” Having established by fiat that Proust was responsive to talmudic tradition, she confidently concludes that “in this light, Proust’s experience can be said to be talmudic.” This is all nonsense.

             In the current French scene, the Zohar connection has been revived by Patrick Mimouni in a series of articles and in a recent book called Proust amoureux: Vie sexuelle, vie sentimentale, vie spirituelle. Mimouni, born in Algeria to a Jewish family, began his career as a film-maker with a sequence of films about AIDS and homosexuals. In the 1990s he began writing about Proust, soon immersing himself in the minute details of Proust’s biography and repeatedly focusing on the role of homosexuality in Proust’s work. In his new book, he conveys a clear sense, following others, that Proust was not a garden-variety homosexual. It is known that, though he had a few lasting relationships, he frequented the kind of homosexual brothel catering to special tastes in which Baron Charlus is seen in the novel. Reports have circulated, passed on by Mimouni with, I think, a certain relish, that Proust attained orgasm by watching two rats savagely attacking each other, and Mimouni accepts the story that the horrific scene in the novel in which Mademoiselle Vinteuil, in the presence of her lesbian lover, spits on a photo of her dead father actually mirrors an act by Proust, who in a gay brothel spat on the image of his beloved mother.

             For our present concerns, however, Mimouni devotes attention to what he contends is an important connection between Proust and the Zohar. This would be a significant feature of his vie spirituelle, the term in French spanning “spiritual” and “intellectual.” The slender basis for this proposed familiarity with the Zohar is a passing reference by Proust to “reading” the Zohar around the time of his journey to Venice in 1900. But when he was working on In Search of Lost Time, a French translation of the Zohar was not available. His only access to the arcane work would have been through a Latin version. It is far from clear that his lycée Latin would have enabled him to get anything out of such a difficult esoteric text. Thus, Compagnon, in a rejoinder to Mimouni, expresses warranted skepticism about the notion that Proust was acquainted with the Zohar in any way. Mimouni on his part has accused Compagnon of deleting a reference to the Zohar in his edition of Proust’s notebooks — a rather grave charge. The weight of evidence looks to be in favor of Compagnon, but for Mimouni and others determined to make Proust into a Jewish writer the temptation to see a link with the Zohar is irresistible. And since few actually know what is in the Zohar, it is easy to claim that Proust drew on it. Such a claim will be attractive to the many who have no informed connection with Jewish tradition but are either striving to be affirming Jews or are philosemitic gentiles, thinking of Jewish culture as something magical and mystical that was somehow transmitted to writers of Jewish extraction, however removed they were from Judaism and the Jewish tradition. To cite a small symptomatic instance in Proust’s case: a brief mention in a letter to a friend in 1908 of the Jewish mourning custom of laying a small stone on the grave of the departed has been leaned on heavily by some commentators as a sign of their hero’s extensive familiarity with traditional Jewish practices.

    What, then, can reasonably be made of the Jewish side of Proust’s great novel? A kind of baseline proposition was put forth by Edmund Wilson nearly ninety years ago in Axel’s Castle, his pioneering study of literary modernism. With a flourish of characteristic common-sense intelligence, he wrote that “it is plain that a certain Jewish family piety, intensity of idealism and implacable moral severity, which never left Proust’s habits of self-indulgence and worldly morality at peace, were among the fundamental elements of his nature.” This seems just, though I am not quite sure about the intensity of idealism. One readily sees how he could have drawn these attributes from his mother, with no mystery of the blood as conduit. But the picture requires some complication.

             In Search of Lost Time is certainly not a Jewish novel, but there is a noticeable presence of Jews within it, one that becomes increasingly evident in the last stages of the book. Snobbery is involved in much of this. Proust is surely one of the most probing and subtle anatomists of snobbery in the history of the novel. He himself could definitely be regarded as a snob, but this only enabled him to understand the phenomenon all the more keenly. He was a man who loved to be loved — in the first instance, of course, by maman, but then especially by well-placed people in society. He did not hesitate to hobnob with vicious antisemites as long as they were sufficiently prestigious. Thus he dined at the home of Lucien Daudet, son of Alphonse Daudet, a contributor to Édouard Drumont’s violently antisemitic La France juive, while Lucien’s brother Léon belonged to the extreme nationalist right. Proust sat in silence while his host delivered himself of vituperative pronouncements on the Jews, and only later did he object in a letter to a friend. Similarly, he said nothing when his friend Robert de Montesquiou spewed a tirade against the Jews, though the next day he wrote a letter to Montesquiou, which I cited above, saying he had to differ with him on this because his mother was a Jew, though he himself was a Catholic. Evidently, he saw as the admission price to French high society a willingness to hear Jews vilified and to bear it in silence.

             The vehemence of French antisemitism in this era may be a little hard to imagine now. It scarcely yields pride of place to the growing anti-Jewish fury across the border in Germany over the next several decades. That widespread hostility toward the Jews is the context of the Dreyfus Affair that split France apart as Proust was coming of age, and put a decisive stamp on the later pages of In Search of Lost Time. Drumont was the leading spokesman for this unrestrained hatred. Characteristically, early in the Dreyfus Affair, he wrote of Joseph Reinach, who belonged to one of those wealthy Parisian Jewish families: “If his ape-like face and his deformed body carry all the stigmata of his race, all the faults of the breed, his hateful soul swollen with venom better sums up all its evil, all its genius, disastrous and perverse.” Der Stürmer did not invent anything new. It is important to realize that Proust, longing to be accepted in the best French society, was hardly moving through a neutral environment.

A number of Jewish characters, for the most part in walk-on roles, cycle through the many episodes of In Search of Lost Time. Two of them are rather important in the imaginative economy of the novel. They are, of course, Bloch and Swann. In most respects they are altogether antithetical portraits, negative and positive, respectively. The Narrator has been an acquaintance of Bloch since the early days of both, and he has taken care to preserve a certain distance from him. Bloch is in certain ways the embodiment of the off-putting qualities that antisemites attribute to Jews. He is vulgar, unpleasantly assertive, conspicuously ambitious, and a social climber.

One of the hallmarks of Proust’s greatness as a novelist is that in the course of time Bloch undergoes a transformation, whether seeming or real, like many of the other characters. When we encounter him later in the novel, he has assumed a noble-sounding name, Jacques du Rozier, married a Christian from a well-placed family, refined his manners, and contrived to attach himself to the French aristocracy. The Narrator may well invite us to regard this self-transformation as a disguise, perhaps a different manifestation of Bloch’s initial vulgarity. The portrait of the new version of Bloch is amusing and etched in acid: “Thanks to the haircut, the removal of his mustache, to the general air of elegance, to the whole impression, his Jewish nose had disappeared, in the way a hunchback, if she presents herself well, can seem almost to stand straight.”

Yet there may be a certain element of the author in Bloch. He was, after all, the son of a Jewish mother and, on his father’s side, a grocer’s grandson. Unlike Bloch, he by no means grew up as a Jew, but he must have been perceived as a Jew by at least some in the social stratosphere that he chose to inhabit. It was widely asserted that in the famous Man Ray photograph of a bearded Proust on his deathbed he looked like an Old Testament prophet — though not to me — as though at the end the Jew had emerged from the mondain. Even more pointedly, some months earlier, an acquaintance, Fernand Gregh, remarked: “One evening, after for a time he had let his beard grow, it was suddenly the ancestral rabbi who appeared behind the charming Marcel we knew.” Endowed with a good education and impeccable manners, Proust cultivated a polished persona that gave him easy entrée to exclusive places, and in those places he tolerated vehement antisemites. Proust certainly disapproved of Bloch, the character he invented, but I suspect that he put a little of himself in Bloch.

Swann, of course, is a much more substantial figure. Indeed, he is arguably the most complex and attractive character in the novel. Readers will recall that in the first pages of In Search of Lost Time, it is Swann as the dinner-guest of the Narrator’s parents who distracts maman from going upstairs to her child desperately awaiting a goodnight kiss, although we do not immediately realize his identity. For a while in the novel we do not even know that Swann is a Jew, and I think this is a shrewd strategic choice on the part of the writer. Unlike Bloch, Swann has not hidden his origins, but he does nothing to make people conscious of his ethnic identity. This is not a matter of pretense or disguise. Swann is what he appears to be — a perfect gentleman, a poised socialite, a person of exquisite taste (one recalls the care he takes to go from florist to florist in order to assemble the perfect bouquet for the hosts who have invited him to dinner). He is also a loyal friend, something by no means true of everyone in this world.

Swann’s one major slip is to fall in love with Odette, a young woman not as refined as he is and one who has been far from chaste. She is, as the Narrator tells us more than once, not really his type. This paradox derives from Proust’s always interesting assumption that the psychology of love is often quirky, unpredictable, contradictory. Proust’s hapless love for his chauffeur, Alfred Agostinelli, who was obese and uncultivated when they met and who never much cared for his extravagantly indulgent benefactor, is a case in point from the writer’s own life.

It is the Dreyfus Affair that flips our perception of Swann’s Jewish identity. It may also have flipped something in Proust. The trumped-up charge that Dreyfus, a Jewish army captain, was guilty of treason in passing military secrets to the Germans became a litmus test for where one stood in French society. The accusation was widely believed; Dreyfus was convicted in 1894, stripped of his rank, and sent to imprisonment on Devil’s Island. A retrial five years later, in 1899, issued in another conviction. Only in 1906 was he exonerated and his military appointment restored with a promotion to major, after the real culprit had been exposed and fled the country. The false accusation, however, gained considerable credence and was embraced by staunch Catholics, conservatives, aristocrats, and also by some of the leading artists of the period (Degas, Cézanne, Renoir). The readiness of large numbers of the French to embrace a blatant and bigoted falsehood has one sad explanation: there was a widespread suspicion, even among many who, one would have thought, should have been better informed, that the Jews had never been altogether French. Their indelible foreign character thus made it plausible that one of them would be prepared to betray the vital interests of the nation to a foreign power. In Proust’s novel, after Swann has declared himself a Dreyfusard, it occurs to the Duc de Guermantes that though he had always thought of Swann as a Frenchman, now he realizes that he was mistaken.

             Proust himself became a defender of Dreyfus, even attending sessions of the first trial. His character Swann emulates him as a Dreyfusard. The aristocratic Guermantes, on the other hand, with just one exception, were anti-Dreyfusards, and they were prepared to discard their friendship of many years with Swann, now seeing him as a Jew. It even strikes them that Swann has been revealed to look like a Jew. The last physical description offered of him in the novel is shocking. He appears at the kind of elegant social gathering he has always frequented, but now he is wasted by old age and disease:

     his nose, for so long reabsorbed into a pleasing face, now seemed enormous, tumid, crimson, more of an old Hebrew than an inquisitive Valois [The Valois were a French royal line.]. Perhaps in any case, in recent days, the race had caused the physical type characteristic of it to reappear more pronouncedly in him, at the same time as a sense of moral solidarity with the other Jews, a solidarity that Swann seemed to have neglected throughout his life, but which the grafting, one onto the other, of a mortal illness, the Dreyfus Affair and anti-Semitic propaganda, had reawakened.

When Proust wrote these lines, he could not have known that he himself would be perceived by many, in his photographic death-mask, as a Hebrew prophet. For the purposes of his novel he was drawing a polemical antithesis: Bloch’s Jewish nose has receded through a kind of illusionist’s trick while Swann’s, hitherto barely noticed, has emerged in the uncompromising authenticity of his impending mortality.

             A complement of sorts to this last image of Swann become a Jew occurs in the famous moment in which the Duchesse de Guermantes can spare no attention for her dying friend as she prepares to depart for a ball because she is entirely preoccupied with finding her red shoes to wear for the occasion. Proust’s point in showing her in this light is clearly to illustrate how for women like her social vanity takes precedence over the mere human obligation of kindness and concern for a friend in extremis. One suspects, nevertheless, that her discovery through the Dreyfus Affair that the dying Swann is, after all, merely a Jew may have encouraged her to ignore Swann in his ultimate hour of distress.

Proust’s ability to show how someone may change through time, both in regard to identity and in what happens to one’s physical presence, is a signature aspect of his greatness as a novelist. It is a corollary of the long duration that he has devised for his novel, and one finds few parallels to it elsewhere in fiction. One surely cannot detect in this any sign of those putative Jewish sources for his writing, the Talmud and the Zohar. Two rare precursors may be the stories of Jacob and David in the Hebrew Bible, and though these are texts that Proust knew, there is no evidence that he paid any attention to scriptural precedents.

So it is hardly helpful to think of Proust as a Jewish writer, much as some have sought to do. He evinces no distinctive cast of mind, no special mode of writing and thinking, that can plausibly be attributed to the Jewish side of his family background. He writes about Jews in his novel because they were a visible part of the world that he set out to represent. In this limited respect, he does not differ from Philip Roth, who always objected to being labeled a Jewish novelist, even if the presence of Jews in Roth’s fiction is far more pervasive than in Proust’s. As the son of a Jewish mother, Proust was acutely attuned to the precarious location of Jews in French society in his time, and he represents this fraught condition with penetrating understanding. If his Jewish background enters at all into the achievement of his novel, it is that, as a person pulled between two forces (something that he himself perhaps would not have admitted), he proved to be an unusually keen observer of both the French and the Jewish sides of the tension. A writer needs to stand a little on the outside to see things with the greatest clarity.

             The general lesson to be drawn from this strangely persistent controversy over Proust’s identity as a writer is that there is nothing particularly mysterious or unique about the Jews. Granted, they are a people that has persisted in history over many long centuries, during which they produced remarkable cultural and spiritual achievements. But the Jews are like everybody else, and not even more so. To think that they possess some magical esoteric heritage that is manifested in the creative work of many writers of Jewish extraction is in the end foolish, and sometimes racist. Proust is pre-eminently a French writer whose literary lineage includes La Rochefoucauld, the philosophes, Stendhal, and Flaubert. Inevitably, he made use of what he knew from his mother’s world and from his sometimes uneasy negotiation as his mother’s son with the French world beyond it, but then most writers make use of whatever is part of their familiar experience. Proust provides no inspiration for Jewish identity, and why should he?

     

    What is a Statesman

We yearn for great leaders, but we seem to resist them when they come along. This is a paradox inherent in democracy: the tension between the demand for liberty, equality, and self-reliance among citizens and the continuing need for leadership in the unruliness of an open society. We vacillate between power and drift, between embracing strong leaders and endorsing a kind of leaderless rule. Our confusion about statesmanship stems partly from the fact that we have lost the language in which to understand the term.

    The term statesman has an old, even an antiquarian ring about it. Herbert Storing, a great historian of the American founding, once noted that there seems something almost “un-American” about the word. While politicians pay lip service to the concept, for the most part the term is regarded as outmoded, elitist, and vaguely anti-democratic. Harry Truman once joked that a statesman is just a politician who has been dead for ten or fifteen years. 

    Yet it is hard to deny that today we are experiencing a dearth of statesmanship. With the exception of Volodymyr Zelensky, bless him, who is doing a stirring impression of Winston Churchill, statesmen are in short supply. Our current moment has certainly witnessed a renaissance of authoritarian figures — Putin, Xi, Modi, Bolsonaro, Orban, Trump — but none of these seem to qualify as statesmen. What is a statesman, and how do we know one when we see one?

The confusion about the concept is due in part to its unavoidably normative character. Isn’t one person’s statesman another’s demagogue? Historians are often wedded to a kind of social determinism that regards the statesman as an agent of powerful classes, interests, and social forces which he or she may only dimly understand. Political scientists, who feel at home only in the world of big data that can be quantified and analyzed by mathematical methods, contribute to the flattening out of experience. But it is impossible to study political phenomena without evaluating them. If we are unable to distinguish a magnanimous statesman from a humble mediocrity from an insane imposter, we will be unable to understand anything about politics.

Like much of our political vocabulary, the concept of the statesman is of ancient origin. It is a translation of the Greek word politikos. Plato devoted an entire dialogue to this concept, although his most celebrated discussion of the statesman occurs in the Republic, where he famously asked what kind of knowledge a statesman had to possess. His answer was that the politikos was required to be a philosopher-king, someone who blended a high degree of intellectual excellence or expertise with the skillful management of public affairs. Many people disagreed with Plato’s answer — most notably Aristotle, his student — but his question is the one we have been grappling with ever since.

    The understanding of statesmanship has been compromised by two tendencies fostered by modern democracy. The first view conceives the statesman as a technocrat, someone who is guided by scientific experts and who is then able to apply this knowledge to the various problems deemed to be plaguing society. This kind of statecraft is rightly called “progressive” because it regards progress in politics as dependent upon advances in scientific (and social scientific) knowledge. According to this view, as scientific knowledge increases so, too, does our ability to apply its insights to the most pressing issues of society, whether these are hunger, disease, poverty, inequality, or climate change. Social problems are regarded here as largely technical in nature and politics is seen as a form of policy science.

    The idea that politics is reducible to policy is at the core of what is sometimes referred to as the administrative state. This view was imperishably expressed in Alexander Pope’s couplet:

    For forms of government let fools contest;

    That which is best administered is best. 

    On this account, politics can be reduced to a form of problem-solving not unlike that encountered in the worlds of science, technology, business, and other aspects of a modern capitalistic economy. The claim that we frequently hear from public officials that they are just “following the science” is a perfect illustration of this approach. Politics becomes a matter of implementing the insights of scientists, medical professionals, and other policy experts. We do not necessarily expect our leaders to be experts, but we expect them to follow the advice of experts, certainly in arcane areas such as public health and monetary policy. 

The second misunderstanding confuses the statesman with the populist leader. As William Galston has argued, populism and democracy are to some degree inseparable. Every democratic leader claims to have the mandate of the people even if he holds power by only the slimmest majority, and sometimes not even by a majority at all. The populist leader was best characterized by Max Weber with the term “charisma.” In his renowned essay “Politics as a Vocation,” he invoked this term to distinguish the charismatic leader from the party politician. The charismatic leader is someone who claims to stand above special interests and party loyalty, to speak directly to the people, and to serve as their voice. In modern American politics, Woodrow Wilson was the greatest representative of this viewpoint.

    The test of the charismatic leader is the claim to authenticity, that he speaks for the people. But how does one measure the authenticity of a leader? How does one distinguish the charismatic leader from the demagogue, the mountebank, and the fraud? How does one distinguish the true prophet from the false prophet? This is one of the oldest questions in the history of human affairs. Weber provides no acid test for charisma. There are no fixed principles, no program for action. There is only the personality of the leader. As Machiavelli said about the charismatic preacher Savonarola, he lost authority only when the people ceased to believe in him. Charisma is very much in the eye of the beholder. 

The problem with charismatic politics is its almost complete lack of content. In recent American history, Barack (“Yes We Can”) Obama and Donald (“Make America Great Again”) Trump have been regarded as charismatic leaders, but George H. W. Bush and Joe Biden have not. Yet a leader’s charismatic properties have no bearing on the quality of his or her governance. Charisma is value-free: it can be used for good or evil. It is a means, not an end, except when it becomes an end in the form of personalist dictatorship. It is a kind of political mesmerism. There are no fixed principles of action beyond a certain theatrical gift and a demand for authenticity. In charismatic leadership, the message is the leader himself. Underlying this is a complete absence of, even contempt for, constraints on power. For this reason, charismatic leadership is often a recipe for extremism and violence. Weber regarded the charismatic leader as a remedy for the problems of political gridlock and stalemate, but there is only a short step from the populist leader to the Duce and the Führer. Charisma may not be incompatible with democracy, but it is dangerous to democracy.

To start at the beginning, statesmanship is about the care and oversight of states. It presupposes the bounded political units — call them states or nation-states — that have been the basis of international order ever since the Peace of Westphalia. The leaders of empires — Cyrus, Alexander, Napoleon — no matter how gifted they were in other respects, were not statesmen but conquerors and military despots. The same is true for those who exhibit leadership skills in business, university administration, and criminal enterprises. They may possess expertise, but not political know-how. Statecraft fundamentally concerns securing the conditions of political legitimacy.

    There are three kinds of statesmen I wish to consider: founders, preservers, and reformers. 

    The greatest statesmen in history are political founders, lawgivers, responsible for introducing “new modes and orders.” These are inevitably revolutionary figures, people like Machiavelli’s “new prince” who promise freedom and redemption from an oppressive political order. Founding statesmen have no authority other than their own words and deeds to justify them. It is their capacity to mobilize and to shape opinion that is the basis of their legitimacy.

    Political founders come in all shapes and sizes, from mythic figures such as Moses, Lycurgus, and Romulus to Oliver Cromwell, George Washington, Lenin, and Mao. I would also add less well-known figures such as Atatürk in Turkey, Bismarck in Germany, Ben-Gurion in Israel, Sun Yat-sen in Taiwan, Nkrumah in Ghana, and Lee Kuan Yew in Singapore, as further examples of this creative type. These are all the “fathers of the Constitution” who create the frameworks within which later and lesser statesmen can handle changing situations.

The study of political foundings is exciting, as the never-ending flow of books, movies, and musicals about the American founders illustrates. Political founders typically try to open up the widest possible gap between themselves and the old regimes that they are attempting to overthrow. They wish to represent a rupture. In France in the 1790s, the revolutionaries renamed streets, remade the calendar, abolished historical provinces, reformed the language, and created new religious cults. William Wordsworth famously recalled his own enthusiasm at the outbreak of the French Revolution,

    Bliss was it in that dawn to be alive,

    But to be young was very heaven!

    Books such as Hannah Arendt’s On Revolution and J.G.A. Pocock’s The Machiavellian Moment celebrate revolutionary beginnings as the only times of truly creative political action. Such revolutionary moments seem almost to be breaks in political time, as Stephen Skowronek has argued. 

    But while it is easy to romanticize revolutionary moments in history, it is just as easy to forget precisely how tenuous and dangerous such moments are and how easily things can turn dark. Consider how quickly the Arab Spring turned into the Arab Winter, as exhilarating hopes for democracy foundered on the shoals of political reality. Even at their best, founding moments introduce a principle of disruption into political life that, once started, cannot easily be stopped. As Aristotle warned, the habit of disobeying law, even a bad law, has the tendency of making people altogether lawless. Revolution is not a bus that you can get off at will. 

In 1838, in his speech on “The Perpetuation of Our Political Institutions,” Lincoln warned against the dangers of the “towering genius” in politics. The American founders were men who staked their all on creating free institutions. But their nobility and their sincerity of democratic purpose do not preclude the possibility that later generations will produce Alexanders, Caesars, and Napoleons of their own. Such men will not rest content with perpetuating what has been established; they will seek new fields of glory as a testament to their own ambition and love of fame, and often this will involve the repeal of the accomplishments of those who preceded them. “Towering genius disdains a beaten path,” Lincoln warned. “It thirsts and burns for distinction and, if possible, it will have it, whether at the expense of emancipating slaves or enslaving freemen.” How to channel this overreaching ambition remains a permanent challenge for any theory of statesmanship. What is a durable constitutional republic, after all, if not a beaten path?

If the political founder is the rarest type of statesman, the most familiar type is the preserver, who works within an established set of laws and institutions to maintain the coherence of a tradition. Preservers are typically conservatives such as Walpole, Burke, Adams, and Disraeli, who are responsible for maintaining the social fabric, often after periods of war or social upheaval. They see themselves as custodians of continuity. Preservation is the policy of adjusting old traditions to fit new situations. Like the “Parting Hour” described by George Crabbe,

    The links that bind those various deeds are seen,

    And no mysterious void is left between.

    The art of preservation may seem an unambitious role in comparison with founding, but it is the mode of political action fitted for most occasions. A founding that is not followed by preservation is doomed. Few will ever have the chance to construct a political order de novo, but many have the opportunity to shape and secure the polities that they already inhabit. Preservation is distinctly non-charismatic in tone. It must present its innovations as derived from traditions, law, and institutions in order to give its methods an air of legitimacy. 

The classic study of statecraft as the restoration of order is Henry Kissinger’s A World Restored, which analyzed the role of two great European diplomats — Metternich and Castlereagh — and their part in the restoration of the balance of power after the Napoleonic wars. Other notable restorationists were Konrad Adenauer, who helped to restore the moral and political dignity of Germany after a time of war and dictatorship, Angela Merkel, who did so much to restore a sense of German unity after the fall of communism, and Margaret Thatcher, who restored a sense of order and stability in Britain after a period of strikes and moral decay.

The best description of this form of statecraft was provided by the English Whig George Savile, Marquis of Halifax. In 1688, in his classic essay “The Character of a Trimmer,” Halifax defended a policy of what might be called principled inconsistency. The first goal of the statesman, he argued, is to keep the ship of state afloat. The true statesman — or trimmer: he was not using the word pejoratively — must be prepared to shift his position, moving from one side of the boat to the other in order to correct its list. If this offends the demand for adherence to principle, so much the worse for principle. While such a policy might be condemned as opportunism or flip-flopping, Halifax argued that such prudent flexibility was the essence of political wisdom.

    This image of the ship of state was repurposed by the English conservative philosopher Michael Oakeshott in his lecture on “Political Education” given at the London School of Economics in 1951. For a people who had only recently emerged victorious from a world war, Oakeshott offered only words of caution, describing himself as a skeptic “who would do better if only he knew how.” Rather than speaking about justice, liberty, or equality — the standard fare of political philosophy — he insisted that politics consists of “the pursuit of intimations,” by which he meant acting in a way that is most likely to retain or to restore the coherence of a tradition. “In political activity,” Oakeshott told his listeners, “men sail a boundless and bottomless sea; there is neither harbor for shelter nor floor for anchorage, neither starting-place nor destination. The enterprise is to keep afloat on an even keel.” 

These words capture concisely the image of the statesman as preserver, not the larger-than-life personalities of a Churchill or a de Gaulle who led their countries through times of crisis, but the George Marshalls and George Kennans charged with preserving peace and stability. Preservers are typically not futurist visionaries possessed of a grand strategy and a singular sense of purpose; they are more often diplomats and parliamentarians accustomed to working the back rooms and the corridors of power. In Machiavelli’s metaphor, they tend to be foxes, not lions. Their goal is not to create justice on earth, but to establish legitimacy, understood as the maintenance of stability and authority.

Finally, the third type of statesman is the reformer, who regards statecraft as a means of effecting moral and political change. Reformers are often, but not always, outsiders to politics who agitate for change through popular protest and acts of civil disobedience. This may be because the path of ordinary political participation is closed to them, as was the case with the abolitionists and suffragettes of the nineteenth and early twentieth centuries, or because it may seem to them that agitation and protest are the only ways of effecting meaningful change, as Black Lives Matter advocates today seem to believe. In either case, this outsider status often gives reformers a richer sense of possibilities that may not occur to those who have spent their lives operating inside the corridors of power.

The classic expression of this kind of protest politics was Thoreau’s “Civil Disobedience,” which argued that any law that violates the sacred right of conscience has no claim on our loyalty. For Thoreau, this line was crossed with the American war with Mexico and the annexation of Texas, which he considered little more than a land grab. His appeal to conscience has had a powerful hold on our moral imagination, and has been an inspiration to generations of people worldwide, but there is a problem. This particular brand of conscience politics is essentially empty. Conscience politics has inspired everything from the abolitionist movement, to protest against the Vietnam war, to the Kentucky county clerk’s refusal to issue licenses for gay marriages. Are all expressions of conscience equally valuable? Is radicalism a mark of truth? How do we know when appeals to conscience are sincere expressions of a person’s deeply held moral and religious beliefs and when they are just a mask for prejudice and self-delusion? What happens when it is hard to draw the line between beliefs and prejudices? One person’s voice of conscience may be another person’s hypocrisy.

The task of the reformer may be the most difficult of all, because she must know how to split the difference between revolution and restoration. The question posed by the reformer is not “all or nothing” but “how much or how little.” Exemplary reformers have been men such as Mikhail Gorbachev in the Soviet Union trying to navigate the transition from communism to democracy, Deng Xiaoping opening China after the disasters of Mao’s Cultural Revolution, and Nelson Mandela and F. W. de Klerk in South Africa working to bring an end to apartheid. The interesting feature of leaders such as Gorbachev, Deng, and de Klerk is that they were once insiders who ended up advocating for change and leading their countries, at least temporarily in the case of Russia and China, into a more hopeful democratic future.

The best kinds of political reformers are those who manage to combine loyalty to founding principles or ancient traditions with agitation and critique. They are examples of what Michael Walzer has called “connected critics,” because their standards for reform do not come from some private voice of conscience or some putatively universal principle of natural right, but from an appeal to the very standards of justice espoused by the systems they criticize. Examples of connected critics are Camus but not Sartre, Orwell but not Lenin, Gandhi but not Fanon, Martin Luther King but not Malcolm X. This is an idea that would benefit many social justice advocates today.

    Statesmanship, as I noted earlier, rests on knowledge, but what kind of knowledge? Is statecraft a science, like mechanics or engineering, that can be codified in rules and then learned, memorized, and put into practice, as Machiavelli seems to have believed? Or is it more like an art that can only be mastered through practice and experience, and that requires the capacities of insight, imagination, and intuition, like having an ear for music or a knack for languages? I want to consider statecraft as more of an art than a science, based on the mastery of three essential skills. 

    The first is the statesman’s role as teacher. The statesman is not simply a problem-solver in possession of technical expertise or a tribune of the people’s will, but an educator who is able to shape a vision of the political regime. By a regime I mean not just a form of government but an entire way of life, what gives a people’s collective life a sense of wholeness and meaning. Each regime type creates in turn a different range of human possibilities. The differences between regime types form the basis of our ability to distinguish between the various politically relevant ways of life.

    As educator-in-chief, the statesman must always be aware that opinion is the medium of society. As Hume ringingly observed, “It is on opinion only that government is founded.” By opinion he did not mean the kind of information elicited through polling data or focus groups, but a structured set of sentiments, habits, and beliefs that shapes a people’s character and way of life. Without a settled body of opinion, no government, not even the most authoritarian, could last a single day. No one understood the importance of opinion better than Lincoln. “With public sentiment,” he wrote, “nothing can fail; without it nothing can succeed.” He then went on to add: “Consequently, he who molds public sentiment goes deeper than he who enacts statutes or pronounces decisions.”

             The statesman’s art consists, then, in the ability to educate the public mind by helping to form its beliefs and opinions. In the American case, these opinions are rooted in our founding texts — the Declaration of Independence and the Constitution — as well as the immense superstructure of laws, interpretations, and rulings that have been built upon them. These texts have in turn shaped our fundamental experiences of right and wrong, of who should rule and who should be ruled, of who governs and why. These structures of opinion are what make future change possible.

A second feature of statecraft is the capacity for communication. In the phrase frequently applied to Ronald Reagan, the statesman must be a “great communicator.” This is what used to be known as the art of rhetoric. Rhetoric is especially important in democracies, where statecraft consists in the ability to communicate with fellow citizens whose views are decisive in politics and to some extent in governance. It is not enough for a statesman to craft a vision for society; she must be able to harness it in language and persuade others of its power and beauty. As Edward R. Murrow said of Churchill, “He mobilized the English language and sent it into battle.” This is what the greatest leaders have been able to do, namely, to immortalize in words and images what a people stand for, what they believe in, and what they look up to.

More than any other regime, democracy gives to speech a pride of place in determining the legitimacy of policy. Unlike autocracies that are governed from above, democracies require a continual flow of communication not only from the top down but from the bottom up. Democratic politics has for this reason rightly been called “logo-centric” or talk-centered, because most of what goes on takes place through the medium of language, whether in legislative assemblies, jury rooms, courts of law, newspapers, or, increasingly, the internet. Mill said that democracy is government by discussion.

To be sure, the importance of speech can be vastly overstated. The ideal of a rhetorical democracy — parrhêsia in the Greek sense of “to speak boldly” — is at the core of what Jürgen Habermas has called the “ideal speech situation.” Habermas apparently considers politics a vast public seminar that should be adjudicated by the neutral standards of “public reason,” in which only the force of the better argument decides policy outcomes. There are two significant problems with this view. First, it holds public deliberation to a high standard of language and thought that is rarely realized in existing democracies and their media. Consider only what passes for public deliberation in contemporary America: our debased discourse does not exactly rise to the standard of parrhêsia. Second, this view too quickly ignores the more coercive and disciplinary aspects of politics. Democracy is about public deliberation, but it is also about authority, command, and decision. A country, or a legislature, or a court, is not a seminar. The reduction of politics to speech is at the root of what the ancients called sophistry.

    The dependence of democracy on speech or rhetoric can be both a strength and a weakness. This focus on speech may be a source of frustration to those who demand swift, decisive, and concerted action. But deliberation is also necessary for providing a sense of legitimacy for public decisions. As Pericles said of Athenian democracy in his Funeral Oration, “instead of looking on discussion as a stumbling-block in the way of action, we think it an indispensable preliminary to any wise action at all.” 

The third characteristic of the statesman is political judgment. Aristotle named this capacity phronesis to indicate the sphere of prudence or practical reason that he regarded as the political virtue par excellence. Judgment is the form of reasoning appropriate to citizens situated in juries, legislative assemblies, and deliberative bodies of all sorts. It is knowledge of the fitting or the appropriate thing to do under the circumstances. He associated it with the man — and for Aristotle it is always a man — who possesses the skills necessary to manage well the affairs of the political community.

The knowledge of the statesman differs from both the theoretical knowledge of the philosopher and the technical expertise of the specialist. While philosophical knowledge aims at the true or the universal — the ideal regime, the idea of justice — and technical knowledge at the mastery of rules, political judgment is necessarily local and improvisational. The relation of judgment to circumstance is essential for successful statecraft. “Circumstances,” Burke wrote, “give in reality to every political principle its distinguishing color and discriminating effect.” In politics, circumstance is everything.

No one has thought more deeply about the role of political judgment than Isaiah Berlin. Berlin was especially interested in what distinguishes the successful statesman from the philosophical genius and why, by implication, the latter often appears politically foolish. Albert Einstein may have been a brilliant theoretical physicist, but his reflections on world peace seem almost touchingly naïve. Bertrand Russell was a brilliant logician, but his writings on marriage, religion, and war display an alarming indifference to the complexities of political reality. What Berlin deemed essential for the statesman was what he called “a sense of reality.” This meant not merely possessing more facts or information about society but having an almost intuitive grasp of its texture, both its constraints and its potentials.

Judgment in politics is the ability to see possibilities that had not previously been seen or imagined. It is not a matter of knowing more but of seeing further than others. It is the capacity that we associate not with the scientist who uncovers laws and uniformities, but with the creative artist, the poet, the novelist, and the playwright who seek patterns and connections between colors, shapes, characters, and words. Judgment is almost an aesthetic perception, something like the ability to see pattern and coherence in a painting or work of art where others see only chaos and confusion. It is not just a quality of the mind but involves the entire personality, the unique temperament, of the individual.

Good judgment consists not least in the ability to improvise on the spot. Like a musician creating unfamiliar riffs on the familiar chords of a jazz standard, the person of good judgment works within an established idiom while expanding upon and developing the possibilities that are latent within it. It is a kind of analytical imagination. It is the same skill possessed by the master chef who is able to create new combinations of tastes from a familiar palette of choices. Judgment is not the mechanical application of a rule or a fixed standard to changing circumstances — something like the demand for strict consistency — but the ability creatively to adapt rules to new and unforeseen situations and master them.

Good judgment in politics is the quality necessary for successful statecraft. This is not to say that good judgment guarantees success. It is often difficult to say whether success is the outcome of judgment and foresight or just good luck. Sometimes even the best plan may need a little luck. The wisest historians have often considered luck — accident, happenstance, contingency — as a causal power in history. Machiavelli could speak of fortuna as a goddess who can dispense as well as withhold her favors. He claimed that even the most far-seeing statesmen could only control events half the time, leaving the other half to the vicissitudes of fate. In Guys and Dolls, Sky Masterson pleaded that “luck be a lady tonight.” It is a sign of wisdom to recognize the limits of our capacities. Near the end of the Civil War, Lincoln confessed to Albert Hodges that “I claim not to have controlled events, but confess plainly that events have controlled me.”

Great statesmen are judged not only by how they respond to success but also by how well they handle failure. Do they respond with bitterness and resentment or with a sense of magnanimity in the face of defeat? Contrast, if you will, Richard Nixon’s petulant concession speech after his failed gubernatorial run in 1962 with Al Gore’s magnanimous concession speech after the Supreme Court stopped the Florida recount in 2000. The statesman must know how to turn failure into success. FDR’s defeat for the Vice Presidency in 1920 and then his struggle with polio were better training for leadership than his earlier life of privilege. Occasional setbacks are a valuable test of character. Churchill is reported to have said that the mark of success is the ability to go from failure to failure with no intervening loss of enthusiasm. This is witty, but it cannot be true. A person who has met with repeated failure, however enthusiastic, could not possibly be said to have good judgment even if such a person meets with occasional success. As the saying goes, a stopped clock is still right twice a day.

    Judgment is, finally, the ability to respond to unforeseen situations. Preparing for the unpredictable is always the better part of judgment. To be sure, there is no one model for the exercise of judgment. It is all a matter of context and circumstance, but if there is one feature that distinguishes the successful statesman from the day-to-day politician, it is the ability to articulate the permanent and aggregate interests of the community. This requires the ability to plan not just for today or tomorrow but for the future. As Tocqueville — who has acted indirectly as the educator of many statesmen and legislators — said in the introduction to Democracy in America, his book was written not to satisfy any faction or class but to see deeper and further than the different parties. “While they are occupied with the next day,” he wrote, “I wanted to ponder the future.”

I have said, ruefully, that the concept of statesmanship seems old-fashioned, out of touch with the times. Perhaps it is. We vacillate between the view that all politics is essentially a power struggle and the demand that our leaders meet impossibly high standards of moral perfection. We either expect too little from our public figures or too much.

    The ethics of statecraft can be summarized, I believe, in a single word: responsibility. The concept of responsibility grows out of moral and legal language. A person can be called responsible for something when she acts on her own initiative or when her actions can be regarded as the cause of some state of affairs. To be held responsible is connected with terms like causation, guilt, and accountability. Responsibility may seem to lack the grandeur of such ancient moral terms as “duty,” “conscience” and “virtue,” but it is also more suited to the politics of a democratic age. 

    This brings us back to Weber. He explored the theme of political responsibility in great depth, and regarded it as the defining characteristic of the statesman. Political life, Weber argued, is torn between two competing moralities. The first he called an ethic of conviction, which takes different guises but is typical of the moral idealist in politics. The classic form of this ethic was the Sermon on the Mount, with its injunctions to “turn the other cheek” and “resist not evil with force,” but it finds similar expression in the Kantian demand that politics must give way to morality or in the Rawlsian claim that justice is “the first virtue of social institutions.” In each case, politics is regarded as morality by other means. “The believer in an ethic of ultimate ends,” Weber wrote, “feels ‘responsible’ only for seeing to it that the flame of pure intentions is not squelched.” 

Weber tended to associate conviction ethics with the Christian pacifism and revolutionary socialism of the World War I generation, but it is in fact a category of belief and sentiment that has manifested itself throughout history in various times and places. It is the attitude of the moralist who identifies a particular evil — slavery, war, exploitation, injustice — and demands its eradication, not tomorrow, not next year, but today, here and now, immediately, regardless of the cost. A sense of indignation, no matter how well-meaning, then gives rise to the demand for moral action, and soon after issues in fanaticism and violence. As Weber put it, “Those, for example, who have just preached ‘love against violence’ now call for the use of force for the last violent deed, which would then lead to a state of affairs in which all violence is annihilated.” Needless to say, the final act, like the end of history, is a condition that never arrives.

A pure example of conviction ethics was the radical abolitionist William Lloyd Garrison, who advocated no compromise with the Constitution in his opposition to slavery. On an issue like slavery, certainly, it is important to keep alive a pure moral vision, but such visions can only be held by saints, reformers, and “intellectuals.” It took Frederick Douglass and Abraham Lincoln to understand that if slavery were to be abolished, it would not be by shredding the Constitution but by embracing it. Or consider two more recent cases: Daniel Ellsberg and Julian Assange both published official state documents during wartime without considering how such materials would inevitably be distorted and misused. Men such as Garrison, Ellsberg, and Assange are what Raymond Aron once called “technicians of subversion,” who prefer to see their country dismembered or defeated rather than compromise the purity of their ideals.

At the other end of the spectrum, in Weber’s analysis, is the ethic of responsibility. This phrase has often been understood to mean a form of consequentialism, a concern with what will work and what will not, or with what is expedient over what is morally right. This description is not false, but it fails to grasp what philosophers often call “agential” responsibility. This ethic is concerned not with the purity of intentions but with the consequences of action, especially the unintended consequences that may follow from it. This is not to say that it is simply Machiavellianism by another name. Rather, the art of the statesman is concerned with the uses of power and its moral and psychological effects on those who wield this power. What, exactly, are these effects?

First, the ethic of responsibility requires taking ownership of decisions that will invariably inflict harm upon others. Politics, as Weber warned, is always a bargain with infernal powers. There will always be situations where even the nobility of the end will be compromised by the sordidness of the means necessary to achieve it. Lincoln’s decision to prosecute a war to end slavery ended up costing over six hundred thousand lives. He never doubted the rightness of the cause, but the length and the destructiveness of the war took an immense psychological toll on him. Sometimes anguish accompanies virtue. Consider Truman’s decision to drop the atomic bomb on Hiroshima, effectively ending World War II. The decision was, in retrospect, the correct one, but it cannot help leaving a sense of disgust in its wake. Truman never second-guessed his decision, believing that in the end he saved lives, especially American lives, although the philosopher G. E. M. Anscombe subsequently attacked him as a mass murderer. Whatever Truman’s faults — he was not particularly given to moral self-reflection — his decision ended a terrible war and brought peace more swiftly than any other option.

    Anscombe’s protest over the decision of the University of Oxford to confer an honorary degree on Truman is a clear example of the kind of conviction ethics that Weber deplored. No complex decision will be morally blameless, and those who seek a clean conscience and a pure heart should pursue their satisfactions in private life. “The safety of the morally innocent and their freedom to lead their own lives depend upon the ruler’s clear-headedness in the use of power,” the philosopher Stuart Hampshire wrote, perhaps with Anscombe in mind. I would only add that whereas we should not necessarily expect our leaders to be morally paradigmatic human beings, we should at least expect them to be attentive to the needs and the interests of their fellow citizens and call them to account when they fail in this task.

Second, the ethic of responsibility accepts that moral conflict will always be the norm in politics. “We are placed into various life-spheres, each of which is governed by different laws,” Weber wrote. Unlike the idealist convinced that all issues must be subordinated to a single cause, the responsible statesman is aware that he operates in a world of conflicting values that are qualitatively heterogeneous. There is no summum bonum that is equally good for all individuals; there is only a range of values whose importance will be determined by circumstances, education, and personality. The thesis of “value pluralism,” which is now most associated with Isaiah Berlin, has led some critics, notably Leo Strauss, to label Weber (and Berlin) a moral relativist who accords equal legitimacy to all values, however evil, base, or insane. This is an unfortunate misreading that robs moral life of its difficulty and its pathos. The fact that our deepest commitments stand in inalterable conflict was not meant as an exhortation to extremism or to nihilism, but as a counsel of sympathy and moderation. To Barry Goldwater’s call that “extremism in defense of liberty is no vice,” Weber would have replied that not all things are permitted even in the pursuit of a just cause.

    Statecraft is ultimately a matter of choice, not so much between good and evil, but between rival and competing goods that cannot be tidily ranked by some hierarchy of ends or derived from some first principle. There is no one value, whether it be peace, equality, freedom, justice, or rights, that always trumps all others. Rather than seeking the best, responsible statecraft seeks to avoid the worst. If we cannot expect our leaders to follow the Hippocratic Oath to “do no harm,” we should at least expect them to do as little harm as possible. In politics, as in love, a sound maxim is “you can’t always get what you want,” which means that statecraft will always involve the art of balancing conflicting ends and purposes. Deal-making and compromises are the inevitable costs of a morally diverse and politically conflicted society. And the problem of “dirty hands” remains an ever-present possibility.

    Third, an ethic of responsibility suggests responsibility to oneself. “Whoever wants to engage in politics at all,” Weber warned, “is responsible for what may become of himself.” Politics can do strange things to people. It can turn ordinary men and women into monsters. What Reinhold Niebuhr once said of religion is equally true of politics: it makes good people better and bad people worse. Only those who can approach politics with a sense of self-restraint — a feat akin to Ulysses having himself bound to the mast — are capable of responsible leadership. Responsibility requires a “sense of proportion,” the control of the passions, and a degree of detachment from friends and associates. When presidential candidate Bill Clinton told a supporter, “I feel your pain,” he was deliberately attempting to create a sense of intimacy between them, breaking down barriers of formality and restraint, yet it is a characteristic of the greatest leaders to put a sense of distance between themselves and their followers. Lincoln remained aloof even from those who knew him best. In The Edge of the Sword, de Gaulle wrote movingly of the loneliness of command.

Finally, statecraft is an autonomous sphere of political activity related to but under-determined by external moral, legal, scientific, or economic principles. It must be distinguished from the narrowness of the administrator, the dogmatism of the moralist, the pedantry of the lawyer, and the zealotry of the partisan. It is a realm of its own. Statesmanship is not something for which rules — natural law, the Categorical Imperative, the greatest happiness for the greatest number, even raison d’état — can be given, precisely because statecraft requires a freedom or latitude to act as the situation requires. Strategy without flexibility is futile. Statecraft consists in the concrete decisions made under the force of circumstance. When the moment of truth arrives, when it becomes necessary to say, “Here I stand, I can do no other,” then the statesman has found his calling, and this calling will not be provided by Morality, Science, or History, or by any power other than one’s own individual judgment, which is formed by education and experience. There is no set of principles that can define in advance what is to be done in all situations, because no set of principles can control for the mutability of life. As Shelley wrote,

    Man’s yesterday may ne’er be like his morrow;     

Nought may endure but Mutability.

    The Shadow Master

    On July 15, 1945, Rembrandt’s 339th birthday, the Rijksmuseum in Amsterdam re-opened with the most emotionally charged exhibition in its history. Called “Weerzien der Meesters,” or “Reunion with the Masters,” the show gathered one hundred and seventy-five paintings that had spent the five years of the Occupation hidden in bunkers. During those five years, private collections were looted and museums stripped of their greatest works. For all the average person knew, these treasures, like so many others, had been stolen or destroyed in the Nazi terror.

    Now they were making a triumphant return to the center of Amsterdam. From The Hague came Fabritius’s little goldfinch, Potter’s big bull, Vermeer’s pearl earring. From Haarlem came the great Hals group portraits, which were displayed alongside the Rijksmuseum’s own collection — including, of course, the nation’s famous Rembrandts. 

    In 1939, with war looming, the huge Night Watch, eleven by fourteen feet, had been taken to a castle in North Holland, where it was stored in a vault of reinforced concrete. The location proved too dangerous. In 1940, when the Germans invaded, the masterpiece was covered with a canvas borrowed from a local farmer and hastily removed to a bunker in Castricum, closer to Amsterdam: a journey of fifty kilometers that took twelve hours. At one point, when an enemy plane appeared overhead, its escorts took refuge in neighboring fields, leaving the great painting alone in the middle of the road. 

    When it finally reached its destination, its caretakers discovered that it was too large for the entrance, and they had to roll it up. Finally, in 1942, it was taken to a special storage site near Maastricht, where it was kept in a limestone quarry, thirty-three meters underground. The director of the Frans Hals Museum, Henk Baard, recalled the scene: “Through the slow progress of the silent bearers the remarkable spectacle, under its ghostly lighting, recalled a princely funeral.” 

    Now it was back. One hundred and sixty-five thousand people eventually visited the show. In a country that still lacked basic provisions, many of these visitors came on an empty stomach. All understood its promise: that past glory would bring future resurrection. “A people that can display such a parade of greatness shall reclaim its special place,” a journalist wrote. At the opening, a minister declared that the Canadians who liberated Holland “have also liberated Rembrandt and Frans Hals.” 

    More than Hals, Rembrandt needed that liberation. He needed it more, in fact, than any other Dutch artist. Hitler himself had declared him “a true Aryan and German,” and under the quisling regime he had become the focus of a bizarre cult, his birthday even replacing the exiled Queen Wilhelmina’s as the national holiday. Now this unwitting German hero could become, once again, the symbol of the dignity of a free people. 

    Light had chased out darkness. It was precisely the kind of cosmic struggle that Rembrandt had illustrated in his works, though that struggle rarely had such a clean outcome. History was recapitulating the trajectory of Rembrandt’s own evolution. If Vermeer was a painter of light, Rembrandt was a painter of dark, or more precisely, of dark commingled with light; and in his work the question of evil recurs more than in any other Dutch artist’s — so insistently that looking at his pictures is sometimes unbearable. There are more scenes of murder, cruelty, torture, rape, betrayal, malediction, and death in Rembrandt than in any other Dutch painter’s work — by far. 

    Vermeer painted no such scenes. Neither did Hals. Neither did any of the blither spirits, Avercamp or De Hooch or Jan Steen. At the very most, the landscapists and the still-life painters will allude, with a graceful symbol, to mortality, to the passing of time. Most Dutch paintings were made for the wealthy middle classes, and they show things that those people liked to see. Who among them would have wanted a picture such as The Blinding of Samson, in which a silver dagger is plunged into the protagonist’s eye? The painting is so gigantic, two by three meters, that it is hard to look away from it. It is just as hard to look at it: even Delilah can hardly contemplate her victim without a shiver. 

    Rembrandt was so prolific that even the most ambitious museum survey will never capture more than a slice of his work. Books aren’t much use, either: the images end up crammed onto the page, and the monumental quality of Rembrandt’s paintings — their patina, their glow, the sense they give of something physical, like an extraordinary geological phenomenon — goes missing, flattened onto smooth paper that erases the tactility, the brazenness, of his surfaces.

    The etchings and drawings, originally made on paper, fare better in reproduction. But there are hundreds of them, and even the most avid eye can only absorb so much. To try to see more than a few at a time is to be reminded that, as with Dante and Shakespeare and Bach, you cannot rush an acquaintance with Rembrandt. His work took a long time to make, after all: nearly half a century between his earliest productions and the works he made in the days before his death at sixty-three. 

    Add, to the quantity of works and media, the quantity of genres. In England, writers were comedians or tragedians or poets, but only Shakespeare was acknowledged as the greatest in every field. In Holland, Rembrandt, who was ten when Shakespeare died, worked in nearly every specialty known to Dutch art, each of which absorbed the energies — the entire lives — of his most talented contemporaries. 

    And then there is the profusion of his styles. What makes it even harder to form a coherent image of Rembrandt is that he painted in so many different styles. Many Rembrandts do not look anything like the popular idea of a Rembrandt. His early work looks so different from his later work that the early works were not even recognized as such until deep into the nineteenth century. Over the years, all sorts of ghastly paintings have been attached to his name — including some that he actually painted.

    There are still rediscoveries today, though these are nearly always of lesser works that seldom add more than a footnote to the image that has emerged from two hundred years of dogged scholarship. The lacunae in that scholarship yawn only when measured against a demand for completeness. We now know as much about Rembrandt as we know about almost any figure, artistic or otherwise, of his century. We have a large group of works. We know what they show. We know when they were painted — and, often, why and for whom. We can see that some themes interested the master only for a short period. We can see that others were there from the beginning and stayed with him until the end. Some come back in every medium, in every style, at every point in his career. 

    One such recurring theme is violence — evil — darkness.

    The spectacle of cruelty is there in his earliest signed painting, The Stoning of St. Stephen, painted in 1625, when he was nineteen. This work shows a crowd surrounding the first Christian martyr: stones held high, ready to smash him to pieces. Unlike the Samson, it is not hard to look at — it is too much of an apprentice piece to stir real emotion; but though he is not visibly joining in, it is a bit disturbing — and premonitory — to find a chubby teenaged Rembrandt among the crowd. 

    A few years later, the novice has matured into a master. Rembrandt was twenty-six when he painted The Anatomy Lesson of Dr. Nicolaes Tulp. This is a group portrait of eight men around the corpse of Aris Kindt, who had been convicted of armed robbery and executed earlier that morning. The men are wearing neat clothing, and their expressions range from technical curiosity to a keen realization that they themselves will soon be just as dead as the body that lies before them. They are arranged around the body so that you, the viewer, step right up to the circle, invited to join the grisly academic proceedings. Despite the decorum of the scene and the sober expressions on the doctors’ faces, the smell of rot tickles your nostrils. 

    You could derive a positive message from this painting. Those who commissioned it, if not the artist, intended you to derive uplift from it. The march of science! The doctor is instructing the public with lessons that demonstrate the Dutch commitment to progressive education, lessons that were commemorated with such paintings because the Dutch medical societies were prestigious and famed; Rembrandt was one of many artists invited to paint them. When, thirty years later, he returned to the theme, he showed Dr. Jan Deyman dissecting Johan Fonteyn, who had committed the crime of breaking into a draper’s shop and pulling a knife. This anatomy lesson took place on the day after his execution. Of this painting, damaged in a fire in the eighteenth century, only two central figures, a spectator and the criminal, survive. All we see of Dr. Deyman are the hands peeling back Fonteyn’s bright red brains.

    Rembrandt’s criminals have a presence that bodies in other portrayals of anatomy lessons do not. In some of these, the doctor displays a skeleton. Frederik Ruysch, who succeeded Dr. Deyman as Amsterdam’s city anatomist and who was the father of the still-life painter Rachel Ruysch, was painted with rosily blooming cadavers, like Greek nudes, on the dissecting table. He was famed for making dead bodies look alive. Rembrandt’s bodies, by contrast, are unequivocally dead — and he arranges us, like the doctors, around them. 

    We have to look at these cadavers, just as we have to look at poor Elsje Christiaens, a teenage girl who was strangled in 1664 for killing her landlady, apparently in self-defense. She was executed on the Dam, the central square from which the city takes its name (“the dam on the Amstel”). Her body was hung on a gibbet in Volewijk, across the River IJ, where it was to remain “until the winds and birds devour her.” It was there that Rembrandt saw her, strung up like a doll. He drew her twice. 

    Did any other Dutch artist show a dead body this way? A well-known image of the disemboweled De Witt brothers, murdered by a mob in 1672, comes to mind; but the painter, Jan de Baen, was undistinguished, and the picture is remembered mainly because the De Witts were among the most powerful politicians in the Netherlands. The painting, shocking and grotesque, shows a significant historical event — not the death of an obscure eighteen-year-old girl.

    So it goes with many other themes. Often enough, you can find something comparable, somewhere. There are plenty of dead animals in Dutch painting, for example, but there is no picture quite like the Still Life with Peacocks in the Rijksmuseum. Here one bird lies in a pool of its own blood. Another is hung by its feet, its mouth still agape, as if to protest its murder. It is both exquisite and excruciating: the hallmark of Rembrandt. 

    Look, too, at The Slaughtered Ox. Red as the brains of Johan Fonteyn, the dead animal hangs from a wooden beam. “Slaughtered” is not quite the right word. The French title uses écorché — flayed, skinned — and no Christ in the whole Louvre captures the pathos of sacrifice like this harrowing carcass. The painting contains no religious references. But it stirs the same feeling that the most sacred mysteries evoke. 

    Do we identify with the ox? Or with the servant girl, barely visible, looking at it? If the anatomy lessons invite us into the circle of the learned doctors, into their progressive and prosperous institutions, our eyes go nonetheless directly to the dead men at their center. The light is on them; they radiate sanctity. They are not martyrs, but they are somehow numinous. It is not an accident that Rembrandt placed the criminals in the position where the dead Christ was placed in earlier paintings — or that he crucified the ox, and Elsje Christiaens.

         These works are not overtly religious, but Rembrandt painted plenty of religious works, too. If their contents reflect the mood of the man who created them, so does their very existence, since there was so little commercial incentive to create them. In post-Reformation Holland, on the contrary, they were unfashionable to the point of career suicide. The great German art historian and curator Max Friedländer (who in 1939 had to quit Berlin for Amsterdam because he was a Jew) could credit an entire tradition to a single artist:

    A view of the whole of Dutch production in the seventeenth century tells us that, where it was animated by any receptive interest in the religious picture, this was Rembrandt’s personal achievement or was at least set in motion by him. It was Rembrandt who, from spiritual predilection, bequeathed the non-ecclesiastical religious picture to the reformed North, which was ready only to a very limited extent to accept this present with gratitude.

    Rembrandt’s “spiritual predilection” sought extremes, and ways to portray them. Bourgeois moderation was not to his taste. The rough clash of light and dark could be rendered graphically, in paint, as in The Supper at Emmaus, painted when he was twenty-two: as the resurrected Christ reveals himself to a disciple, the dark Savior is surrounded by a halo of blazing light whose hidden source lends him the majesty of a mountain. Light triumphs over darkness. 

    But not always. Often light does not have the last word. Rembrandt shows the damned as well as the saved. In Belshazzar’s Feast, the impious Babylonian king gazes in terror at the ominous writing on the wall. In Uzziah Struck With Leprosy, a Judean king is punished for profaning the temple. There is nothing picturesque about these exotic scenes of hubris humbled. They fulminate, they rage, they terrify, they denounce; and if their warnings are warnings to others, they are also, you feel, warnings to the artist himself. 

    There is no gentleness here, nothing tame or easily digested. We are in the presence of an Old Testament prophet — he painted many — who, we sense, was well acquainted with the extremes that he depicted. The biographical evidence bears this feeling out. Though Rembrandt became a kind of secular saint in the nineteenth century, twentieth-century researchers discovered that he was, in Gary Schwartz’s words, a “cocktail of litigiousness, untrustworthiness, recalcitrance, mendacity, arrogance, and vindictiveness.” It turned out that Rembrandt’s contemporaries had almost nothing nice to say about him. No artist of his time could boast the number of disagreeable incidents that peppered his life. He didn’t pay his bills; he was tactless and rude and prickish; he was cruel to his mistress. “To sum it up bluntly,” Schwartz writes, not uncontroversially, “Rembrandt had a nasty disposition and an untrustworthy character.”

    In a list of appalling incidents, Rembrandt’s treatment of Geertge Dircx stands out. When his wife, Saskia, died at the age of thirty in 1642, she left him a nine-month-old son, Titus. Rembrandt hired Geertge to take care of Titus. He and Geertge became lovers, and they were together for six years. Geertge intended to marry the widower, and she claimed that he had promised to do so — until he began a relationship with his housekeeper, Hendrickje Stoffels. Lawsuits ensued. Rembrandt promised Geertge alimony. In the meantime he collected unflattering testimony about her, which he used to have her committed to the Gouda spinhuis, an atrocious institution for women who had “fallen” for a long list of reasons, from prostitution to insanity. She was desperate to get out. Rembrandt was desperate to keep her there. After five years, she was released, and died soon thereafter. 

    A long-ago dispute between embittered former lovers can be read in any number of ways. In books and films, Geertge has been portrayed as a conniving, gold-digging temptress — and, more recently, as a victim of a man’s determination to get her out of the way. If it weren’t for all the rest of the abundant evidence of his rebarbative personality, we might, in this case, be more inclined to give Rembrandt the benefit of the doubt. 

    Why, in any case, should we care? Surely other painters were obnoxious in ways that history has hidden. For all we know, Adriaen Coorte liked little girls, and Jacob van Ruisdael cheated on his taxes: when names fade and personalities fall away, only an artist’s work — and then usually only a portion of it — remains for us. We do not wonder whether the painter of an Egyptian fresco was likeable. 

    Four hundred years later, we wouldn’t wonder about Rembrandt either — except that the conflicting accounts do bother us. The reason is that we love him. We know him so well, after all. We can see him: his work, and also him, since no artist ever exposed himself as repeatedly and as nakedly. From the adolescent among St. Stephen’s tormentors to the valediction that Jean Genet described as “a sun-dried placenta,” around eighty of his self-portraits survive. The number is astronomical. We have no idea what most painters look like, but except for his childhood we can see Rembrandt at every phase. From the proud young man to the imperious genius to the wrecked patriarch, we can watch his life pass before us; and when, in a museum, we come across him in a new guise, we greet him as an old friend. We know him so well. Is this despite his darkness, or because of it? 

    The darkness is not, in any case, a secret. Even today, when personal confession has become a painterly genre of its own, it is hard to think of an artist who revealed himself so remorselessly. We see him as we see Aris Kindt or Johan Fonteyn or Elsje Christiaens. The difference is that, if the ox was flayed by the butcher and Aris Kindt was dissected by the doctors, Rembrandt did this to himself. In the light of this destiny, do earthly transgressions matter?

    “The West too has known a time when there was no electricity, gas, or petroleum, and yet so far as I know the West has never been disposed to delight in shadows,” the Japanese novelist Junichiro Tanizaki wrote in 1933. He described traditional lacquerware that “was finished in black, brown, or red colors built up of countless layers of darkness, the inevitable product of the darkness in which life was lived.”

    Was Tanizaki familiar with Caravaggio, whose tenebroso style, which used darkness to make light shine all the more radiantly, Rembrandt perfected and transcended? (In contrast to much Japanese painting, which dispensed entirely with light and shade.) In the West, the age without electricity had not quite ended when Tanizaki wrote those words. It stretched into living memory: the Mauritshuis, where The Anatomy Lesson of Dr. Tulp hangs, did not acquire electric light until 1950. How did these paintings look when our grandparents saw them in “the darkness in which life was lived” — and how did they change when first seen under artificial light? Nobody I know of recorded their impressions. 

    Did the Japanese really assign a moral value to darkness? The temptation to such symbolism is universal. Perhaps Tanizaki was a romantic, but at the very least — as you feel that Vermeer’s light has a meaning that exceeds the visual requirement to illuminate — it is possible that Rembrandt’s darkness has a role akin to the one that Tanizaki describes: “Our ancestors presently came to discover beauty in shadows, ultimately to guide shadows towards beauty’s ends.” 

    Yet Rembrandt’s contemporaries did not always consider his shadows beautiful. In his lifetime and beyond, they were often criticized as no more than a murky waste of space. Many great Rembrandt portraits are indeed little more than heads, and sometimes hands, peering out of the gloom, or floating in it, and if we imagine them in pre-electric rooms under the moody skies of Holland, we have to imagine them even darker.

    Look at the late portrait of Margaretha de Geer, from 1661. In the alert and penetrating eyes, and in the right hand that grips a handkerchief, and in the left hand that resolutely holds on to the arm of her chair, as if to launch her at the viewer, you can see her great power. She is one of the richest women in Europe — yet she is very old, and her face, served on a bright round millstone collar as on a platter, seems about to dissolve into the darkness that surrounds her. 

    For all the startling physicality of their surfaces, there is a ghostliness to these portraits, an immateriality, including those that Rembrandt made of himself, that makes them more haunting than any other art of their time. If The Blinding of Samson is awful to look at, it is so theatrical that it troubles us less than Margaretha. The picture was designed to hang in her children’s house. One shudders to imagine them walking past it at night, the matriarch illuminated by flickering candles.

    Eventually the critical tide turned. Rembrandt’s darkness acquired a positive value, coming to form a crucial part of his myth as a forerunner of the nineteenth-century Parisian bohemian. Especially the tenebrous late works — those final utterances of the discarded old genius, scorned by those who once had courted him, rotting in his cheap lodgings on the Rozengracht — were equated with spiritual profundity.

    Rembrandt’s darkness was viewed so positively that his works were even darkened artificially. The varnish that protects paintings needs to be replaced every fifty or so years, before it decays and darkens; but sometimes, at the insistence of curators, it was deliberately left on, so as not to lighten the work. Deep into the twentieth century, some restorers added pigments to new varnish to darken the pictures. The practice was not restricted to Rembrandt. “A good painting, like a good fiddle, should be brown,” wrote the painter and patron Sir George Beaumont in the nineteenth century. This brownness, known as “gallery tone,” may have seemed appropriate to objects prized, among other qualities, for their antiquity; perhaps here is an echo of the “beauty in shadows” that Tanizaki did not believe existed in the West. 

    In unlit galleries, covered with decaying varnish, how shadowy these paintings must have been! Did the darkness make Rembrandt more mysterious, more de profundis — or did it make him illegible? If darkening seems inappropriate from a scientific perspective, it doesn’t strike us as inappropriate for Rembrandt the way it would for another painter. You wouldn’t darken an Avercamp or a Metsu — much less a Vermeer, whose genius lies in the uncanny suffusion of light in space. 

    But Rembrandt is, after all, dark. Yet his darkness does not always have a negative implication. In his several renderings of the apocryphal story of Tobit, for example, he shows the old man whom God has blinded in order to test his faith. While his son Tobias seeks a cure — this turns out to be the entrails of a monstrous fish — Tobit and his wife Anna stay home, patient and impoverished: resigned to their lot, firm in their faith. In Anna and the Blind Tobit, the old man sits in a ramshackle room, his face turned from a light he cannot see. Anna uses a ray from the window to wind wool on a frame; but most of the room, and most of the painting, is dark. The mood is of humility, not of expectation. We know that Tobit will be cured, but he himself has no such knowledge. His reward is unseen, and unforeseen. Faith — darkness — faith in darkness — is all he has.

    In her preface to The Passion According to G.H., Clarice Lispector warned that the book should only be read by “those who know that the approach, of whatever it may be, is done gradually and painstakingly — passing through even the opposite of what it’s going to approach.” The phrase applies to Tobit — and to Rembrandt too, since his darkness, even in his bleakest paintings, always contains an admixture of light. 

    The master was a moralist. He was a sensualist, too. The early works reveal a love of splendor — ostentation, even — and the paintings often contain a glint of gold. This was more than a color, or a taste. In the dark rooms where these paintings hung, gold had a practical purpose (“the extravagant use of gold,” Tanizaki wrote, “gleams forth from out of the darkness and reflects the lamplight”) and a representational one. And in its way it reflected the shiny prosperity of the society in which these pictures were painted. 

    In the self-portraits, as life takes its toll on the cocky young man, the light that had illuminated his figures from the outside begins to move inside. The background darkens, the atmosphere turns into mist — and the figures glow, like the fierce eyes of Margaretha de Geer, with something otherworldly. The artist becomes sadder and older — and grander, more imposing, more profound, the inner light shining all the more intensely — because of the approaching dark.

    As Rembrandt ages, the light in his paintings takes on an added luster, and with it an added meaning. It reveals a view of the world as a contest between light and dark that is, at heart, religious: of the world as a theater in which good and evil are intertwined, and in which good only occasionally triumphs. Sometimes, as for Belshazzar, crime meets its just punishment. Often, as for Samson, it does not. Does Rembrandt think it matters? “He didn’t care about being nice or mean, surly or patient, grasping or generous,” wrote Genet of the late Rembrandt: “He didn’t have to be anything more than an eye and a hand.” Now life has taken everything from him. All that matters is his art — and so, “with dirty fingernails,” the erstwhile lover of gold is now shuffling “from the bed to the easel, from the easel to the shitter.” He is beyond good and evil, or trapped in their mixture, living with light and with dark, beyond perfect clarity.

    A conflicted personality has been reconciled. The artistic and the spiritual are no longer in conflict; and in his last months Rembrandt returned, after thirty years, to the parable of the prodigal son. This is the story, from the Gospel of Luke, of two brothers. One stays home faithfully with his father while the other squanders his fortune carousing with whores. Eventually, forced to work as a swineherd, he envies the pigs. The theme had occupied Rembrandt since his youth. In the mid-1630s, he painted himself and his wife Saskia in a tavern scene, The Prodigal Son in the Brothel. There is a peacock pie on the table, and Rembrandt’s golden sword pokes out at the viewer. It is not conventional for a painter to portray himself as a wastrel, or his wife as a hooker; but this, the painting declares, is not a man interested in convention.

    Thirty years later, Saskia was dead. Titus was dead. Geertge and Hendrickje were dead. He himself would follow soon; but before he went, he painted the story one more time. This time he chose another moment: the return of the prodigal son, when he comes back, humbled and repentant, causing his father to rejoice. The father dresses him richly, “putting a ring on his hand, and shoes on his feet,” and orders the fatted calf killed. “Lo, these many years do I serve thee, neither transgressed I at any time thy commandment: and yet thou never gavest me a kid,” the virtuous son protests. “But as soon as this thy son was come, which hath devoured thy living with harlots, thou hast killed for him the fatted calf.” But as Christ explains, one repentant sinner causes more joy in heaven than “ninety and nine just persons, which need no repentance.”

    Rembrandt, who painted so many Hebrew scenes of vengeance and sacrilege, now paints this epitome of Christianity, a scene of forgiveness and redemption and love that, even by the master’s own standards, is stately and symphonic: the ragged, pathetic son kneeling before his old father, who gazes at him through half-open eyes. Though they are surrounded by darkness, the light is upon them.

    Without passing through darkness — through the opposite of what he meant to approach — the son could never come into the light of the father. Light cannot exist without darkness, nor virtue without sin. They are intertwined in every life. The electricity coursing between these magnetic poles — between Dr. Tulp and Aris Kindt, between Delilah and Samson, between the butcher and the ox — was the very subject of Rembrandt’s art.

    Forced to a Smile

         An epitaph — the short inscription on a tombstone — normally names the person buried there, praises that person’s admirable qualities, and then hopes for a benevolent future after death. The gravestone may speak to the viewer in the dead person’s voice (as Coleridge imitates the Latin Siste, viator: “Stop, Christian passer-by, stop, child of God! / O, lift one thought in prayer for S. T. C.”) or it may speak as a mourner addressing the buried person (as in the Latin, Sit terra tibi levis, “May the earth lie light upon you”). In his Essay on Epitaphs, written a few years after William Cowper’s birth, Dr. Johnson restricts epitaphs to “heroes and wise men” deserving of praise: “We find no people acquainted with the use of letters that omitted to grace the tombs of their heroes and wise men with panegyrical inscriptions.” The readers of “Epitaph on a Hare” by William Cowper (pronounced “Cooper”) would have expected just those qualities in any epitaph: it would celebrate a male either wise or heroic, and its praise would be public and formal. (The Greek roots of “panegyric” mean “an assembly of all the people”.)

             Against such prescriptive forms, the only obligation for an ambitious poet writing an epitaph is to be original. The form becomes memorable by dispensing with or altering conventional moves: Yeats brusquely repudiates Coleridge’s Christian “Stop, passer-by,” in his own succinct self-epitaph: “Cast a cold eye / On life, on death. / Horseman, pass by!” Keats, dying in his twenties, refused the first, indispensable element of an epitaph, a name, and wanted only “Here lies one whose name is writ in water.” 

             As soon as animals became domestic pets, they could become the subject of an epitaph; Byron wrote a long epitaph on his dog, and had it inscribed on a large tombstone. (On the grounds of at least one of the colleges at Cambridge, there is a cemetery for pets of the dons which includes inscribed tombstones and small sculptured monuments.) Nowadays, in a practice that would have scandalized the pious of past eras, newspaper death notices in the United States commonly include, among the named survivors, domestic pets. The subject of Cowper’s epitaph is not domesticated, but wild — “a wild Jack hare” — not a hero, not a human being, hardly even a pet, but one nonetheless named and distinguished from its fellow hares.

         The most original epitaph for a pet in English literature, Cowper’s “Epitaph on a Hare” is a poem utterly dependent on charm. Poets writing on death have traditionally preferred to create either a somber “philosophical” meditation (on time, regret, the afterlife, and so on) or a direct expression of personal grief. By contrast, charm in lyric requires a complex management of tone: it can be neither single-mindedly earnest nor single-mindedly sorrowful, nor can it be unconscious of its hearers. It is a social utterance. It needs a stylized attitude of wistfulness and irony, a blending of the impersonal with the personal, of the independent mind with the troubled heart, and above all, it requires an evident awareness of itself and its listeners. 

             In real life, charm is almost as rare as exceptional beauty: beauty is Fate’s gift, but charm is a quality of personality and behavior. And charm is always remarked with a lightness of tone; it concerns something small, not sublime or heroic. The praise of charm is always tinged with pathos, charm being such a transient quality. Yeats, reflecting in “Memory” on the women he had loved (if imperfectly) over a long life, comments on the relative rarity of loveliness and charm among those women: “One had a lovely face / And two or three had charm.” But neither loveliness nor charm could transfix him for life, as had the wild beauty of Maud Gonne’s presence:

    One had a lovely face,

    And two or three had charm,

    But charm and face were in vain,

    Because the mountain grass

    Cannot but keep the form

    Where the mountain hare has lain.

    That his love for Gonne was a quality of the flesh is stipulated by Yeats’s finishing this little poem with an unignorable match of the botanical and the animal: the mountain grass cannot but keep the “form” (the image impressed on it by the couched mountain hare). Grass-bed and hare belong to each other not because of any human kinship of “mind” or “soul,” but because (Yeats’s repeated noun tells us) both are denizens of the mountain, grass and flesh born of the same territory. 

         Yeats chooses a formal rhyme scheme for his poem on unforgettable beauty, but his slightly unsettling scheme does not employ the familiar couplet or quatrain; instead, it is a freestanding sestet, abcabc. And its slant rhymes are at first uncertain: does “grass” indeed rhyme with “face”? Will “form” eventually rhyme with “charm”? Only at the sixth line, where “lain” emphatically rhymes with “vain,” is the scheme fully intelligible. So unprecedented, so confusing, is heroic beauty that an unsettled air must hover over the lines until the conclusive arrival at “lain.” 

             In Yeats’s “Memory,” charm is somewhat bewildering, a possession of only “two or three” in an erotic lifetime; it comes etymologically from the Latin carmen, “song,” and is related to “incantation.” It has magic power, it lays a spell, it is alluring, it overcomes resistance, it “pleases greatly” (according to my dictionary). On the other hand, unlike striking beauty, charm has to be ascribed to something relatively approachable, of a domestic size, like the “charm” on a “charm bracelet.” It never claims too much; it can never be theatrical. And something about it is odd, as Robert Herrick knew: it is odd to be sexually “bewitched” by something which is rationally off-putting (distracting, neglectful, careless) but psychically fascinating, since it intimates a “wantonness” within: 

    A sweet disorder in the dress

    Kindles in clothes a wantonness;

    A lawn about the shoulders thrown

    Into a fine distraction;

    An erring lace, which here and there

    Enthrals the crimson stomacher;

    A cuff neglectful, and thereby

    Ribands to flow confusedly;

    A winning wave, deserving note,

    In the tempestuous petticoat;

    A careless shoe-string, in whose tie

    I see a wild civility:

    Do more bewitch me, than when art

    Is too precise in every part.

         Our contemporary master of charm in verse was James Merrill, who, at 62, dared to close his eight-sonnet sequence on opera, “Matinées,” with a version of the naive note of thanks (made into halting verse) that he had sent, at the age of twelve, to his mother’s friend who had invited him to join her at the Metropolitan Opera for Das Rheingold. Miraculously, the note has mutated into a childishly “awkward” sonnet (following on seven sonnets of symphonic eloquence):

    Dear Mrs. Livingston,

    I want to say that I am still in a daze

    From yesterday afternoon.

    I will treasure the experience always — 

     

    My very first Grand Opera! It was very

    Thoughtful of you to invite

    Me and am so sorry

    That I was late, and for my coughing fit.

     

    I play my record of the Overture

    Over and over. I pretend

    I am still sitting in the theater.

     

    I also wrote a poem which my Mother

    Says I should copy out and send.

    Ever gratefully, Your little friend . . .

    The “little friend” is still shaky on prosody, while proud of his rhymes. And by replicating, mistakes and all, the perfect rapture he expressed at twelve, Merrill demonstrates with witty charm that he is as susceptible now as then to the effect of the rising of the curtain on the music of the Rhine maidens, “Nobody believing, everybody thrilled.” The charm also lies in his decision to let his youthful mistake stand: Das Rheingold has a Prelude but no “Overture.”

             Some usual elements of poetic “charm” in lyric, then, are a slightly perplexing initial effect, unconventional elements (of topic, of addressee), a wayward use of genre, ironic sidelights, and a playful spirit. They all meet in William Cowper’s surprising epitaph-poem. 

    Seeing an elegiac commemoration of “a wild Jack hare,” we wonder how such an epitaph came to be composed, and why it is so moving. Its success arises from the double self-awareness of the poet; he is fully conscious of his own actual grief and equally conscious of the unconventional and comic way in which he is speaking. Above all, he expects his readers to follow his own amusement at the mixed language that he must invent for such an unlikely subject without losing sight of what exigencies call forth its parodic features.           

         William Cowper, who was born in 1731 and died in 1800, was the son of an English clergyman, and was himself trained for the law. After a beatific episode in which he felt close to, and loved by, God, he fell into a lifelong despairing conviction that he was predestined to be damned, eternally unredeemable. He was hospitalized for months after a suicide attempt, and was unable in life to practice his profession. Retreating from professional life, but with a small inheritance, he took up residence with Morley Unwin, a clergyman friend, and his wife and child; and when the clergyman died, he continued to live with the compassionate wife, Mary Unwin, who devoted herself to him and was his chief human comfort during his recurrent periods of insanity. 

    Over time, in his saner periods, Cowper became the author of many essayistic pentameter poems that range from peaceful descriptions of pastoral life to outspoken denunciations of colonial slavery. But he also wrote trenchant introspective lyrics, of which the most famous is “The Castaway,” a “posthumous” past-tense description of his own death, comparing it to the fate of a sailor who fell overboard and could not be saved. Recalling Jesus’ calming of the waves of Galilee with “Peace, be still,” Cowper says bitterly that he and the doomed sailor had no such resource, none:

    No voice divine the storm allayed,

    No light propitious shone;

    When, snatched from all effectual aid,

    We perished, each alone:

    But I beneath a rougher sea,

    And whelmed in deeper gulfs than he.

    The devastating effect of “We perished, each alone” is outdone by Cowper’s two-line tragic footnote, a trapdoor to a worse hell than the sailor’s: a “rougher” and “deeper” fate lies in religious despair than in bodily death.

         Cowper’s mother died when he was not yet six, and five of his siblings also died. As an adult — unmarried, childless, profoundly melancholy, suicidal, on several occasions wretchedly confined for insanity — Cowper must have been one of the loneliest poets of our language. Isolated at the house in Olney that he shared in his adult life with Mary Unwin, he built wooden cages in which he kept as pets first a single hare, which he received as a gift, and eventually three wild male hares. They spent the day in the garden, and at evening Cowper would admit them to the parlor, tenderly watching them play together in his presence. He wrote an essay-letter for The Gentleman’s Magazine describing them — “Puss, Tiney, and Bess” (all males) — and revealing, though reticently, the extent to which they benefited him during his anguished depressions. He perceived, he confessed, “that in the management of such an animal, and in the attempt to tame it, I should find just that sort of employment which my case required.” 

    Cowper nursed his hares when they were ill, carried them about in his arms, and dutifully took to obeying their wishes, studying their disparate temperaments. Puss, as he explained to readers of his magazine piece, was grateful to him for the care he showed, but “Not so Tiney. . . if, after his recovery I took the liberty to stroke him, he would grunt, strike with his fore feet, spring forward and bite. He was, however, very entertaining in his way, even his surliness was matter of mirth.” Bess was “a hare of great humour and drollery,” and became tame “from the beginning.” Cowper’s letter describes dispassionately the hares’ diet and their seasonal preferences (“During the winter, when vegetables are not to be got, I mingled their mess [i.e. meal] of bread with shreds of carrot,” and so on). Throughout the essay, Cowper endeavors to persuade his reader that hares are the most appealing of animals: the “sportsman,” hunting not for food but merely to kill, “little knows what amiable creatures he persecutes, of what gratitude they are capable, how cheerful they are in their spirits, what enjoyment they have of life.”

         Besides this reminiscent essay and his “Epitaph on a Hare,” Cowper added, to keep Tiney alive in memory, a Latin epitaph in prose: “Epitaphium Alterum” (“Another Epitaph”). Like the English poem, it begins with the conventional “Hic jacet,” “Here lies,” and repeats the conventional address to the passer-by, but it still divagates from the classic human epitaph in celebrating Tiney’s lucky life, sheltered by his owner from both human predators and the unkindness of nature: “No huntsman’s hound, no leaden ball, no snare, no drenching downpour, brought about his end.” The epitaph closes unconventionally, too, as the mourner unexpectedly assimilates his own death to Tiney’s: “Yet he is dead— / And I too shall die”: “Tamen mortuus est— / Et moriar ego.”

    So, flanking the verse “Epitaph on a Hare,” we find the detailed gentlemanly letter and the Latin epitaph, both in prose, each more public than the poem; and it is against such relatively impersonal documents that the “Epitaph on a Hare” shines in its humor and its sadness. Almost every stanza contains a surprise. In the first, we are introduced to the mysteriously protected life of an unnamed wild, not domestic, animal; in the second, we encounter the initially withheld pet-name (which “should” have immediately followed the “Here lies”) and also the reversal of the usual superlatives (not “noblest” but “surliest”); in the third, the mounting list of the hare’s doings, climaxing not with a heroic or saintly action but rather with the doubly stressed comic end-words, “would bite.” The mourner has been obscured, too; his relation to the hare is given only meagerly in the third stanza, with the unrevealing phrase “my hand.”

         These strange and deviant beginnings are, as I say, surprising in themselves, but the great triumph of the poem comes in its next four stanzas, the ones on Tiney’s diet and behavior. It takes a bit of time for us to understand that Cowper is parodying the doting diction of a young mother, who assumes, in her maternal fondness, that her interlocutor-bystander is as interested as she in her baby’s important dietary preferences and daily amusements. Translated to our contemporary moment, the young mother would be earnestly explaining her endeavors to feed her baby the choicest of items and expressing her chagrin when a store has run out of a favored ingredient: “Jimmy really adores the Gerber mixed berries, but there wasn’t a single jar on the shelf, and I was worried, but I did find the cereal and the applesauce that he usually has for breakfast, and some favorite vegetables, puréed peas and squash. And then I found a new mix, too, with chicken in it, that he was willing to try when I gave it to him for dinner.” The bystander hopes that this is the end of the recital, but no, now it is her Jimmy’s behavior — how much he clings to his stuffed animals, especially the pet elephant, and how vigorously he pedals in his little swing. Nor does she stop there, but advances to her baby’s preferred time of day and his response to a change in the weather: “You know, when everything settles down after dinner, he’s much more playful, and then, when a storm is coming, he senses it and gets really excited.” By this time the bystander is backing away.

         Cowper parodies the dilated intimacy of the mother’s discourse with much amusement, listening to himself. The interminable list of foods, and the owner’s anxiety if something cannot be found, spill out on the page in an excessive inventory of ten items. Difficulties yield to happy solutions as Cowper continues to imitate “maternal” anxiety (“and then, if I lacked thistles, I’d find lettuce for him”). We are made to feel the wild hare’s joy as he “regales” on his special provender. (The Oxford English Dictionary cites John Adams in 1771, resolving to make a pool with clear water, so that “the Cattle, and Hogs, and Ducks may regale themselves here.”) As the named foods become more adjectivally specific — “twigs of hawthorn,” “pippins’ russet peel,” “juicy salads,” “sliced carrot” — the owner’s extravagant affection mounts. The list ends with the unconcealed triumph of the owner over seasonal scarcity, as he succeeds in substituting alternate foods for scarce ones. Has there ever been a more absurd climax than the proud victory of Tiney’s owner announcing that “when his juicy salads failed, / Sliced carrot pleased him well”? And has there ever been a public epitaph that listed the epicurean delights of a lovingly chosen cuisine for an ungainly pet?

             Cowper is a past master of tone and detail. Not only can we hear the tone in which each detail is given, we are even prompted to intuit tones that must have preceded the present ones. We can infer the owner’s anticipatory devotion in slicing up all those carrots, reflecting how pleased Tiney will be as he approaches his dish. And Cowper is also a master of diction, knowing just how to join Tiney in his “gambols” by releasing a coarser language: Tiney “loved to . . . swing his rump around.” The anatomical phrase brings a farmer’s speech hovering into view.

             The owner of the hares mimics his own worry about Tiney’s aging by slipping directly into Tiney’s very mind, imagining him counting down his years and months of self-indulgent life:

    Eight years and five round-rolling moons

    He thus saw steal away,

    Dozing out all his idle noons,

    And every night at play.

    The poet’s worry was warranted; Tiney died at nine. And here Cowper at last reveals why Tiney is allowed into his house. It is the poet’s first-person confession that makes the whole poem grow in stature and grace:

    I kept him for his humor’s sake,

    For he would oft beguile

    My heart of thoughts that made it ache,

    And force me to a smile.

    The anxious diet-procurement, the seasonal schedule of feeding, the protection from predators, the nightly play — these indeed “beguiled” the poet, as they beguile the epitaph itself, until aching thoughts and a forced smile expose the death’s head of the poet’s suffering being. Between the separated words “heart” and “ache” lie the terrible fears and the hopelessness in which the poet lives. Those two monosyllabic lines — like the fatal “deeper” and “rougher” comparatives of “The Castaway” — intensify the atmosphere to an acute register of pain. That intensity then casts a piercing backlight on the whole epitaph: back over the startling characteristics in “surliest” and “would bite”; over the foolish fondness of “juicy salads” and “sliced carrot”; over the aesthetic appreciation of the contrast between the hare’s skips and gambols and the heartier pleasure when he would “swing his rump around”; and over the poet’s “beguiled” observation of the hare’s vicissitudes of response to the weather. The watching, the devotion, the feeding, the cherishing — all the instances of care — are then decoded, with hindsight, by the reader as daily evidence of the aching thoughts and the rare smiles. The unsettling strobe-effect (charm/sorrow, beguilement/ache, play/loneliness) persists in every rereading. The flicker between comedy and heartache is the chief resource of Cowper’s charm.

    But there are many others: the genuineness of Cowper’s loss flickers between the solemn epitaphic frame (from “Here lies” to the ecclesiastical “long, last home”) and his elation at Tiney’s animal liveliness, between “here lies” and “would bite.” We are charmed not only by the proprietorial boast of the opening (that Tiney was successfully spared, by his assiduous owner, the ritual danger of the morning hunt), but also by the closing view of the hare’s affection for his two precariously remaining companions. Finally, we are touched by the way Cowper’s past-tense narrative presses forward to amalgamate itself into the “now” and the “this” of the imminent moment of parting. We are made to feel the gap between the poet’s relish in his pets and the implication (explicit in the alternate Latin epitaph) of the poet’s own death in the closing word, “grave.”

         Cowper’s means are simple: he offers a poem composed largely of monosyllabic lines cast into the familiar form of the ballad stanza, with rarely disturbed iambic rhythms. And it all appears to lead to a “Christian” pathos as Tiney “in snug concealment laid” consciously “waits” for “Puss” to keep him company in the grave. Yet once again, as in “The Castaway,” Cowper adjusts the end of the poem to a darker note: Puss feels his irrevocable destiny in “the shocks from which no care can save” and knows he will eventually “partake” (take up space) in Tiney’s grave. All communication then ends — between owner and hares, and among the hares themselves — as a long silence, of the shocks, of the grave, ends the poem.

             Lest charm and humor wane in a poem so mixing the two with mourning, the harsher edges of life and expression must be framed in a “softer” vision, through which nonetheless — if the poem is to ring true — the death’s head must be glimpsed. Others have elegized their pets with playful fondness and appreciation, those natural emotions on losing a companion, but Cowper’s many sophisticated and whimsical tones and tableaux of mourning — for himself as well as Tiney — make his epitaph a deeper commemoration. 

    Is charm still exerted in poetry? I have found it recently not only in Merrill but also in A.R. Ammons’ no-holds-barred final book, unceremoniously titled Bosh and Flapdoodle. The poems, written in old age and illness, combine self-mockery and a basso continuo of fear. Ammons calls them “prosetry.” At first I didn’t know what to make of some of them, their slangy and farcical impudence routing Ammons’ general inclination to serious poetry of science and nature. The charm of these “last words” is, as usual, bewildering to the reader. Incomprehensibly and grandly, one poem flaunts the title “America,” even though its titular scene — the entire country — seems attached to the minor geriatric problem of dieting. Eventually, the second part of the poem enables another view: America is both personal — when you are chastised into dieting — and grand in landscape and weather when you delete personal annoyances in favor of casting your glance more widely. At the close of the poem, which I omit here, the charm lies in the weird separability, and ultimate twinning, of the two points of view: individual and cosmic.

    The aging Ammons (in the implied narrative of the first part) has chronically bad dietary habits, and his doctor, wanting him to reform, sends him to a dietician. The poem opens on the poet’s “counseling” session with the dietician. Ammons chooses to charm us here by jolting us from voice to voice: one is the voice of the severe dietician, recommending unattractive diet items (and reproving disobedient choices); the second is the voice of the adult poet satirically rephrasing the unwelcome advice; and the third is the undersong of the resentful sotto voce id of the patient, who defensively luxuriates in asides as he solicits the memory of appetizing items of past meals, and slips in, at the end of the diet-poem, a resolve to transgress with “an occasional piece of chocolate-chocolate cake.” I have sorted out the voices here, but imagine what it feels like to read “America” fresh off the page, realizing that the title means, for part one, that everyone in the country is endlessly attempting counseling and self-discipline in eating, and endlessly falling back into appetite: 

    Eat anything: but hardly any: calories are

    calories: olive oil, chocolate, nuts, raisins

     

     — but don’t be deceived about carbohydrates

    and fruits: eat enough and they will make you

     

    as slick as butter (or really excellent cheese,

    say, parmesan, how delightful); but you may

     

    eat as much of nothing as you please, believe

    me: iceberg lettuce, celery stalks, sugarless

     

    bran (watch carrots; they quickly turn to sugar):

    you cannot get away with anything:

     

    eat it and it is in you: so don’t eat it: &

    don’t think you can eat it and wear it off

     

    running or climbing: refuse the peanut butter 

    and sunflower butter and you can sit on your

     

    butt all day and lose weight: down a few

    ounces of heavyweight ice cream and

     

    sweat your balls (if pertaining) off for hrs 

    to no, I say, no avail: so, eat lots of

     

    nothing but little of anything: an occasional 

    piece of chocolate-chocolate cake will be all

     

    right, why worry:

         The serve-and-return pattern of contradictory voicing parodies the counseling session by allowing the things the patient cannot in fact say aloud to rise to the surface. We hear not only his irritation at the attempted control by the dietician, but also his wistful glances back to the delights of parmesan cheese. The smallness of the occasion, the pathos of the geriatric plight, the defiant humor, the fluctuations of tone, the awareness of a reader of unknown gender — “sweat your balls (if pertaining) off” — the witty play with e-mail brevity (“hrs”) are all characteristic of charm, in Ammons as in Merrill and Cowper. Trifling with genre always delights the poet: whether Cowper is upending the epitaph, or Merrill is inventing a child’s thank-you sonnet, or Ammons is parodying patronizing advice, the poet’s self-awareness together with his awareness of an audience makes for a gaily sympathetic and sophisticated performance.

         But why is the title of the poem “America”? The first answer, the comic one, the poet would say, is because this is what all America (myself included) is doing — dieting while resenting dieting. But the second answer, the sublime one, arises from the last seven lines of “America,” as the declining poet finds when he turns his gaze from the indignities of age to the grandeur of the American landscape. In the landscape he finds an impersonal reassurance in “disaster renewal,” the cosmic self-repair of the natural seasons. Satiric “charm” falls away, replaced by awe at the natural resurrections of Spring.

             “America,” with its two contrasting parts, shows that the spell of charm need not be maintained throughout a poem. But the advantage of lyric charm is its capacity to relieve the unreality of an unmixed high seriousness. Instead, one sees oneself as an unimportant speck in an indifferent, if exciting, universe, finding a point of self-regard more independent than earnestness, one not omitting comic truth. Ammons is unsparing on the fact of cosmic indifference; Merrill demonstrates how a more ironic vision has replaced, in adulthood, the naive sweetness of childhood; and Cowper, like our later poets, does not obscure either the ravages of time or the power of sympathy. Cowper ranges through so many tones and tableaux while mourning his beloved hares that the poem seems not a pet-elegy, but rather a human one. As we follow its exquisite variations on charm and grief, classical reminiscence and personal hardship, we are instructed how three improbable pets, more than two centuries ago, could force a despairing poet to a smile.

     

    For the Birds (Strictly)

    Strictly for the birds.  – Holden Caulfield

     

    Easy to think of what’s different,

    what’s broken or chastened

    somehow

     

    now that I’ve lived longer than

    my father ever did. No

    nightlights back then,

     

    for example, those steady little

    stars we plant and grow about

    the house now

     

    like nightflowers to make us less

    afraid. Just the moonlight then

    dreaming its way

     

    inside the open window, the

    body of light lying like a

    hologram across the kitchen

     

    floor, like some sleeping hobo,

    some vagrant vagrant, who’ll be

    sure to be gone

     

    in the morning. And the feeder

    outside, barely visible, too early

    for the birds, hanging so long

     

    and still, like the last Apache

    executed at dawn at Fort Yuma,

    Arizona in 1912

     

    before World Wars began, like

    the fasces ax and olive branch

    on the Mercury Dime,

     

    the one Wallace Stevens loved.

    Perhaps you were afraid too in

    that darker darkness

     

    and could have used a little

    light, something to hold on to

    before the dawn,

     

    some tiny votive burning just for

    the birds when everything

    seemed crazy, or

     

    strictly for the birds, as you always

    said. Maybe you told them all

    that they were safe

     

    and still alive, not dead, that

    soon enough it would be time

    to go to work, to sing.

    Before a Fall

    Pride comes before a fall, Solomon says, but any fool knows that’s not

    true. 

    Take Jesus, for example, or Gump Jaworski, who did a double half

    gainer 

    and most of a triple solchow on his last day of working for Gutters ‘R’

    Us 

    (“Gutter Problems? Gutter Call Us!”) when he fell off a company

    ladder 

    trying to steal a case of Budweiser tall boys from an open third floor

    window 

    of the Riverdale Co-op back in the day, and who would’ve gotten

    himself 

    a decent settlement if he’d had any disability insurance to speak of, 

    but he didn’t. 

    Come to think of it, it was the case of beer that landed first, just

    before he did, 

    right on top of it, breaking every bone in his head, and most every

    long-necked 

    bottle inside the case that wasn’t broken already, a feat he took no

    pride in 

    whatsoever, nor should he ever, though he bragged sometimes

    long after the fall 

    through his ill-fitting, whistling teeth that all the way down he had

    never let go 

    of the case. Or take Charley Pride, who sang so easy and let it all go

    with every song 

    he ever sang, who never fell at all as far as we know, and deserved all the pride 

    he ever felt in his life, singing “All I Have to Offer You is Me” the way

    he did, 

    even selling more records than Elvis for a while, a thing to be

    Tennessee proud 

    of there for sure.  There’s proof for all this from the natural world 

    if you want 

    it, and all the animal kingdoms too, the way they say lions come in

    prides, 

    but you can’t tell me the last time you’ve seen one of them take a fall, 

    let alone any pride in it, people laughing and spitting like hyenas all 

    the time. 

    And puffed-up Mr. D? John Donne told him straight up not to be

    proud, 

    but he’s always strutting, moving along, the country around him like 

    a building 

    collapsing, imploding on itself. See him taking selfies on the Capitol

    steps, 

    proud boy, proud as hell, filled with rage, with graveyard joy, unweaning pride, not before, 

    but after a fall. 

    The Safe Bet

    They say Lady Godiva put everything she

    had on a horse, 

    but what if the wager had grown from

    speculating whether 

    everything on earth is always growing

    steadily, incrementally, 

    or whether things are inevitably falling apart? 

    The safe bet 

    would be the latter, of course, the smart call.

    You’d have 

    gravity on your side, that wormy apple hitting

    feckless Newton 

    smack on the skull every time. There’d be

    9/11, the Falling Man, 

    the icy Titanic, Trump, Q, and each driverless,

    non-fungible Tesla to boot. 

     

    But National Geographic reports that Mount

    Everest actually grew two feet 

    last year. Tenzing and Hillary would’ve just

    fallen short. Today they’d be 

    leaping like slow motion Tik Tok NBA

    ballers trying to hang on the rim 

    of the moon. And those Oregon settlers

    buried side by side in hastily dug 

    graves two hundred years ago just worked

    their way to the surface after 

    all this time, some Farmer Brown’s boy’s dog

    sniffing at the rain-soaked 

    gray cannon balls of their skulls, their ribs

    curved up like little cathedrals.  

     

    It’s as if everything wants more open sky,

    more canopy over our heads, 

    to make room for all of what’s rising, all our

    loneliness growing greater 

    every night, as if the earth itself is a seed

    stuck in Whitman’s muddy 

    boots, as if the moon coming up over that

    ruddy fence is the face 

    of the child we’ve loved and have lost, as if

    that’s who we all 

    should be out looking for, betting the house

    every time. 

    Priorism, or the Joshua Katz Affair

    Teach your tongue to say: I do not know, lest you be duped.

    Talmud Berachot 4a

    The phrase “Joshua Katz,” as it is ground down and churned out by the national rumor mill, refers not to one character but to many. He is a conniving fiend; a wronged and saintly genius; a bitter man who has responded terribly to genuine mistreatment; the perpetrator of abuse; the victim of abuse; a valorous defender of independent thought; a sad sack manipulated by a powerful puppeteer named Robert George; a befuddled but well-meaning and brilliant professor; and so forth. It took me several months to notice that all of these Katzes refer to the same man, and still longer to recognize that the name, as used in public discourse, is not a name at all but a rallying cry. The rumors that are think-pieced about Katz do not reflect any serious empirical consideration of what exactly unfolded at Princeton in the summer of 2022, though that is their purported subject — but of course that is not what they are intended to do. His name is a speech act, a token, a shorthand, a move in a game. How someone invokes “Joshua Katz” depends entirely on where that individual stands on trends that have little directly to do with the man. Ignorance is a primary fuel of opinion.

    Joshua Katz, a classicist, earned tenure at one of the most prestigious universities in America when he was just thirty-six years old. That is not why I know his name, though it is among the reasons that the implosion of his academic life was an affair of national significance. (Our country’s pathological obsession with the glitteriest members of the Ivy League — provincial ecosystems that bear little resemblance to anything beyond their hallowed walls — is among our more embarrassing fixations.) Eighteen years after he received Princeton’s President’s Award for Distinguished Teaching, and fifteen years after he earned tenure, Katz was ruthlessly fired!, or he was canceled!, or he was justly punished!, depending on which team you play for and how invested you are in your membership in the league.

    Katz is among the many citizens whose private catastrophes have been seized upon and treated as something like a theatrical drama in which certain breeds of nauseatingly political Americans assume their customary positions and rehearse their familiar scripts. Scavenging the relevant search engines and piecing together a timeline of the Katz affair after the fever has broken has been a fruitful, if bizarre, anthropological project. At a distance, the earnest hysteria and sanctimonious outrage of all the opiners seem not only ridiculous but also hollow, as if none of these pontificators really cared about this particular drama, except as an opportunity to model the Right (or Left) View of it.

    The name that I give to this style of participation in public debates is priorism, because it comes with a handy framework, an a priori intellectual and even cognitive filter, into which each successive news cycle or morsel of cultural gossip is smoothly fitted. Priorism is a brutish substitute for interpretation; responsible interpretation awaits facts, considers developments, suspends judgment for the duration of inquiry, and resists the impulse to extrapolate wildly from bits and pieces. The primary objective of interpretation is to yield understanding, whereas priorism yields only a comforting sense of belonging and a hackish confirmation of an established worldview. Evidence that contradicts its framework is simply ignored or discarded or mocked, and in this way priorists are never thrown into crisis. Theirs is a phony kind of certainty. They, or at least the clever ones among them, are not exactly liars. They tell selective truths, edited accounts, absorbing what is useful and strong-arming it into their system. The spirit that moves even their true opinions is not the spirit of truthfulness but of conformity. Priorism, whatever its ideological variety, offers its members the armor of a sympathetic, validating community. They are never discomfited, they are never alone, they are only ever affirmed. This, incidentally, is why priorism has these days become a promising career path.

    The national theatrical production called “Katz,” like the ones that preceded it, is, among other things, tedious, no matter the pitch in which the lines are recited, because we have all heard all this before. And further, the more familiar the opinion, and the closer it clings to the script, the warmer its reception: community, and its cheap praise, is guaranteed. The primary mode of its expression is regurgitation. (Re-tweet!) Our discourse is made up of a million platitudes, and these platitudes are repeated endlessly by the very people who purport to be, and are feted for being, our brightest. How do we manage to stay awake through each performance?

    Katz does not appear to be a dazzling individual. Among the oddities of this tale is that he can command national attention at all. It is generally agreed that he has an enigmatic, rapacious, and sharp mind, and a captivating energy which sometimes obscures his lack of more obvious charms. If ever he possessed charisma, it is not evident now; he is, judging from the essays he has churned out about the terrors of cancellation and the cowardice of his former friends, a bitter man. (It is 2023 — we are connoisseurs of cancellation, and we know the difference between a dignified pariah and an embittered one.) It was surprising to learn that, before the crisis began, long before I ever heard of him, Katz basked in the adoration of the entire Princeton student body. He has a quality rating of 5/5 on ratemyprofessors.com, and 100% of students said they would take his classes again. One respondent on that site gushed that Katz was “a reason to come to Princeton.” Another effused, “Don’t graduate without taking a class from Katz. He is not only brilliant but dynamic and interesting as well… Will know each person in his 100-person lecture personally.” And another: “Possibly the coolest teacher I had through all 4 years of college.” In October 2018, in phase A of the scandal, when Katz was already suspended for sexual misconduct but before this fact had become common knowledge, the website OneClass.com ranked the top ten professors at Princeton and awarded Katz the top spot. Undergraduates used to queue in winding lines to sign up for his courses. He received more than one teaching award, and was among the professors who served as contributing columnists for the very student newspaper that would later pioneer his destruction.

    These were (some of) the facts available to the American public, and they sufficed for families gathered round their tables to engage in psychological speculation regarding the inner workings of a man they had never met. There is a certain sort of professor for whom undergraduate adoration is infinitely more intoxicating than any drug. The deprivation of this intoxicant seems to have infuriated him more than the other attendant indignities. Or: See how fickle and cruel college students can be? As soon as the torchbearers came knocking they turned on a man they had revered. And so on.

    The story of Joshua Katz revolves around the man, but it isn’t really about him — it is about us, about the cynical and insanely politicized world that we have constructed for ourselves, the kitsch that we slosh around in, the slogans that we slurp and spoon down one another’s throats. There are no heroes in the story. There aren’t any villains, either. Insofar as villains are cunning, Katz doesn’t make a convincing villain. This is true despite the fact that his enemies have bent over backwards for the past three years trying to dress him up like one. (I do not mean to imply that he is not guilty of sexual misconduct. He has said himself that it is a sin for which he has repented.) This is among the reasons that he has been so enthusiastically enveloped by the right — for that set of priorists, being accused of villainy by progressives is the surest certificate of purity, just as being cast as a victim is for the opposite camp.

    It is easy enough to track the public response to the Joshua Katz affair, but the details of the story itself remain overwhelmingly mysterious, like a play within a play that the characters are reacting to though none of them has heard all the dialogue or seen all the action. As noted, this ignorance is an essential element of the story. Knowledge would spoil the fun. As far as I can gather, it is impossible for an outsider to figure out what actually transpired, and it is in all likelihood similarly impossible for an insider with protected but partial information to gauge what actually happened. Very little about this affair can be honestly asserted with confidence, but much has been confidently asserted.

    The broadest details have by now been widely reported (and selectively forgotten). In 2018, Professor Katz was disciplined for a consensual relationship with an undergraduate that occurred sometime in the mid-2000s. (That investigation began the same year Katz was supposed to serve on the “Committee of Three” or “C/3”, which is arguably the most important committee at Princeton. Serving professors help to decide, among other things, which of their peers get tenure. The fact that Katz was appointed to this powerful body speaks to the status that he enjoyed among the faculty.) An investigation into the relationship was initiated after a third party, another student with knowledge of the affair, contacted the university without the support or the consent of the woman (now graduated) with whom Katz had been entangled. She did not participate in the investigation. Based on the committee’s findings, which remain confidential, Katz was suspended for the 2018-2019 academic year. It seems that at the time little was made of his absence. There was no public outcry, and the suspension happened to fall in the year after a scheduled sabbatical, so that his absence read as a prolonged leave rather than a sudden and disruptive enforcement. Perhaps this experience radicalized him, or perhaps it simply coincided with political upheavals within the university that on their own shifted him rightward. Whatever the case, rightward he went.

    Katz had not been entirely apolitical prior to the events with which we are presently concerned. In 2017, a year before the drama, he was a signatory to a letter penned by fifteen professors from prestigious universities which invited that year’s freshman class to resist pressure to conform politically despite the social consequences. They warned that groupthink is rampant and powerful enough that “it leads [students] to suppose that dominant views are so obviously correct that only a bigot or a crank could question them. Since no one wants to be, or be thought of as, a bigot or a crank, the easy, lazy way to proceed is simply by falling into line with campus orthodoxies. Don’t do that. Think for yourself.” Three years later, in this same spirit, on July 8, 2020, Katz published an essay that would vault him onto the national stage. It was entitled “A Declaration of Independence by a Princeton Professor” and it appeared in Quillette. His debut as a participant in the public debate was as a member of, or at least a contributor to, the anti-cancel-culture brigade. With one glaring exception, it was a more or less competent defense of reason and clear-headedness.

    The “Declaration” was written in response to a letter from a large contingent of the Princeton faculty, published on Independence Day and addressed to the president and senior administrators of the university. It put forth a suite of demands designed to combat the “Anti-Blackness” that “is foundational to America,” and was signed by over three hundred faculty members. Some of the demands were reasonable, as Katz himself states in his essay. For example, Part 4, Demand 10 insists that Princeton “fundamentally reconsider legacy admissions, which lower academic standards and perpetuate inequality.” Or, as Katz points out, “It is reasonable to ‘give new assistant professors summer move-in allowances on July 1’ and to ‘make [admissions] fee waivers transparent, easy to use, and well advertised.’ ‘Accord[ing] greater importance to service as part of annual salary reviews’ and ‘implement[ing] transparent annual reporting of demographic data on hiring, promotion, tenuring, and retention’ seem unobjectionable.” These demands were sensible and practical.

    But others, as Katz goes on to point out, were ridiculous. Consider, for example, Part 1, Demand 5: “Reward the invisible work done by faculty of color with course relief and summer salary… Faculty of color hired at the junior level should be guaranteed one additional semester of sabbatical on top of the one-in-six provision.” Or Part 2, Demand 4: “Enforce repercussions (as in, no hires) for departments that show no progress in appointing faculty of color. Reject search authorization applications and offers that show no evidence of a concerted effort to assemble a diverse candidate pool.” Here, we can agree, we have left the realm of best practices and entered the netherworld of radical identity politics, though Katz claimed that such proposals would lead to civil war on campus if implemented, which seems excessive. But the ugliest of the hyperbolic indulgences in his piece was his now-infamous characterization of a Princeton student group called the Black Justice League as a “small local terrorist organization.” This, from the man who three years earlier had signed a letter lamenting that groupthink had become so powerful that students reflexively assume that only a bigot or a crank would question the dominant views.

    If your fingers have been in the remote vicinity of our culture’s pulse in recent years, you will have noticed that ordinary people with a bit of common sense have devolved from independent thinkers into gang members with an axe to grind. Those who declared themselves anti-groupthink developed their own groups, which developed their own asphyxiating vernaculars and codes of conduct. Katz is a freshly minted member of such a group. He wrote recently, in Sapir, that cancellation has allowed him to see who his real friends are. I do not mean to doubt his need for friendship, but surely he must see that the basis of these new attachments is ideological. His new friends have uses for him. If he did not parrot their scripts with such gusto, they would not be so friendly.

    The phrase “small local terrorist organization” is the reason “Joshua Katz” has become part of the national chatter; it is the reason that he is reviled by the left and deified by the right. Five days after his defiant essay appeared, Princeton President Christopher Eisgruber publicly condemned Katz:

    While free speech permits students and faculty to make arguments that are bold, provocative, or even offensive, we all have an obligation to exercise that right responsibly… Joshua Katz has failed to do so, and I object personally and strongly to his false description of a Princeton student group as a ‘local terrorist organization.’ By ignoring the critical distinction between lawful protest and unlawful violence, Dr. Katz has unfairly disparaged members of the Black Justice League, students who protested and spoke about controversial topics but neither threatened nor committed any violent acts.

    Both sides, of course, degrade “free speech” by ping-ponging it cheaply back and forth over the ideological net. That same day, in the American Conservative, Rod Dreher called Eisgruber a coward whose proper role is to “defend free speech by faculty members, not kowtow to radicals.” Dreher declared, in the conventional right-populist way, that Eisgruber’s statement makes clear “who has privilege at Princeton and who does not.” As if Katz had a right not to be disagreed with; as if the members of the administration or the faculty from whom he had just so aggressively distinguished himself had no right to respond to him. You cannot create a provocation and then complain when others are provoked; and this goes for all sides.

    Here is a brief review of the most notable responses to Katz’s villainy/heroism. On July 14, the Wall Street Journal editorial board published a column praising Katz and warning that “cancel culture doesn’t need to get him fired to succeed. It succeeds by making him an outcast at his own university, and intimidating into silence others on campus who might agree.” On July 22, Michael Poliakoff, the president of the American Council of Trustees and Alumni, paid tribute to Joshua Katz for “his intellectual integrity, his heart, and his courage,” and recognized him as a “Hero of Intellectual Freedom.” On July 26, Katz published an op-ed in the Wall Street Journal titled “I Survived Cancellation at Princeton: It was a close call, but I won’t be investigated for criticizing a faculty ‘open letter’ signed by hundreds.” (This was the first of innumerable subsequent essays, podcasts, and talks given by Katz about surviving cancellation.) In September, the American Council of Learned Societies withdrew Katz’s appointment as a delegate to the Union Académique Internationale. Katz sued the ACLS for “viewpoint discrimination.” A judge dismissed the lawsuit. In January of the following year, John McWhorter, writing in the Atlantic, praised Katz: “He is not an exemplar of white fragility, but a model for the future.”

    On February 2, 2021, seven months after Katz’s essay in Quillette was published, things got darker. The Daily Princetonian published the findings of its own investigation into three different relationships that Katz had had with female students. Katz’s lawyer slammed the article as a “planned smear… clearly yet another attempt to punish him for dissenting from the prevailing campus orthodoxy.” After these findings were published, the alumna whose relationship with Katz had been the subject of the 2018 investigation sent a detailed written complaint about Katz to the university. In response to that letter, the university commenced a new investigation, this time concerning Katz’s conduct during the previous one. This final investigation took place over the course of the subsequent thirteen months. On May 23 of the following year, the board of trustees resolved to fire Katz, and published a statement which reads in part:

    When [the alumna] came forward in 2021, she provided new information unknown to the University in 2018, and the University initiated a new investigation in accordance with its policies. The new investigation did not revisit the policy violations for which Dr. Katz was suspended without pay in 2018; it only considered new issues that came to light because of new information provided by the former student.

    The 2021 investigation established multiple instances in which Dr. Katz misrepresented facts or failed to be straightforward during the 2018 proceeding, including a successful effort to discourage the alumna from participating and cooperating after she expressed the intent to do so. It also found that Dr. Katz exposed the alumna to harm while she was an undergraduate by discouraging her from seeking mental health care although he knew her to be in distress, all in an effort to conceal a relationship he knew was prohibited by University rules. These actions were not only egregious violations of University policy, but also entirely inconsistent with his obligations as a member of the Faculty.

    Faculty discipline at Princeton is handled in accordance with the “Rules and Procedures of the Faculty,” which guarantee numerous procedural safeguards for faculty members facing proposed disciplinary action. In cases involving a proposed suspension or dismissal, the affected faculty member has the right to seek review by an independent committee composed of members of the Faculty elected by their peers.

    The recommendation to dismiss Dr. Katz was reviewed by the faculty committee, known as the Committee on Conference and Faculty Appeal. After reviewing the pertinent investigation reports and Dr. Katz’s submissions, and interviewing Dr. Katz and others, that committee found that the reasons presented in the dismissal recommendation of the Dean of the Faculty were supported by the record. That recommendation was subsequently submitted to the President, who evaluated it and submitted it to the Board for action.

    The Board voted to dismiss Dr. Katz on the recommendation of the University President and Dean of Faculty, after a review of the extensive record by an ad hoc committee of the Board appointed to consider the matter.

    That same day The New York Times published an article which insinuated that the investigation was simply a ruse, an excuse to fire Katz “for criticizing the anti-racist proposals made by Princeton faculty, students, and staff” — criticisms that he had made seven months before the investigation was opened, and over a year and a half before the committee resolved to fire him. It is unclear to me why the Times, of all places, and at that late date in the thought-policing to which it has itself significantly contributed, decided to use Katz’s firing as an opportunity to condemn the overreach of cancel culture.

    Whatever the reason, since the Times overtly accepted Katz’s reading of the situation, it seems that many others who would otherwise resist hasty conclusions permitted themselves to believe there must have been some higher proof of gross misconduct on Princeton’s part. This is the only explanation I have come up with for why so many other apparently reasonable people have repeated this line without providing persuasive evidence. On July 5, The Chronicle of Higher Education published “Princeton Betrays Its Principles: The corrupt firing of Joshua Katz threatens the death of tenure,” which is a withering condemnation of Princetonian spinelessness. If it were true that Princeton used the investigation as an excuse to fire a man whose views were a cosmetic liability for the university, such a criticism would be justified. But there is no way to know for certain that this is what was done. One can only extrapolate broadly from available information.

    For instance, it is undoubtedly true that many members of a progressive mob unjustly demanded that Katz be fired simply for writing something with which they virulently disagreed. (Anyone who actually wanted Katz fired for the Quillette essay is guilty of precisely the progressive extremism of which Katz and his gang accuse Princeton.) It is also true that a university must thoroughly investigate a complaint submitted by a student suggesting that a professor is guilty of egregious misconduct. Is it possible that the dean of the university, a peer committee, the university president, and the board of trustees all perpetrated a hoax investigation for thirteen months as part of a concerted effort to fire Katz? Perhaps. Is it manifestly evident? Certainly not.

    Cancel culture has destroyed innocent lives, and has exacted numerous excessive punishments, and has achieved a tyrannical power within certain precincts of elite America. But the sheer fact of cancellation proves nothing about what is true and what is false, who is innocent and who is guilty. Cancellation is not itself one of the facts that need to be established regarding any specific case. Nor has martyrdom ever proven the truth of a faith. And one can acknowledge this even while also contending that those who ordered Katz’s head on a platter simply because of a phrase in an article must be opposed even by non-bigots and non-cranks.

    Similarly, is it possible that Joshua Katz wrote that infamous phrase in his Quillette essay because he knew that the progressives were out for his blood, and so he was throwing his lot in with the other team? Did he write it on purpose, with cunning, so that he could later argue, after his inevitable cancellation on other grounds, that his mistreatment was a matter of free speech and not a matter of sexual misconduct? Perhaps. It is as convincing a theory as any other, and none is very convincing. I have heard both versions of the Katz affair defended with perfect confidence by people I respect. All their confidence is baseless. The incontrovertible fact is that we do not know the facts.

    Our very conception of participation in the public discourse is predicated upon a gross distortion of the proper relationship between truth and opinion. We abhor silence, as if having nothing to say is somehow worse than saying much without much substance. It is not shameful to recognize one’s own incompetence for judgment, if judgment requires knowledge that one does not possess. Regarding subjects which we cannot adequately know, especially those which concern the private lives of other people, it is honorable not to have an opinion.

    A dear friend of mine, I will call her Jane, was raped by a boy with whom she had attended middle school and high school, and with whom she had been close for most of her life. He was monstrously drunk at the time, so much so that he does not remember the act (at least he has never indicated to her that he does). She did not tell the police or any of their mutual friends, neither directly afterwards, when basic functioning was a task that she could hardly manage, nor many months later, when the fog had begun to lift and she dreaded re-engulfment. For quite a long while after the trauma, she interpreted any remotely analogous incident primarily as a tale of rape. (Whenever a new cycle of ignorant gossip about a sexual-assault-related claim captivates the nation, and the same priorists who last time asked “why didn’t the victim come forward earlier?” ask it again, Jane’s nails puncture her palms.)

    This event determined her disposition towards every allegation leveled against a man regarding sexual misconduct — even cases like Katz’s, though Katz was certainly not accused of rape, or of a less violent kind of sexual assault. Jane maintained that prior disposition until her brother was accused of sexual assault by a woman with whom he had gone to college. For years after the accusation was made, her brother sank into a depression that sapped him of the strength or the inclination to leave his bed, or to read, or to speak to most anyone other than Jane. During that bleak era, she would schedule her days around their phone calls, convinced that her voice was all that kept him from suicide. She believes, on the basis of her own cross-examination of her brother, in his innocence.

    Discussing the Joshua Katz affair with Jane is psychologically and sociologically fascinating. Depending on the day, or the aspect under discussion, or the attitude of her interlocutors, she assumes either the priorism of a victim of rape or the priorism of her brother’s sister. Whichever of these personas participates in a given conversation, it is evident that Jane is agitated by contradictory loyalties, which is why it is so difficult for her to conjure genuine concern about the facts of the case she is discussing. She doesn’t know whether Katz is guilty or innocent; she knows that her brother is innocent, and she does not want to be the kind of person who would have assumed her brother’s guilt, or would not have advocated for his rights to fair treatment and due process. She has no idea why Katz’s female student did not participate in the primary investigation, but she knows why she has never gone to the police, and she does not want to be the kind of person who would doubt the veracity of the testimony of a woman such as herself.

    It must be acknowledged that Jane’s antithetical loyalties have a certain integrity, even though they inhibit her from developing a dispassionate view of this case. While there are many priorists who have exploited the Katz affair for their side, not all priorists are operating in bad faith, in the sense that they are not all primarily motivated by a desire for community membership. Priorism is not always crass, though it always facilitates an intellectual incompetence. The views that Jane develops purely on the basis of her prior loyalties, which are outgrowths of her own dark experience, are not cheaply held. They are understandable, even admirable impulses — but they are not intellectually supportable ones. Loyalty is a precious human expression, and it can be enriching and beautifying. But it must be closely watched, tempered, and monitored, to keep it from becoming blind and devolving into tribalism.

    Sooner or later, in the analysis of our scandals, owing to the complexity of the questions and the mixed availability of evidence and the laziness of our public discussion, one must consider seriously questions of epistemology — of what we can know and how we can know it. The “fake news” and “alternative facts” of the Trumpists brought this philosophical conundrum into the open, though it has always been a fundamental concern for conscientious citizens. And now it seems to be everywhere, as each gang cherry-picks its experts and sends them into battle on every subject from medicine to foreign policy. The Katz affair is another example of the ruined reputation of authority.

    “It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.” So wrote the English mathematician and philosopher W.K. Clifford in his essay “The Ethics of Belief,” which appeared in 1877. In it Clifford advances a defense of evidentialism, an epistemic doctrine which stipulates that a belief is only ever rightly and morally held if it is supported by conclusive evidence. Clifford insists that even a true belief is wrongly held if it is held for any reason other than empirical or logical proof. (Milton’s wonderful phrase for this intellectual predicament was “a heretic in the truth.”) And the word “belief” does not refer simply to the question of religious faith, which was the immediate though hidden subject of his essay, but extends also to every variety of knowledge: “No simplicity of mind, no obscurity of station, can escape the universal duty of questioning all that we believe.” It is an exorbitant imperative, practically impossible to fulfill, and paralyzing even to attempt.

    Nineteen years after this essay appeared, William James published his famous rebuttal to Clifford, in “The Will to Believe.” Therein James defined a hypothesis as “anything that may be proposed to our belief; and just as the electricians speak of live and dead wires, let us speak of any hypothesis as either live or dead. A live hypothesis is one which appeals as a real possibility to him to whom it is proposed.” He goes on to defend the human “right to believe at our own risk any hypothesis that is live enough to tempt our will.” This laxity about truth is necessary, James argues, in order for certain strains of higher belief to remain possible. He is salvaging an epistemology for religion. Clifford’s rigor, he warns, prescribes an agnosticism which “would absolutely prevent [one] from acknowledging certain kinds of truth” and is therefore “irrational.” (It was daring of James to invoke rationality in the service of his idea of faith.)

    But surely James’ prescription is as untenable as Clifford’s — it is as intellectually lazy as Clifford’s is intellectually severe. Clifford slams the door shut, James removes the hinges. Can this great controversy about religion be applied to politics? It is surely impossible for a citizen to gain proficiency in all the subjects that would allow her to cast a thoroughly informed vote, for example. Insisting on Cliffordian certainty in public affairs is futile. As Clifford himself noted, we all rely upon authorities, and it is our responsibility to determine what counts as authority in various fields. But if evidentiary certainty is not possible, does this ignorance emancipate us from Cliffordian scruples and gain us a Jamesian freedom to believe whatever kindles us?

    Consider a concrete instance: the Katz affair. Given all that we do not know, does James’ latitude apply in such a case? Do we have a “right to believe at our own risk any hypothesis” regarding Katz and his cancellation “that is live enough to tempt our will”? Is it cowardly to hold one’s tongue and not contribute to the fight over first principles simply because one is not certain? If one is convinced that independent thought is under siege, isn’t it fair to extrapolate from what one already knows about American society and infer that Katz was unfairly treated? And even if Katz was not unfairly treated — does it really matter? Shouldn’t anyone who opposes the attack on independent thought defend Katz, as he has become a synecdoche for the larger problem?

    Similarly, suppose one believes that there is a certain kind of professor who serially abuses his position and takes advantage of his students. And suppose that this same person is also convinced that there are charlatans who have made careers out of condemning cancel culture, and that such people, the most powerful among them, have connived to dress Katz up as a wronged saint. Doesn’t such a person have a duty to speak up regardless of whether or not she has evidence of Katz’s specific guilt?

    The answer, of course, is no. It is undignified to play the fool in the name of ideological loyalty. No citizen or ally is required to simply repeat what others on her team expect her to say, no matter the valorousness of that team’s general code. None of us have the right, let alone the duty, to feign certainty. It is entirely honorable to have no opinion about the Katz affair, or about any controversy that will not admit of clarity. We have instead the onerous obligation to defend our values while putting pressure on platitudes and slogans. Put down your scripts. They refer to nothing beyond themselves.

     

    Problems and Struggles

    “So Socrates!” he teased, “you are still saying the same things I heard you say long ago.” Socrates replied: “It is more terrifying than that: not only am I always saying the same things, but also about the same things.”

                      Xenophon, Memorabilia, IV.4.6

                              (translated by Jonathan Lear)

    In the plenitude of discouragements that is contemporary history, the one that perhaps stings me the most is my increasing despair about the possibility of persuasion. Who changes their mind anymore? What is the difference between an open society that is intellectually petrified and a closed society? In a democratic society, which governs itself by exchanges and tabulations of opinion, surely the first requirement of meaningful citizenship is receptivity. Thoughtlessness is a betrayal of democracy. Mill said that democracy is “government by discussion.” The purpose of discussion is to test the merit of opinions with the presumption that one may convince others, or become convinced by others, of new views. One of the quintessential experiences of democratic life is to admit that one is wrong. In debates about large principles and large programs, everybody cannot be right, and sometimes not even a little right; and in a liberal order the adjudication of contradictions is accomplished not by guns but by arguments. Or so we like to tell ourselves. But the degrading spectacle of what passes for public debate in America has shaken my hoary faith in the dependability of argument. Is social media a discussion? Is a shriek an argument? Where is the reasoned deliberation that Milton and Madison and Mill regarded as the foundation of a decent polity? They intuited that the road from unreason to indecency is not long, and we are diabolically confirming their intuition. We have made “public reason” into an oxymoron. We are drowning in discursive garbage. Even the people who believe in persuasion seem to persuade only each other. They are just another American community of the elect — the mild and articulate sect of the arguers.

    Many observers have noticed this intellectual crack-up. They suggest a host of solutions. We must keep our minds open. We must listen more carefully. We must respect each other. We must be reasonable, and even rational. We must identify our biases and correct for them. We must bring evidence. We must lower the temperature. We must enhance our capacity for empathy. We must connect with each other, and with the Other. We must practice epistemic humility. These homilies are everywhere, and all the preaching is true. We should indeed do all these noble and necessary things. These are the traits of a democratic individual. But is it not time to notice the futility of this wisdom in present-day America? Nobody seems to be hearing that we should listen. These exhortations leave almost no trace on our public life, which gets insistently dumber and nastier. They have become a sad and lovely genre of their own, a journalistic counterpoint of urgent but soothing platitudes. They may be accomplishing nothing more than providing solace and companionship for those who utter them. I have uttered many of them myself, and I stand by them. They are the only answers. But I am beginning to feel a little foolish, and disconnected, and marginal; I do not feel sufficiently helpful.

             To some extent, of course, it was ever thus. There never was a time when Madisonian graciousness ruled our politics. Philadelphia in 1787 and Illinois in 1858 were epiphanies, not norms. Indeed, the promiscuity of the nineteenth-century American press can make social media seem redundant, in its slanders and its outrages. The manipulability of public opinion has always been a primary assumption of American politics and its cunning practitioners. Was there ever a medium of communication that was inhospitable to zeal, or that turned its back on lies? Have fanatics and extremists ever been at a loss for instruments of influence? There is some consolation to be had, I suppose, from this long history of what ails us. We are not the first to have fallen short of our discursive ideals.

    Moreover, it is good that people stick up for what they believe. Intellectual stubbornness is in its way a mark of intellectual maturity. The malleable are too often mistaken for the reasonable. It is good that people hold strong convictions, and that they confer upon beliefs a prominent role in their identities. Yet the strength of a conviction has no bearing on its merit. Beliefs are not like foods that taste better hot. Too many people hold their beliefs for bad reasons, or for no reasons at all — merely because other people like themselves hold them, in the “cascades” and the “contagions” that have exercised social scientists in their study of our era of conformity. In the articulation of our beliefs, the most common substitute for reasons is passion. The idolatry of feelings that has characterized our culture for many decades has now been extended to our politics. But what does passion have to do with persuasion? Persuasion by passion is a nice definition of demagoguery.

             This time we have fallen very short. The collapse is especially painful for someone such as myself, who has spent his years in the argument business. It was, and still is, an idealistic calling. It came with many scruples about the integrity of argument. We worked hard when we argued, and we tried never to make it personal. (Almost nobody has a perfect record of never turning an argument into a quarrel. I certainly do not. Sometimes hostility follows naturally from having understood the dangerous nonsense that your interlocutor is peddling.) There was an atmosphere of exhilaration that surrounded the seriousness. Of course there were also degraded forms of the practice: the gladiatorial kind, for which debate is a kind of sport, an exhibition of dialectical virtuosity, a contest of cleverness; and the academic kind, in which debate consists in making “moves” and “turns” and combinations thereof, as in a professional game; and the festival-of-ideas kind, in which thinking is presented breezily with a “hard stop,” for the entertainment of the paying customers and the rich. But there remained, there still remain, intellectuals with a sense of honor, for whom truth and method matter most, and who regard their activity, rightly, as a significant contribution to their society. One would think that such people are never more valuable than in a crisis — but they are learning, I fear, that it is precisely in a crisis that they may be least valuable, and most easily overridden. In 2016, for example, almost every thoughtful conservative columnist in the country valiantly opposed Trump, and it was as if they never existed. Right now the argument for persuasion, an American argument if ever there was one, seems to be experiencing the same indifference.

    Yet there is another way to consider this problem, and others, so as to elude despair and to find strength. It is to regard it not as a problem, but as a struggle.     

    The success with which we meet the difficulties that we face depends first on an accurate description of them. Nothing destroys hope so quickly as asking a question in a way that makes it impossible to answer. Such a question leaves us with the crippling impression that the world is finally intractable, that there is nothing that can be done. It is one of pessimism’s finest tricks. There are predicaments, of course, in which nothing can be done — but they are rare, even in adversity, and they, too, must be accurately characterized, if we are to be sure that we are being thwarted by reality and not by ourselves.

             There are problems and there are struggles. Problems have solutions; struggles have outcomes. Problems are technical; struggles are historical. Problems recur; struggles persist. Problems teach impatience; struggles teach patience. Problems are fixed; struggles are fought. Problems require skill; struggles require character. Problems demand knowledge; struggles demand wisdom. Problems may end; struggles may not end. A problem that does not end is a defeat or a failure; a struggle that does not end is a responsibility and a legacy.

             We are not given to choose between a world of problems and a world of struggles, and so we must be dexterous. Different temperaments incline to, or feel especially beset by, the one or the other; and this may be the case with communities and societies, too. The American affinity for problems over struggles is well known: the great American epic of practicality and its rewards. We care so much about practicality that eventually it was raised into a philosophy, according to which the proven satisfactions of a hammer and a nail were powerful enough to rid us of nothing less than metaphysics. William James, who perversely regarded pragmatism as a spiritual dispensation, once defined reality as “a perfect jungle of concrete expediencies.” Whether or not reality is like that, American reality is. The wildness of American religiosity may be understood as the response to such an environment of rampant utilities. (Silicon Valley is a hotbed of New Age rubbish.) Yet the American obsession with how things work has produced many admirable results, not least the technocracy that now inspires the wrath of the populists. Over many decades it has done more for the public good than any mob ever did, even if sometimes it has attempted to plant its standpoint where it does not belong and sought in its fanatical meliorism to reduce struggles to the scale of problems. But eventually struggles, too, have a place for policy, which is best not made by visionaries.

             Thinkers from Augustine to Heidegger have belittled the uses of things. The “ready-to-hand,” owing to its “serviceability,” is ontologically shallow, according to the latter, and much too distant from Being. According to the former, the uti, the use of something for the sake of something else, is similarly secondary and extrinsic to the highest meanings, and he ponders whether “men should enjoy themselves, use, or do both.” The American experience of enjoyment in use, of pleasure in function, is beyond his imagination. Such a hierarchy of value would be wrecked by a visit to an American hardware store. The anti-pragmatists are disquieted by a love of the extrinsic just as the pragmatists are disquieted by a love of the intrinsic. The answer to Augustine’s question, obviously, is that we must do both.

    Moreover, there is glory, and not only necessity, in our practical achievements (just as, say, there is beauty, and not only necessity, in architecture). Homo faber, if he is to make things and build things, must include among his talents a sense of form and a concept of design, and an ability to work out the purposes of an object as well as its material properties. The gulf between instrumentality and art is not as wide as the aesthetes and the Platonists would have us believe. I learned this lesson in Kensington, Maryland, where there used to be a shop that sold antique tools — carpentry tools, construction tools, kitchen tools, fireplace tools — a paradise of practicality; and when I first walked into the shop I was struck not by the spectacle of utility but by the spectacle of imagination. The shapes and the metals were gorgeous. I still own the heavy late-nineteenth-century iron cooking pot, with its delicate handles and its handsomely pockmarked lid, that I acquired there. It is a welcome drag on my aspirations to loftiness.

             Here is a passage from one of the many American books on (this is its subtitle) “how to perfect the fine art of problem-solving”:

    Problem-solving is a critical survival skill because things go wrong for us all the time. Working through problems is crucial for productivity, profit, and peace. Our problem-solving skills, however, have been short-circuited by our complicated, technology-reliant world. Why learn how to fix something when Google can do it? Unfortunately, calamity doesn’t always fit in a search bar. And increasingly in our modern, perilous world, the issues that emerge are subtle, laced in subtext, or teeter on the tip of a slippery slope — all attributes that require a human touch to solve. As said humans, we must not only be able to address the problems that arise across all professions and walks of life, we must also be able to solve them. Before they drown, damn, or destroy us. Thankfully, problem-solving is a skill that can be learned.  

    I can practically hear The Star-Spangled Banner in the background. But every word is unimpeachable, except perhaps the reference to peace, which belongs more realistically to the realm of struggle. The undaunted confidence in human agency, the respect for the concrete, the commendation of the artisanal and the collaborative, the faith in education and the transmission of skills: these are elements of the mentality that built cities and created technological revolutions, and their dazzling social and economic benefits. The inventors, the tinkerers, the adjusters, the repairers, the tweakers: they are pillars of everyday existence, who defy our sense of helplessness and relieve us of many of the oppressions of our material setting. They make life more dignified, because there is dignity in safety and comfort and the conquest of anxiety. 

    The same mentality, alas, these same elements, are also the source of our Icarian perils. Sometimes our ability to make things exceeds our ability to comprehend what we are making, and we deploy our inventions before we adequately understand their purposes and their effects. “Problem-solving” is ethically contentless; it serves many causes and many codes. Evil, like goodness, seeks technical support, which is why “pragmatic,” in ordinary usage, also has a pejorative connotation. (As does “fixer”.) The question of how things work is never the most fundamental question one can ask about human affairs. But fundamental questions are not the only questions that we are obliged to ask. We are, even the largest-souled among us, commonplace creatures who live fragilely in a world of cracks and fixes. We are fortified more by reforms than by revolutions. So blessed be the fixers, especially those who recognize the limits of the fix as a model for all human solutions.

    Not all the difficulties that beset us can be described as problems that can be fixed. Some of them are deeper and thicker and more lasting, and therefore more immune to our practical brilliance and our utilitarian talents. They are conditions, inherited states-of-affairs, systems and structures, traditions and loyalties, inner dispositions in the individual and the community, cultural premises hallowed by the generations, abstract conceptions and reified ideals. They imbue everything we do, but we cannot take a hammer to them. (Except wantonly, of course: violence in a problem-fixing society is owed in part to the special frustration of problems that cannot be fixed. Frustration, and the inability to live with it, is one of the characteristic hazards of the can-do worldview.) Indeed, the ubiquity of their effects, their saturation of all the private and public realms, contributes to their durability. And yet they must be fought.

    There is the difference: fixing is not exactly a fight, even when it is hard. No fight is necessary when satisfaction can be technically and efficiently achieved, and there are no first principles at stake. A solution to a problem may be wrong without being evil. Trial-and-error is a benign war on error; a correction of mistakes, not of sins. The question of how best to fight inflation, or how best to curtail our dependence on fossil fuels, or how best to halt nuclear proliferation — such questions may provoke virulent debates, but the virulence is generally not philosophical. These are “how” questions, and not all “how” questions must become “why” questions. A debate about means when there is a consensus about ends is much more easily resolved than a debate about ends. Conversely, one time-honored way of wrecking a debate about means is to turn it into a debate about ends — to make every difficulty into a matter of first principles, to transform problems into struggles. The transformation of a problem into a struggle is a fine strategy for the enemies of a solution.

             Perhaps the fundamental difference between a problem and a struggle is time. The temporal horizons of struggle are long — sometimes very long, even longer than a lifetime. Sometimes we bequeath a struggle to our children. The struggler, like the lover, is prepared to wait. A problem, by contrast, does not tolerate such duration. It needs to be solved soon, if we are to function; whereas struggles are not the condition of our functioning but of our just and proper functioning. One of the meanest facts of human life is that unjust societies can function. (Making a society function is one of the oldest excuses for injustice.) But there is some comfort, too, in that fact, since a just society has never existed. Our only alternatives may be imperfection or extinction.

    Fiat justitia pereat mundus: the old Latin maxim captures the tense relation between perfection and reality. Let justice be done even if the world perish! That was the maxim’s customary reading, not least by Kant, who described it as “a sound principle of right…which should be seen as an obligation of those in power not to deny or detract from the rights of anyone out of disfavor or sympathy to others.” But what sort of justice is the destruction of the world? Where is the virtue in nothingness? (Kant dodged this ethically complicating objection with a strange paraphrase of the maxim’s meaning: “let justice reign even if all the rogues in the world must perish.”) We may read the maxim differently, then, and less as a mandate for zeal: we may read it as a warning that the insistence upon perfect justice may destroy everything, as a caution about absolutism in a just struggle. Be careful not to destroy the world when you seek justice! And I have seen a peculiarly American inflection of the adage. At the Supreme Court there hangs a portrait of John Marshall painted by Rembrandt Peale in 1834. The jurist is set heroically in a stonework oval with Roman ornamentation, and beneath him is a stone on which are carved the large words FIAT JUSTITIA. The rest of the maxim, the worry about the consequences of righteousness, has disappeared. Only a society consecrated to newness, a society that regarded itself as a beginning in what is right, could so blithely have banished the shadows from the ancient injunction.

    A struggle does not allow for such innocence, if only because of its wealth of sobering experience. If you have struggled against an injustice, then you have known it, and witnessed it, and existed with it. You have learned too much about the world to believe that pragmatism is all the equipment that you will need to meet it. There are other inner resources that must be readied: steadfastness, patience, tenacity, resilience, courage. The less your life has need of those qualities, the happier (and the luckier) it is. A life of problems is not like a life of struggles. The trials of fixing are real, but they differ from the trials of struggling — the fixer’s trials are more like exasperations. But an exasperation with history, particularly with a history of suffering, is no mere exasperation: it is a sense of tragedy. It broaches the hardest question of all, which is the question of the warrant for hope.

    A life in struggle is a life in hope, and hope gets stronger as its basis in reality gets weaker, until finally it floats free of experience and proclaims a pure assertion of the will to exist. The more empirical the hope, the less it is needed. But unempirical hope, or hope after catastrophe, is, for that reason, invincible; and it would be an offense against all the communities of struggle, all the shattered but intact peoples, to dismiss such hope as illusion, when it is the purest evidence of unbroken vitality. In a beautiful study of the spiritual perdurability of the Crow Nation, Jonathan Lear has called this “radical hope,” by which he means an inner independence from history that permits one to entertain “the possibility of new possibilities.” For this reason, anyone involved in a struggle will not count a bad day as the last word, because he lives in expectation of it, and he is accustomed to a different pace for progress, to the unsteadiness of forward motion, to delays and reversals and losses. The larger the goal, the rougher the road to it.

    If we prefer to see ourselves as a nation of problem-solvers, it may be in part because we prefer to look away from the strugglers in our midst. Having completed their tasks, problem-solvers proceed to the most typical American activity of all: they move on. But the strugglers cannot move on. They are prisoners of circumstances, and of the power that with its prejudice arranged their circumstances. Their inner freedom is a measure of outer necessity. Our centuries of innovations and breakthroughs were also centuries of oppression and discrimination. Our country has harbored many communities of struggle: the Native Americans, for example. For a hundred years or so the labor movement represented a community of struggle, and it may do so again. But no Americans have a more natural understanding of struggle than black Americans. Their emancipation, which we treat as a discrete historical event circa 1863, was (in the words of one historian) “the long emancipation.”

    The story of African American culture is a story of melancholy and its mastery. There is joy in the blues, which is not the case with many other traditions of sad song. The slave songs and the spirituals are intimate with the “trouble of the world,” but I have never heard one of them recommend surrender. “O me no weary yet, o me no weary yet, I have a witness in my heart, o me no weary yet.” The slaves sang, “Lord, make me more patient”; they sang, “Hold out to the end.” And many decades later the poets expressed the same extreme commitment to endurance. Here is Sterling A. Brown, addressing a Southern “nameless couple” who have suffered much hardship:

    Even you said

    That which we need

    Now in our time of fear, —

    Routed your own deep misery and dread,

    Muttering, beneath an unfriendly sky,

    “Guess we’ll give it one mo’ try,

    Guess we’ll give it one mo’ try.”

    And here is Countee Cullen’s “From the Dark Tower,” whose title refers to a place on 136th Street in Harlem where poets used to meet, as if the poem, in its first person plural, might speak for them all.

     

    We shall not always plant while others reap

    The golden increment of bursting fruit,

    Not always countenance, abject and mute,

    That lesser men should hold their brothers cheap;

    Not everlastingly while others sleep

    Shall we beguile their limbs with mellow flute,

    Not always bend to some more subtle brute;

    We were not made eternally to weep.

     

    The night whose sable breast relieves the stark,

    White stars is no less lovely being dark,

    And there are buds that cannot bloom at all

    In light, but crumple, piteous, and fall;

    So in the dark we hide the heart that bleeds,

    And wait, and tend our agonizing seeds.

    There is the temperament of struggle: waiting and tending to one’s agonizing seeds, which one day, owing precisely to the pain of their cultivation, will grow.

    Are Americans, particularly liberal Americans, still capable of such a temperament? Have we, in the inward velocity of our digital and consumerist present, forfeited the mental readiness for the extended future, or squandered it on futurism? I arrived at this broad and imprecise distinction between problems and struggles in order to understand the despair that I see around me. I attribute that despair to a confusion between these orders of difficulty. It makes sense to despair of solving a problem — some things, after all, cannot be fixed; but it makes no sense to despair in a struggle, because disappointment is a regular feature of struggle, and perseverance comes before success. Injustice is much bigger than a problem. Anybody who combats injustice without the wisdom of struggle will fail in the effort to prevent it from becoming a fate. There are concrete instances of injustice, of course, which can be addressed with legal or political remedies. But there are no policies for the human heart. An earned income tax credit cannot heal psychic and cultural wounds. Discrimination can be ended by practical means, but not racism. Discrimination is a problem, but racism is a struggle. Racism, and all the other panics about difference, will never disappear. They are as old as civilization, and the greatest affront to it. All that can be done is to raise the legal and political and social costs of a particular expression of a prejudice, and then, having inflicted defeat upon it, await its resurgence, which must never surprise us even when it shocks us. The struggler is not a pessimist, but he is a disabused man. The appearance of anti-Semitism in America does not refute the revolutionary promise of America for Jews, because which student of Jewish history, which student of Christian history, which student of evil in human history, ever believed that once and for all anti-Semitism would end? Anti-Semitism was never illegitimate in the European political tradition, nor in the Russian one, but it is illegitimate in America according to the terms of our founding. (Whereas white supremacy was inscribed in some of them.)

    When friends tell me, as a consequence of Trump and the ascendancy of the radical American right, that America is over, or when they tell me, as a consequence of Netanyahu and the ascendancy of the Israeli right, that Israel is over, I castigate them for being disinclined to struggle. (I have three motherlands: America, Israel, and my library.) When they tell me, as they spin the globe, that democracy is over, I reply that the rise of authoritarianism is not an event, but an era; and that it will take a long time, a generation or more, to push back the authoritarians and restore the prestige of the open society; and that we must not measure the crisis in election cycles, because it is more profound than politics; and that the inability of democracy to defend itself has always been its greatest historical failing; and that its rejection does not refute it — in sum, that we are in a historical struggle. The refusal to recognize it as such makes it more likely to fail. It is, moreover, a privilege to serve. The struggle for democracy, like the struggle for justice, makes life less trivial. Camus believed that Sisyphus was happy.

             But do we, as they say in foreign policy, any longer have the staying power? The analogy with foreign policy is actually quite useful. One already hears and reads about “Ukraine fatigue” in America. We are fatigued by their fight for survival? The vanity! If the Ukrainian war is just, then it is just even when we get tired of it. The Biden administration has responded more or less splendidly to Putin’s aggression, but more will be needed, because this is not a problem, it is a struggle. (The Ukrainians have established “resiliency centers” against the destruction of the country’s infrastructure and the winter cold.) It was right about now that I expected the administration’s determination to collide with the country’s lack of determination. I mean, it’s been a whole year. Pretty soon we will have another “forever war” on our hands.

    There is no more damning evidence that the readiness for struggle is waning in America than our stupid retreat from Afghanistan. Twenty years is not even close to forever, except for people who do not understand historical time and have been damaged by the warp speed of American life. There were sound moral and strategic reasons for our presence in Afghanistan; and this is unwittingly conceded every time the same opinion pages that stridently called for an end to the “forever war” publish poignant pieces about the plight of Afghan women and Afghan schoolchildren in the kingdom of the Taliban. What did they think was going to happen? The whole world was taught that it could wait America out, that we have only a limited competence for commitment. Unlike us, our enemies know how to practice the art of waiting. They are not intimidated, or bored, by the longue durée. In their global rivalry with us, they are preparing for a struggle.

    The psychology of struggle is a brake also against another danger that faces us. Owing to the magnitude and the multiplicity of the crises that confront us, the apocalyptic spirit has been given new life. Hysteria is increasingly accepted as intelligent, as a condign response to a proper analysis of things. In our culture we are riveted by endings, especially by spectacular ones. There is a new fashion in the-end-of-history, which is just as blind as the old one. Unlike the old one, this one is animated not by a sensation of triumph but by a sensation of weariness, by a loss of heart. History may now be numbered among the causes of depression. The prophecies of decline and destruction are overwhelming. In politics, the belief that time is running out, that it is too late to change course, that all that awaits us is cataclysm, has two antithetical consequences: apathy and apocalypse.

    An apocalyptic is someone who decides to treat a struggle as a problem, and to get it over with. He wants a quick eschatological fix; his understanding is distorted by his desperation. Despondency has sapped him of his will and his energy, or rather, it has left of his will and his energy only enough for the less exacting way of radicalism, which (as we know from the radical past) will either blow things up or exhaust itself. Struggle, in other words, even struggle unto the generations, is the quintessential anti-apocalyptic path. It will not be waited out, or permanently hobbled by gloom. In its decision to outwit despair, in its solemn promise that its resolution will be invulnerable to fortune, the spirit of struggle arms us not only against the injustice that we fight but also against our own frailties. We may reflect, and be calm, and hold together, in the storm, because we are wiser than the storm. Like Dürer’s knight we can advance, but unlike Dürer’s knight we are not alone.

    The Court Gone Wrong

    What is happening on the Supreme Court of the United States? 

    The Court has overruled Roe v. Wade. It has rejected the whole idea of a right to privacy. It is sharply restricting the ability of federal agencies to protect safety, health, and the environment. It is limiting voting rights. It is expanding the rights of gun owners, commercial advertisers, and those who wish to spend a lot of money on political campaigns. It is moving very quickly, and almost always in directions favored by the political right.

             None of this comes out of the blue. It is the culmination of four decades of intense work, meant to move constitutional law in exactly these directions — work by activists and scholars, politicians and lawyers-for-hire, corporate lobbyists and the National Rifle Association, religious organizations and the Federalist Society. It was a long process, but it seems fair to say that they have finally won.

             I received a firsthand sense of what was afoot in 2002, when I found myself in a large audience at the University of Chicago Law School, waiting to hear a speech by Douglas H. Ginsburg, who was then Chief Judge of the influential Court of Appeals in Washington, DC. Tall and thin, with a bemused and scholarly manner, Judge Ginsburg is an able and fair-minded judge. He is a generous and kind person to boot. He is also a graduate of the University of Chicago Law School, which was my home institution at the time. I like and admire him. But on that day I was flabbergasted by what I heard; actually I was appalled. Judge Ginsburg called for something like a constitutional revolution. 

     

    Judge Ginsburg contended that the Supreme Court abandoned the United States Constitution in the 1930s, when it capitulated to Franklin Delano Roosevelt and his New Deal. He sought to return to the Constitution as it was understood before the capitulation.

    Ginsburg began by emphasizing that “ours is a written Constitution.” Making a bow in the direction of populism, he contended that this observation is controversial in only one place: “the most elite law schools.” The fact that the Constitution is written has major implications. If judges are “to be faithful to the written Constitution,” they must try “to illuminate the meaning of the text as the Framers understood it.” 

    In Ginsburg’s account, judges were faithful to the Constitution for most of the nation’s history — from the founding period, in fact, through the first third of the twentieth century. But sometime in the 1930s, “the wheels began to come off.” In that period the nation faced the Great Depression, and President Franklin Delano Roosevelt tried to do something about it, above all with his New Deal, which greatly expanded the power of federal agencies, through, for example, the creation of the National Labor Relations Board and the Securities and Exchange Commission. Responding to “the determination of the Roosevelt Administration,” Ginsburg declared, the Supreme Court abandoned its commitment to the Constitution as written.

    How did this happen? Judge Ginsburg’s first example was Congress’ power, under the Constitution, to “regulate commerce . . . among the several states.” What does this mean? Judge Ginsburg referred, with enthusiastic approval, to the Supreme Court’s view that Congress lacked the constitutional power to ban child labor. But his strongest complaint involved the Supreme Court’s decision, in 1937, to uphold the National Labor Relations Act, which protects the right of workers to organize and to join labor unions. In upholding the Act, the Supreme Court said that when strikes occur, interstate commerce is affected. A strike in Pennsylvania often has a big impact elsewhere. 

    Judge Ginsburg objected that this is “loose reasoning” and “a stark break from the Court’s precedent.” But his complaint went much deeper. The Court’s acceptance of the National Labor Relations Act was not merely “extreme.” It was also “illustrative.” He objected that the Supreme Court has upheld the Clean Air Act, which, in his view, violates the separation of powers by granting excessive discretion, and hence legislative power, to the Environmental Protection Agency. Under the Constitution, legislative power rests in Congress; Judge Ginsburg said that because the Clean Air Act allows the Environmental Protection Agency to make the law, the “structural constraints in the written Constitution have been disregarded.” 

    But even this is just the tip of the iceberg. Since the 1930s, the Court has “blinked away” crucial provisions of the Bill of Rights. Of these, Judge Ginsburg singled out the Constitution’s Takings Clause, which says that government may take private property only for public use and upon the payment of “just compensation.” Judge Ginsburg complained that the Takings Clause has been read to provide “no protection against a regulation that deprives” people of most of the economic value of their property. In other words, the Court allows government to impose regulations, especially in the environmental area, that do not quite “take” private property but that much diminish its value. Judge Ginsburg objected that the Supreme Court has not required government to compensate people for their losses. 

    At the same time that the Court has “blinked away” the individual rights of the American Constitution, judges have manufactured new rights of their own devising. In his view, these rights are fake news. In this way, members of the Supreme Court have acted not as judges, but as a “council of revision with a self-determined mandate.” What does Judge Ginsburg have in mind? His chief objection was to the right of privacy. It seemed clear that he rejected Roe v. Wade.

    But he went much further than that. He singled out the Court’s decision in 1965 in Griswold v. Connecticut, the foundation of modern privacy law. In that case, the Court struck down a law forbidding married people to use contraceptives. Judge Ginsburg objected that a judge “devoted to the Constitution as written might conclude that the document says nothing about the privacy of” married couples. The Griswold decision, he added, is “not an aberration.” It is matched by recent decisions holding that the Constitution imposes limits on capital punishment, such as its decision in 2002 striking down a death sentence imposed on an intellectually disabled defendant. 

    Judge Ginsburg’s narrative, then, is simple and straightforward. Until 1933 or so, the Court followed the Constitution. At that point, it adopted a “freewheeling style.” But Judge Ginsburg offered real hope for the future. In recent years, a small but growing group of scholars and judges has been calling for more fidelity to the constitutional text, focusing on the original meaning. “Like archeologists, legal and historical researchers have been rediscovering neglected clauses, dusting them off, and in some instances even imagining how they might be returned to active service.” 

    Judge Ginsburg’s leading example? The Second Amendment to the Constitution, which protects the right “to keep and bear arms.” Judge Ginsburg gave a strong signal that judges might well strike down gun control legislation. His exact words? “And now let the litigation begin.”

    Judge Ginsburg was speaking here of what he himself called the Constitution in Exile — the real Constitution, the one that should be restored. What made his argument so remarkable is that Judge Ginsburg was, and is, a responsible person with a first-rate intellect — and, in his judicial capacity, he displays a large measure of restraint. But in his speech twenty years ago, calling for radical changes in constitutional understandings, Judge Ginsburg was hardly speaking in a vacuum. On the contrary, he was summarizing a line of argument that such conservative luminaries as Robert Bork, Edwin Meese, and Antonin Scalia had been developing for decades. That line of argument had been embraced by many members of the Federalist Society and the Republican Party as well. “And now let the litigation begin” — that was their mantra.

    Judge Ginsburg set out a kind of Constitutional Wish List. The goal was to transform constitutional law, and to do so in major ways. For those on the right, the Constitutional Wish List included the following:

      A broad understanding of the individual right to possess guns.

      A rejection of Roe v. Wade.

      A rejection of the right to privacy in general.

      New limits on the power of modern administrative agencies, including the Environmental Protection Agency. 

      Dramatically strengthened property rights.

      Sharp reductions in Congress’ power under the Commerce Clause.

    In 2002, all this seemed unlikely in the extreme. Would the Supreme Court really be prepared to turn so many constitutional understandings upside down? Astonishing but true: we now have to put a checkmark next to each and every item on the list. They have all been achieved.

     Before 2008, the Supreme Court had rejected the idea that the Constitution creates an individual right to possess guns. Now the Court recognizes that right — and is steadily expanding it. Before 2022, Roe v. Wade was the law of the land. Now it is overruled. Before 2022, the right of privacy seemed firmly ingrained. Now it is gone. Until recently the Court had embraced, in ways large and small, the power of modern administrative agencies, including the Environmental Protection Agency. Now it has sharply limited that power. Just as Judge Ginsburg hoped, property rights have indeed been enhanced. The Court did uphold the Affordable Care Act, by a vote of 5-4, but in the process it announced new limits on Congress’ power under the Commerce Clause. And all this might be just the beginning. With respect to voting rights, freedom of speech, the rights of criminal defendants, freedom of religion, and much more, dramatic changes seem to be coming.

     There are two ways to understand the recent developments. The first, in the spirit of Judge Ginsburg’s argument, is jurisprudential. It insists that the Court is now being “faithful to the written Constitution” — that it is (finally!) following the Constitution “as written.” On this understanding, the Supreme Court has become “originalist,” which means that it is adhering to “the original public meaning” of the Constitution. If that is the right understanding, we need to ask a single question: Is originalism right?

    The second understanding is political. It is that the Court’s understanding of the Constitution is uncomfortably close to the political preferences of the current Republican Party. On that view, the Court is lawless. It is acting as a political body, even if it understands itself as being faithful to the written Constitution.

    Let us begin with originalism. What is it, and what does it entail? 

    That is a surprisingly hard question to answer. The term itself was coined in 1980 by the Stanford law professor Paul Brest, in a law review article that sketched what, in his view, were devastating objections to the whole idea. Brest meant to challenge a view about constitutional interpretation associated with Bork and Raoul Berger (a legal historian at Harvard) that was, at the time, a kind of fringe position, with little support even among right-of-center academics. (At the time, conservative scholars tended to argue more broadly in favor of “judicial restraint,” understood as respect for the decisions of the political process.) As a fringe position, originalism had little influence and political salience.

    What a difference forty years make! Originalism now comes in many shapes and sizes. It is used as a political rallying cry. It has been elaborated in great detail by a host of sophisticated law professors, among them Lawrence Solum and William Baude; law professors who embrace originalism disagree vigorously with one another about what originalism means and requires. Some originalists follow Ginsburg in emphasizing the intentions of the Constitution’s authors; others think that the search for the authors’ intentions is a fool’s errand. Some originalists think that it is important to respect precedents, even if those precedents are not originalist; other originalists think that respecting them is entirely wrong, and that the original understanding should trump the Supreme Court’s mistakes. Some originalists think there is a difference between “interpretation,” where judges must follow the original meaning, and “construction,” where judges have nothing to follow and must exercise discretion; other originalists reject this distinction and seem to be appalled by it.

    Amid all the debates, one variety of originalism now seems to be in the ascendancy. It is called “public meaning originalism.” Justices Thomas, Alito, Gorsuch, and Barrett seem committed to it, and Justice Kavanaugh seems to like it a lot. On this view, the Constitution must be interpreted in a way that fits with its original public meaning. That means that terms such as “freedom of speech,” “executive power,” “cruel and unusual punishment,” and “due process of law” must be understood not only in accordance with their semantic meaning, but also with the meaning that people would have given to them at the time of ratification. Interpretation, in this view, depends on an inquiry into history, not on any kind of moral judgment. As Richard Fallon puts it, public meaning originalists contend that the public meaning can be “discovered as a matter of historical and linguistic fact.” In Solum’s words, “the meaning of the constitutional text is a function of the conventional semantic meanings of the words and phrases as they are enriched and disambiguated by the public context of constitutional communication.” 

    Originalists are keenly aware that it is often hard to discover the original public meaning of words as they were used in the late eighteenth century. They know that reasonable people, including specialists, disagree on historical questions. They also know that unanticipated social changes can greatly complicate the search for historical answers. What is the original public meaning of “freedom of speech” as applied to radio and television? How should we understand protection against “unreasonable searches and seizures” as applied to the Internet? The most careful originalists do not ignore these questions. Still, they insist that if judges are originalists, many questions are easy. They add that when originalism leaves some questions open, or makes them really hard to answer, at least it provides the right orientation. 

     

    There is no doubt that if judges followed the original public meaning of the Constitution, constitutional law would be radically transformed. The national government would be permitted to discriminate on the basis of both race and sex. If the national government wanted to segregate people by race, it could almost certainly do that. The right to free speech would be greatly truncated. Blasphemy could probably be made a crime. States could probably allow public figures to recover huge sums of money for defamation. 

    The idea of one person, one vote would be out the window. If the federal government wanted to take away people’s Social Security benefits, or welfare benefits of various sorts, it might not have to give them any kind of hearing. Contrary to Judge Ginsburg’s view, protection of property rights would be reduced, not expanded: some of the most careful scholarly work suggests that according to the original public meaning, the Constitution protects only against physical invasions of property, and imposes no barrier to regulation that greatly diminishes the value of property. All this, by the way, is just the beginning of what would be possible.

             Originalists are acutely aware that their preferred method might lead to outcomes that many people would abhor, and they have a variety of responses. Some originalists insist on the importance of democracy and on the need to rely on democratic processes, not on courts. If originalism might allow government to do what some people consider to be terrible things — for example, to ban contraceptives or to sterilize people — originalists respond that in a self-governing society, the appropriate correctives come from We the People, not from unelected judges. Consider the case of abortion: originalists say that if the right to choose is to be protected, it must be because majorities want it to be.

    Other originalists emphasize the rule of stare decisis: judges should ordinarily respect their own precedents, even if they are wrong. True, the Court was willing to overrule Roe v. Wade, but even in doing so the Court proclaimed that other privacy rulings, including those that protect the right to use contraceptives, were not necessarily at risk. Still other originalists contend that the answers to the historical questions might not be so terrible. Many originalists are at pains to say that on their approach, states may not segregate school children by race. Some originalists contend that on originalist grounds a broad right to freedom of speech is secure.

    Most fundamentally, originalists argue that their approach is mandatory rather than optional. If it requires abhorrent conclusions, that is, in a sense, a sign of intellectual integrity, a badge of honor. In their view, originalism is the only legitimate approach to interpretation, and it is justified independently of the outcomes that it produces. 

    Each of these arguments must be addressed on its own terms. Democracy is fundamental, of course, but is it really right to think that the scope of freedom of speech, racial equality, and personal privacy should be defined by political majorities? Would the United States have been better off if the Supreme Court in the twentieth century had limited these and other rights to the understandings of the eighteenth and nineteenth centuries? Consider these words from Justice Felix Frankfurter, from a memorandum that he wrote in 1953 for his files during the Supreme Court’s deliberations over the constitutionality of school segregation:

     

    But the equality of the laws . . . is not a fixed formula defined with finality at a particular time. It does not reflect, as a congealed summary, the social arrangements and beliefs of a particular epoch. It is addressed to the changes wrought by time and not merely the changes that are the consequences of physical development. Law must respond to transformations of views as well as that of outward circumstances. The effects of changes in men’s feelings for what is right and just is equally relevant in determining whether a discrimination denies the equal protection of the laws.

    Some originalists believe in respecting precedents, but many do not. Is it sufficient to say, on behalf of a theory of interpretation, that it would not do nearly as much damage as it might, because some judges are willing to ignore that theory?

    The most important argument about originalism is that it is mandatory. Many originalists seem to think that the very idea of interpretation requires their preferred approach. This is a colossal mistake. The Constitution does not contain instructions for its own interpretation. It does not have an Originalism Clause, directing judges to be originalists. Originalism is a choice. Whether it is the right choice must depend, inevitably, on whether it would make the American constitutional order better rather than worse. That is not a hard question. 

    In these circumstances, it is natural and fitting to wonder: what, exactly, have liberals been doing over the last few decades? For that matter, what have conservatives been doing, if they reject originalism or seek other paths? The short answer is: a lot. Like Paul Brest in 1980, many liberals have been vigorously attacking originalism, sometimes on the grounds that it is much squishier than it purports to be, sometimes on the grounds that it would lead to a host of intolerable results. 

    Liberals have also been developing their own theories of interpretation. For decades, Ronald Dworkin argued for “moral readings” of the Constitution, in which judges would infuse broad phrases with their preferred moral content. Some members of the Supreme Court, including Anthony Kennedy and Sonia Sotomayor, have seemed to agree with Dworkin; consider the Court’s decision to require states to recognize same-sex marriages. In 1980, John Hart Ely published Democracy and Distrust, which argued that judges should protect democracy itself, by safeguarding democratic processes and those who are at a particular disadvantage in them. Some members of the Court, including Ruth Bader Ginsburg and Stephen Breyer, have often seemed to agree with Ely. Left-of-center theorists and practitioners, such as Larry Kramer, the former dean of Stanford Law School, have developed other approaches as well, with occasional (and steadily mounting) enthusiasm for a more modest role for the Court, and with an insistence that the justices are most likely to protect those who already have the most power. But on the current Court, dominated by Republican appointees, it is not easy to find five votes in favor of positions associated with Dworkin, Ely, and Kramer.

              This point puts a bright spotlight on the elephant in the room: the relationship between constitutional law and political convictions. It would be a true miracle if originalism, properly applied, consistently led to outcomes favored by the extreme right wing of the contemporary Republican Party. To update Ginsburg’s Wish List: robust gun rights, a ban on affirmative action, reduced voting rights, restrictions on campaign finance laws, no abortion rights, no privacy rights, strengthened property rights, sharp limits on the power of administrative agencies, greater protection of commercial advertising, no right to same-sex marriage, reduced rights for criminal defendants. What are the odds, really, that a particular method of interpretation, honestly applied, would always result in outcomes pleasing to one political side? On the Supreme Court, however, justices who favor originalism are drawn, time and time again, to rulings that belong on that particular Wish List.

    It is important to say that among law professors who are interested in originalism, we can find humility or uncertainty about what, exactly, the relevant history shows. And among law professors who are interested in originalism, we can sometimes find left-of-center conclusions — as in the view that the Equal Protection Clause requires the authorities to protect people of color every bit as well, and as much, as they protect white people. But there is no mistaking the fact that as it is being practiced by real judges, originalism is consistently producing conclusions that delight the political right.

             In these circumstances, it is fair to wonder whether the Supreme Court is doing law at all.