Proust and the Mystification of the Jews

             The controversy over whether Proust was in any sense a Jewish writer or, on the contrary, in some way essentially a Jewish writer, began in France only weeks after he was buried. It still persists there. But before we dip into these muddied waters, some clarifications are in order about the contradictory milieu from which he sprang.

             Proust, who was born in 1871 in Auteuil near Paris, was, of course, a half-Jew, though that is not how he defined himself to others. His mother, Jeanne Weil, was the daughter of a wealthy Jewish stockbroker. Her son remained deeply attached to her all his life, and her image is affectionately inscribed in In Search of Lost Time in the figure of the Narrator's mother and to some degree in that of his grandmother as well, a figure drawn also from his actual maternal grandmother. Jeanne's marriage to Adrien Proust, a perfunctory Catholic, was undertaken for social reasons on her part and for economic reasons on his. He was a highly ambitious physician who would attain considerable professional success, and the substantial dowry that Jeanne Weil brought to the union helped launch him on his career. The son of a grocer, he could not offer her lofty social standing through his background, but his identity as a Catholic gave her and the two sons she would bear him the necessary entrée into French society. A stipulation of their marriage contract was that any children of their union would be baptized as Catholics. Jeanne, however, never contemplated conversion, nor, as far as we know, did her husband attempt to persuade her to convert.

             There appears to have been no great romantic element in the marriage, and as was very common in haut-bourgeois circles at the time, he had mistresses, including, it seems, one whom he shared with his wife's uncle, Louis Weil. Proust himself was by no means a pious Catholic, though for a brief period around the age of twenty he actually thought about a vocation as a priest. The abiding appeal of Catholicism for him was aesthetic; it also served as a marker of identity. In 1896, in an often quoted letter to his friend Robert de Montesquiou, he states flatly that "I am Catholic like my father and my brother; on the other hand, my mother is a Jew." Why this attenuated connection with Jewish origins should enter into the literary enterprise of a self-affirmed Catholic is by no means evident, but the connection cannot simply be dismissed. To address this question, one must understand something of the ambiguous — or perhaps one should say, amphibian — nature of the Parisian Jewish milieu of which the Weil family was a part.

             A recent book by James McAuley, The House of Fragile Things: Jewish Art Collectors and the Fall of France, happily offers a vivid and detailed account of this milieu. During the nineteenth century, a concentration of vastly wealthy Jewish families gathered in Paris. Many of them came from Germany or from Alsace-Lorraine (Balzac's Jewish banker speaks with a heavy German accent), though one prominent family, the Camondos, were Sephardic, their origins in Istanbul. To get some idea of the wealth these families possessed, consider that the family patriarch Salomon Camondo was said to be the richest man in the Ottoman Empire. Many of these people remained for a time self-identified Jews, supporting synagogues and communal institutions, marrying and burying as Jews, and mostly wedding within their own social circles. The list of the affluent Jewish families is long: the Camondos, the Rothschilds, the Ephrussis (chronicled by their descendant Edmund de Waal in The Hare with Amber Eyes), the Reinachs, the Cahen d'Anvers, the Weils. Everyone in this milieu aspired to enter the highest echelons of French society.

     Given that aspiration, it is hardly surprising that many of them intermarried, like Jeanne Weil, or converted to Catholicism, as she did not, and sought to put entirely behind them any traces of their Jewish origins, as she also did not. Alas, in the early 1940s many descendants of these families, including those who thought their Catholicism and their high social standing would protect them, were deported and murdered in the Nazi death camps. One must, of course, resist the temptation to say haughtily, "Little did they know…," an attitude against which Michael André Bernstein argued vigorously in Foregone Conclusions, his indispensable book on thinking about past lives after historical catastrophe. But for a time — and certainly in Proust's lifetime — it seemed as though the offspring of these wealthy Jewish immigrants to France would continue to flourish splendidly, shining at the heights of French society and culture, whether as Jews or otherwise.

             As McAuley's fine book richly illustrates, a principal avenue for this flourishing was aesthetic, and this aspect of upper-class French Jewish life has obvious relevance to Proust. Many of these families became great collectors of art. McAuley shows that their tastes tended to focus on paintings, furnishings, and objets d'art from the ancien régime, evidently out of a desire to identify with the traditional and aristocratic elements of French culture. There is an approximate analogy between this drive to collect and that of the wealthy New York families of the Gilded Age — one thinks of the Fricks, who left a legacy in the museum that bears their name — a group that was mainly nouveau riche. But the aestheticism of their French counterparts was more pronounced. In due course, they would leave their collections to national museums or turn their own grand mansions into museums. To get some notion of the sheer opulence of these collections, one has only to visit the Musée Nissim de Camondo, once the family's Paris residence.

     All this ostentation of wealth in the collections elicited contradictory responses from those who considered themselves Français de souche, authentic native French. As the great Jewish collectors became public benefactors, they were appreciated by some for contributing to French culture. Others, predictably, resented the display, which in their eyes confirmed their preconception that Jews, inevitably vulgar, were concerned with nothing but the conspicuous show of wealth. The very taste of their collections was excoriated by some as a violation of true French aesthetic values, an act of cultural subversion.

     Such conflicting views are brilliantly etched in the representation of Jews and of the responses to them in the pages of In Search of Lost Time. This, after all, was the milieu that Marcel Proust knew through his mother. And it should be said that, Catholic though he officially declared himself to be, he counted among his schoolmates many acculturated secular Jews, mostly unbaptized. Although the young Marcel certainly had no desire to be thought of as a Jew, or even a so-called half-Jew, he could scarcely disengage from the awareness that the Jews were a distinct social constellation in France, encompassing even those who did not see themselves as part of it, and this visceral awareness came to play an important role in his novel.

             Where does this ambiguous status of Jews in France lead when one thinks about the Jewish dimension, if there is one, of Proust's writing? If one may judge by the range of responses to his novel from the 1920s to the present, it leads to some strange places. Many of these responses have been documented in detail in an exhaustively researched new book by Antoine Compagnon called Proust du côté juif. (Such extreme thoroughness of documentation is perhaps not surprising in a country where doctoral dissertations often run to a thousand pages.) Compagnon is a prominent literary scholar who has written on Montaigne, Baudelaire, literary theory, and, repeatedly, on Proust. In this book, he tracks the published responses to Proust by Jewish writers from the early 1920s and then forward decade by decade. Many of these responses are rather curious. Let me add that Compagnon is party to a current debate on Proust's Jewishness, to which I shall presently turn.

             Perhaps the oddest thing about the early views of Proust is that a group of young French Zionists, writing in La Revue juive, Menorah, Palestine, and sometimes in other journals, recruited him as an inspiration for their movement. They did not claim that he was actually a Zionist, and the famous — for some, notorious — remark in the introductory chapter of Sodom and Gomorrah — that “the fatal error [of homosexuals] would consist, just as a Zionist movement has been encouraged, in creating a sodomist movement and in rebuilding Sodom” — clearly does not make him a Zionist. The reasoning of those young Jewish intellectuals was along the following lines: Proust was already widely regarded as one of the major French novelists. In his big book, as they wanted to see it, this great writer offered a compelling example, chiefly through the character of Swann, of a staunch Jewish identity that was entirely secular. And this was precisely what they hoped to achieve through Zionism. In this oblique way, they contrived to see him as a Jewish writer, an extraordinary novelist they could claim as their own. The argument hardly requires refutation.

    Another line of argumentation for claiming Proust as a Jewish writer was taken up in these early responses: that it was a matter of heredity. This view was typical of the common tendency to create a mystique about Jews. Frequent comparisons were made in those journals between Proust and Montaigne as purportedly Jewish writers — to be sure, with some pushback in the publications of the era. Montaigne had a grandmother whose maiden name was Lopez; she was a convert to Catholicism, and some have proposed that she was a Marrano, a crypto-Jew, despite her outward Catholicism. She was, of course, a forebear two generations removed from the writer, and there is no evidence that she could have influenced him, even if in fact she did remain a secret Jew. Yet these young Zionist writers assumed that she preserved a distinctively Jewish consciousness which somehow percolated down to her grandson. Perhaps it was through “the blood”? 

     In this uncertain light, it was claimed that Montaigne's strikingly innovative mode of writing, with its frankness, its unblinking reflectiveness, and its bold analysis of mores and men, derived from his supposedly Jewish identity. Montaigne, it should be said, remained a loyal if not entirely a devout Catholic, taking care to leave instructions that he should be buried according to the Catholic rite. In the famous tower where he wrote, he arranged for a balcony to be built so that he could participate in mass being conducted down below. He had, moreover, nothing like Proust's intimate relationship with a mother who remained a Jew. Yet he and Proust were said to share a distinctively Jewish ruminative style, even if the prose of one does not read much like the prose of the other. This mode of thinking, sad to say, is the positive mirror-image of the negative idea of heredity — the racist idea — that flourished after 1492 in the Spanish notion of limpieza de sangre, or "purity of blood," which had to be preserved by Old Christians against invasive aliens. The notion reached its monstrous apotheosis in Nazi Germany — namely, that a few drops of Jewish blood were forever determinative.

             Another method of claiming Proust as a Jewish writer was through a proposed link with traditional Jewish sources, and this has persisted until our day. In the 1920s, it was contended by some that his innovative and often complicated style was "talmudic," although there is no evidence that he had the slightest acquaintance with the Talmud. The Talmud is an important Jewish text that is exotic and seemingly impenetrable to outsiders, and so it has been tempting for some, as part of a general process of exoticizing the Jews, to attribute a "talmudic" character to any writer having a Jewish heritage. But "talmudic" means much more than dense or allusive. The association of Proust with the Talmud met published resistance in this early period, the objectors rightly countering that Proustian prose, however original, was thoroughly rooted in French literature.

             The "talmudic" argument, with occasional exceptions, has not persisted, but an even stranger one, the association of Proust with another quasi-canonical Jewish text, the Zohar, still flourishes. The seductive appeal of the Zohar as a ground of Jewish writing is clear. If the Talmud is exotic, the Zohar, a mystical text composed in thirteenth-century Castile that is often deeply mystifying, is exotic to the second power. No less a French intellectual eminence than Julia Kristeva, in a study of Proust in 1994, stated with perfect confidence, though without any evidence (as Compagnon notes), that Proust drew on a translation of the Zohar when he was writing his novel. She throws into the mix of sources, moreover, the old contention about the Talmud: "One knows that Jewish tradition" — about which, of course, "one" knows almost nothing — "and especially the talmudic, to which Proust was responsive, proliferates interpretations." Having established by fiat that Proust was responsive to talmudic tradition, she confidently concludes that "in this light, Proust's experience can be said to be talmudic." This is all nonsense.

             In the current French scene, the Zohar connection has been revived by Patrick Mimouni in a series of articles and in a recent book called Proust amoureux: Vie sexuelle, vie sentimentale, vie spirituelle. Mimouni, born in Algeria to a Jewish family, began his career as a film-maker with a sequence of films about AIDS and homosexuals. In the 1990s he began writing about Proust, soon immersing himself in the minute details of Proust's biography and repeatedly focusing on the role of homosexuality in Proust's work. In his new book, he conveys a clear sense, following others, that Proust was not a garden-variety homosexual. It is known that, though he had a few lasting relationships, he frequented the kind of homosexual brothel catering to special tastes in which Baron Charlus is seen in the novel. Reports have circulated, passed on by Mimouni with, I think, a certain relish, that Proust attained orgasm by watching two rats savagely attacking each other, and Mimouni accepts the story that the horrific scene in the novel in which Mademoiselle Vinteuil, in the presence of her lesbian lover, spits on a photo of her dead father actually mirrors an act by Proust, who in a gay brothel spat on an image of his beloved mother.

             For our present concerns, however, Mimouni devotes attention to what he contends is an important connection between Proust and the Zohar. This would be a significant feature of his vie spirituelle, the term in French spanning "spiritual" and "intellectual." The slender basis for this proposed familiarity with the Zohar is a passing reference by Proust to "reading" the Zohar around the time of his journey to Venice in 1900. But when he was working on In Search of Lost Time, a French translation of the Zohar was not available. His only access to the arcane work would have been through a Latin version. It is far from clear that his lycée Latin would have enabled him to get anything out of such a difficult esoteric text. Thus, Compagnon, in a rejoinder to Mimouni, expresses warranted skepticism about the notion that Proust was acquainted with the Zohar in any way. Mimouni, for his part, has accused Compagnon of deleting a reference to the Zohar in his edition of Proust's notebooks — a rather grave charge. The weight of evidence looks to be in favor of Compagnon, but for Mimouni and others determined to make Proust into a Jewish writer the temptation to see a link with the Zohar is irresistible. And since few actually know what is in the Zohar, it is easy to claim that Proust drew on it. Such a claim will be attractive to the many who have no informed connection with Jewish tradition but are either striving to be affirming Jews or are philosemitic gentiles, thinking of Jewish culture as something magical and mystical that was somehow transmitted to writers of Jewish extraction, however removed they were from Judaism and the Jewish tradition. To cite a small symptomatic instance in Proust's case: a brief mention in a letter to a friend in 1908 of the Jewish mourning custom of laying a small stone on the grave of the departed has been leaned on heavily by some commentators as a sign of their hero's extensive familiarity with traditional Jewish practices.

    What, then, can reasonably be made of the Jewish side of Proust’s great novel? A kind of baseline proposition was put forth by Edmund Wilson nearly ninety years ago in Axel’s Castle, his pioneering study of literary modernism. With a flourish of characteristic common-sense intelligence, he wrote that “it is plain that a certain Jewish family piety, intensity of idealism and implacable moral severity, which never left Proust’s habits of self-indulgence and worldly morality at peace, were among the fundamental elements of his nature.” This seems just, though I am not quite sure about the intensity of idealism. One readily sees how he could have drawn these attributes from his mother, with no mystery of the blood as conduit. But the picture requires some complication.

             In Search of Lost Time is certainly not a Jewish novel, but there is a noticeable presence of Jews within it, one that becomes increasingly evident in the last stages of the book. Snobbery is involved in much of this. Proust is surely one of the most probing and subtle anatomists of snobbery in the history of the novel. He himself could definitely be regarded as a snob, but this only enabled him to understand the phenomenon all the more keenly. He was a man who loved to be loved — in the first instance, of course, by maman, but then especially by well-placed people in society. He did not hesitate to hobnob with vicious antisemites as long as they were sufficiently prestigious. Thus he dined at the home of Lucien Daudet, son of Alphonse Daudet, a contributor to Édouard Drumont's violently antisemitic La France juive, while Lucien's brother Léon belonged to the extreme nationalist right. Proust sat in silence while his host delivered himself of vituperative pronouncements on the Jews, and only later did he object in a letter to a friend. Similarly, he said nothing when his friend Robert de Montesquiou spewed a tirade against the Jews, though the next day he wrote a letter to Montesquiou, which I cited above, saying he had to differ with him on this because his mother was a Jew, though he himself was a Catholic. Evidently, he saw as the admission price to French high society a willingness to hear Jews vilified and to bear it in silence.

             The vehemence of French antisemitism in this era may be a little hard to imagine now. It scarcely yields pride of place to the growing anti-Jewish fury across the border in Germany over the next several decades. That widespread hostility toward the Jews is the context of the Dreyfus Affair that split France apart as Proust was coming of age, and put a decisive stamp on the later pages of In Search of Lost Time. Drumont was the leading spokesman for this unrestrained hatred. Characteristically, early in the Dreyfus Affair, he wrote of Joseph Reinach, who belonged to one of those wealthy Parisian Jewish families, "If his ape-like face and his deformed body carry all the stigmata of his race, all the faults of the breed, his hateful soul swollen with venom better sums up all its evil, all its genius, disastrous and perverse." Der Stürmer did not invent anything new. It is important to realize that Proust, longing to be accepted in the best French society, was hardly moving through a neutral environment.

             A number of Jewish characters, for the most part in walk-on roles, cycle through the many episodes of In Search of Lost Time. Two of them are rather important in the imaginative economy of the novel. They are, of course, Bloch and Swann. In most respects they are altogether antithetical portraits, negative and positive, respectively. The Narrator has been an acquaintance of Bloch since the early days of both, though he has taken care to preserve a certain distance from him. Bloch is in certain ways the embodiment of the off-putting qualities that antisemites attribute to Jews. He is vulgar, unpleasantly assertive, conspicuously ambitious, and a social climber.

     One of the hallmarks of Proust's greatness as a novelist is that in the course of time Bloch undergoes a transformation, whether seeming or real, like many of the other characters. When we encounter him later in the novel, he has assumed a noble-sounding name, Jacques du Rozier, married a Christian from a well-placed family, refined his manners, and contrived to attach himself to the French aristocracy. The Narrator may well invite us to regard this self-transformation as a disguise, perhaps a different manifestation of Bloch's initial vulgarity. The portrait of the new version of Bloch is amusing and etched in acid: "Thanks to the haircut, the removal of his mustache, to the general air of elegance, to the whole impression, his Jewish nose had disappeared, in the way a hunchback, if she presents herself well, can seem almost to stand straight."

             Yet there may be a certain element of the author in Bloch. He was, after all, the son of a Jewish mother and on his father's side a grocer's grandson. He by no means grew up as a Jew, as Bloch did, but he must have been perceived as a Jew by at least some in the social stratosphere that he chose to inhabit. It was widely asserted that in the famous Man Ray photograph of a bearded Proust on his deathbed he looked like an Old Testament prophet — though not to me — as though at the end the Jew had emerged from the mondain. Even more pointedly, some months earlier, an acquaintance, Fernand Gregh, remarked: "One evening, after for a time he had let his beard grow, it was suddenly the ancestral rabbi who appeared behind the charming Marcel we knew." Endowed with a good education and impeccable manners, Proust cultivated a polished persona that gave him easy entrée to exclusive places, and in those places he tolerated vehement antisemites. Proust certainly disapproved of Bloch, the character he invented, but I suspect that he put a little of himself in Bloch.

             Swann, of course, is a much more substantial figure. Indeed, he is arguably the most complex and attractive character in the novel. Readers will recall that in the first pages of In Search of Lost Time, it is Swann as the dinner-guest of the Narrator's parents who distracts maman from going upstairs to her child desperately awaiting a goodnight kiss, although we do not immediately realize his identity. For a while in the novel we do not even know that Swann is a Jew, and I think this is a shrewd strategic choice on the part of the writer. Unlike Bloch, Swann is never suggested to have hidden his origins, but he does nothing to make people conscious of his ethnic identity. This is not a matter of pretense or disguise. Swann is what he appears to be — a perfect gentleman, a poised socialite, a person of exquisite taste (one recalls the care he takes to go from florist to florist in order to assemble the perfect bouquet for the hosts who have invited him to dinner). He is also a loyal friend, something by no means true of everyone in this world.

     Swann's one major slip is to fall in love with Odette, a young woman not as refined as he is and far from chaste. She is, as the Narrator tells us more than once, not really his type. This paradox derives from Proust's always interesting assumption that the psychology of love is often quirky, unpredictable, contradictory. Proust's hapless love for his chauffeur, Alfred Agostinelli, who was obese and uncultivated when they met and who never much cared for his extravagantly indulgent benefactor, is a case in point from the writer's own life.

             It is the Dreyfus Affair that flips our perception of Swann's Jewish identity. It may also have flipped something in Proust. The trumped-up charge that Dreyfus, a Jewish army captain, was guilty of treason in passing military secrets to the Germans became a litmus test for where one stood in French society. The accusation was widely believed; Dreyfus was convicted in 1894, stripped of his rank, and sent to imprisonment on Devil's Island. A retrial five years later resulted in another conviction. Only in 1906 was he exonerated and his military appointment restored with a promotion to major, after the real culprit had been exposed and fled the country. The false accusation, however, gained considerable credence and was embraced by staunch Catholics, conservatives, aristocrats, and also by some of the leading artists of the period (Degas, Cézanne, Renoir). The readiness of large numbers of the French to embrace a blatant and bigoted falsehood has one sad explanation: there was a widespread suspicion, even among many whom one would have thought better informed, that the Jews had never been altogether French. Their indelible foreign character thus made it plausible that one of them would be prepared to betray the vital interests of the nation to a foreign power. In Proust's novel, after Swann has declared himself a Dreyfusard, it occurs to the Duc de Guermantes that, though he had always thought of Swann as a Frenchman, he had been mistaken.

             Proust himself became a defender of Dreyfus, even attending sessions of the Zola trial. His character Swann emulates him as a Dreyfusard. The aristocratic Guermantes, on the other hand, with just one exception, were anti-Dreyfusards, and they were prepared to discard their friendship of many years with Swann, now seeing him as a Jew. It even strikes them that Swann has been revealed to look like a Jew. The last physical description offered of him in the novel is shocking. He appears at the kind of elegant social gathering he has always frequented, but now he is wasted by old age and disease:

     his nose, for so long reabsorbed into a pleasing face, now seemed enormous, tumid, crimson, more of an old Hebrew than an inquisitive Valois [The Valois were a French royal line.]. Perhaps in any case, in recent days, the race had caused the physical type characteristic of it to reappear more pronouncedly in him, at the same time as a sense of moral solidarity with the other Jews, a solidarity that Swann seemed to have neglected throughout his life, but which the grafting, one onto the other, of a mortal illness, the Dreyfus Affair and anti-Semitic propaganda, had reawakened.

     When Proust wrote these lines, he could not have known that he himself would be perceived by many in his photographic death-mask as a Hebrew prophet. For the purposes of his novel he was drawing a polemical antithesis: Bloch's Jewish nose has receded through a kind of illusionist's trick while Swann's, hitherto barely noticed, has emerged in the uncompromising authenticity of his impending mortality.

             A complement of sorts to this last image of Swann become a Jew occurs in the famous moment in which the Duchesse de Guermantes can spare no attention for her dying friend as she prepares to depart for a ball because she is entirely preoccupied with finding her red shoes to wear for the occasion. Proust’s point in showing her in this light is clearly to illustrate how for women like her social vanity takes precedence over the mere human obligation of kindness and concern for a friend in extremis. One suspects, nevertheless, that her discovery through the Dreyfus Affair that the dying Swann is, after all, merely a Jew may have encouraged her to ignore Swann in his ultimate hour of distress.

             Proust's ability to show how someone may change through time, both in regard to identity and in what happens to one's physical presence, is a signature aspect of his greatness as a novelist. It is a corollary of the long duration that he has devised for his novel, and one finds few parallels to it in other novels. One surely cannot detect any sign in this of those putative Jewish sources for his writing, the Talmud and the Zohar. Two rare precursors may be the stories of Jacob and David in the Hebrew Bible, but though these are texts that Proust knew, there is no evidence that he paid any attention to scriptural precedents.

             So it is hardly helpful to think of Proust as a Jewish writer, much as some have sought to do so. He evinces no distinctive cast of mind, no special mode of writing and thinking, that can plausibly be attributed to the Jewish side of his family background. He writes about Jews in his novel because they were a visible part of the world that he set out to represent. In this limited respect, he does not differ from Philip Roth, who always objected to being labeled a Jewish novelist, even if the presence of Jews in Roth's fiction is far more prominent than in Proust's. As the son of a Jewish mother, Proust was acutely attuned to the precarious location of Jews in French society in his time, and he represents this fraught condition with penetrating understanding. If his Jewish background enters at all into the achievement of his novel, it is that, as a person pulled between two forces, something that perhaps he would not himself have admitted, he proved to be an unusually keen observer of both the French and the Jewish sides of the tension. A writer needs to stand a little on the outside to see things with the greatest clarity.

             The general lesson to be drawn from this strangely persistent controversy over Proust’s identity as a writer is that there is nothing particularly mysterious or unique about the Jews. Granted, they are a people that has persisted in history over many long centuries, during which they produced remarkable cultural and spiritual achievements. But the Jews are like everybody else, and not even more so. To think that they possess some magical esoteric heritage that is manifested in the creative work of many writers of Jewish extraction is in the end foolish, and sometimes racist. Proust is pre-eminently a French writer whose literary lineage includes La Rochefoucauld, the philosophes, Stendhal, and Flaubert. Inevitably, he made use of what he knew from his mother’s world and from his sometimes uneasy negotiation as his mother’s son with the French world beyond it, but then most writers make use of whatever is part of their familiar experience. Proust provides no inspiration for Jewish identity, and why should he?

     

     What is a Statesman?

     We yearn for great leaders, but we seem to resist them when they come along. This is a paradox inherent in democracies: a tension between the demand for liberty, equality, and self-reliance among citizens and the continuing need for leadership amid the unruliness of an open society. We vacillate between power and drift, between embracing strong leaders and endorsing a kind of leaderless rule. Our confusion about statesmanship arises partly because we have lost the language in which to understand the term.

    The term statesman has an old, even an antiquarian ring about it. Herbert Storing, a great historian of the American founding, once noted that there seems something almost “un-American” about the word. While politicians pay lip service to the concept, for the most part the term is regarded as outmoded, elitist, and vaguely anti-democratic. Harry Truman once joked that a statesman is just a politician who has been dead for ten or fifteen years. 

    Yet it is hard to deny that today we are experiencing a dearth of statesmanship. With the exception of Volodymyr Zelensky, bless him, who is doing a stirring impression of Winston Churchill, statesmen are in short supply. Our current moment has certainly witnessed a renaissance of authoritarian figures — Putin, Xi, Modi, Bolsonaro, Orban, Trump — but none of these seem to qualify as statesmen. What is a statesman, and how do we know one when we see one?

     The confusion about the concept is due in part to its unavoidably normative character. Isn't one person's statesman another's demagogue? Historians are often wedded to a kind of social determinism that regards the statesman as an agent of powerful classes, interests, and social forces which he or she may only dimly understand. Political scientists, who feel at home only in the world of big data that can be quantified and analyzed by mathematical methods, contribute to the flattening out of experience. But it is impossible to study political phenomena without evaluating them. If we are unable to distinguish a magnanimous statesman from a humble mediocrity from an insane imposter, we will be unable to understand anything about politics.

     Like much of our political vocabulary, the concept of the statesman is of ancient origin. It is a translation of the Greek word politikos. Plato devoted an entire dialogue to this concept, although his most famous discussion of the statesman occurs in the Republic, where he asked what kind of knowledge a statesman had to possess. His answer was that the politikos was required to be a philosopher-king, someone who blended a high degree of intellectual excellence or expertise with the skillful management of public affairs. Many people disagreed with Plato's answers — most notably Aristotle, his student — but his question is the one we have been grappling with ever since.

    The understanding of statesmanship has been compromised by two tendencies fostered by modern democracy. The first view conceives the statesman as a technocrat, someone who is guided by scientific experts and who is then able to apply this knowledge to the various problems deemed to be plaguing society. This kind of statecraft is rightly called “progressive” because it regards progress in politics as dependent upon advances in scientific (and social scientific) knowledge. According to this view, as scientific knowledge increases so, too, does our ability to apply its insights to the most pressing issues of society, whether these are hunger, disease, poverty, inequality, or climate change. Social problems are regarded here as largely technical in nature and politics is seen as a form of policy science.

    The idea that politics is reducible to policy is at the core of what is sometimes referred to as the administrative state. This view was imperishably expressed in Alexander Pope’s couplet:

    For forms of government let fools contest;

     Whate'er is best administer'd is best.

    On this account, politics can be reduced to a form of problem-solving not unlike that encountered in the worlds of science, technology, business, and other aspects of a modern capitalistic economy. The claim that we frequently hear from public officials that they are just “following the science” is a perfect illustration of this approach. Politics becomes a matter of implementing the insights of scientists, medical professionals, and other policy experts. We do not necessarily expect our leaders to be experts, but we expect them to follow the advice of experts, certainly in arcane areas such as public health and monetary policy. 

     The second misunderstanding confuses the statesman with the populist leader. As William Galston has argued, populism and democracy are to some degree inseparable. Every democratic leader claims to have the mandate of the people even if he holds power by only the slimmest majority, and sometimes not even by a majority at all. The populist leader was best characterized by Max Weber with the term "charisma." In his renowned essay "Politics as a Vocation," he invoked this term to distinguish the charismatic leader from the party politician. The charismatic leader is someone who claims to stand above special interests and party loyalty and to speak directly to the people, and who can serve as their voice. In modern American politics, Woodrow Wilson was the greatest representative of this viewpoint.

    The test of the charismatic leader is the claim to authenticity, that he speaks for the people. But how does one measure the authenticity of a leader? How does one distinguish the charismatic leader from the demagogue, the mountebank, and the fraud? How does one distinguish the true prophet from the false prophet? This is one of the oldest questions in the history of human affairs. Weber provides no acid test for charisma. There are no fixed principles, no program for action. There is only the personality of the leader. As Machiavelli said about the charismatic preacher Savonarola, he lost authority only when the people ceased to believe in him. Charisma is very much in the eye of the beholder. 

     The problem with charismatic politics is its almost complete lack of content. In recent American history, Barack ("Yes We Can") Obama and Donald ("Make America Great Again") Trump have been regarded as charismatic leaders, but George H. W. Bush and Joe Biden have not. Yet a leader's charismatic properties have no bearing on the quality of his or her governance. Charisma is value-free: it can be used for good or evil. It is a means, not an end, except when it becomes an end in the form of personalist dictatorship. It is a kind of political mesmerism. There are no fixed principles of action beyond a certain theatrical gift and a demand for authenticity. In charismatic leadership, the message is the leader himself. Underlying this is a complete absence of, even contempt for, constraints on power. For this reason, charismatic leadership is often a recipe for extremism and violence. Weber regarded the charismatic leader as a remedy for the problems of political gridlock and stalemate, but there is only a short step from the populist leader to the Duce and the Führer. Charisma may not be incompatible with democracy, but it is dangerous to democracy.

     To start at the beginning, statesmanship is about the care and oversight of states. It presupposes the bounded political units — call them states or nation-states — that have been the basis of international order ever since the Peace of Westphalia. The leaders of empires — Cyrus, Alexander, Napoleon — no matter how gifted they were in other respects, were not statesmen but conquerors and military despots. The same is true for those who exhibit leadership skills in business, university administration, and criminal enterprises. They may possess expertise but not political know-how. Statecraft fundamentally concerns securing the conditions of political legitimacy.

    There are three kinds of statesmen I wish to consider: founders, preservers, and reformers. 

    The greatest statesmen in history are political founders, lawgivers, responsible for introducing “new modes and orders.” These are inevitably revolutionary figures, people like Machiavelli’s “new prince” who promise freedom and redemption from an oppressive political order. Founding statesmen have no authority other than their own words and deeds to justify them. It is their capacity to mobilize and to shape opinion that is the basis of their legitimacy.

    Political founders come in all shapes and sizes, from mythic figures such as Moses, Lycurgus, and Romulus to Oliver Cromwell, George Washington, Lenin, and Mao. I would also add less well-known figures such as Atatürk in Turkey, Bismarck in Germany, Ben-Gurion in Israel, Sun Yat-sen in Taiwan, Nkrumah in Ghana, and Lee Kuan Yew in Singapore, as further examples of this creative type. These are all the “fathers of the Constitution” who create the frameworks within which later and lesser statesmen can handle changing situations.

    The study of political foundings is exciting, as the never-ending flow of books, movies, and musicals about the American founders illustrates. Political founders typically try to set up the widest possible gap between themselves and the old regimes that they are attempting to overthrow. They wish to represent a rupture. In France in the 1790s, they renamed streets, remade the calendar, abolished historical provinces, reformed the language, and created new religious cults. William Wordsworth famously recalled his own enthusiasm at the outbreak of the French Revolution, 

    Bliss was it in that dawn to be alive,

    But to be young was very heaven!

    Books such as Hannah Arendt’s On Revolution and J.G.A. Pocock’s The Machiavellian Moment celebrate revolutionary beginnings as the only times of truly creative political action. Such revolutionary moments seem almost to be breaks in political time, as Stephen Skowronek has argued. 

     But while it is easy to romanticize revolutionary moments in history, it is just as easy to forget precisely how tenuous and dangerous such moments are and how easily things can turn dark. Consider how quickly the Arab Spring turned into the Arab Winter, as exhilarating hopes for democracy foundered on the shoals of political reality. Even at their best, founding moments introduce a principle of disruption into political life that, once started, cannot easily be stopped. As Aristotle warned, the habit of disobeying law, even a bad law, tends to make people altogether lawless. Revolution is not a bus that you can get off at will.

             In 1838, in his speech on "The Perpetuation of Our Political Institutions," Lincoln warned against the dangers of the "towering genius" in politics. The American founders were men who staked their all on creating free institutions. But their nobility and their sincerity of democratic purpose do not preclude the possibility that later generations will produce Alexanders, Caesars, and Napoleons of their own. Such men will not rest content with perpetuating what has been established; they will seek new fields of glory as a testament to their own ambition and love of fame, and often this will involve the repeal of the accomplishments of those who preceded them. "Towering genius disdains a beaten path," Lincoln warned. "It thirsts and burns for distinction and, if possible, it will have it, whether at the expense of emancipating slaves or enslaving freemen." How to channel this overreaching ambition remains a permanent challenge for any theory of statesmanship. What is a durable constitutional republic, after all, if not a beaten path?

     If the political founder is the rarest type of statesman, the most familiar type is the preserver, who works within an established set of laws and institutions to maintain the coherence of a tradition. Preservers are typically conservatives such as Walpole, Burke, Adams, and Disraeli, who are responsible for maintaining the social fabric, often after periods of war or social upheaval. They see themselves as custodians of continuity. Preservation is the policy of adjusting old traditions to fit new situations. Like the "Parting Hour" described by George Crabbe,

    The links that bind those various deeds are seen,

    And no mysterious void is left between.

     Preservation may seem an unambitious role in comparison with founding, but it is the mode of political action fitted to most occasions. A founding that is not followed by preservation is doomed. Few will ever have the chance to construct a political order de novo, but many have the opportunity to shape and secure the polities that they already inhabit. Preservation is distinctly non-charismatic in tone. It must present its innovations as derived from traditions, law, and institutions in order to give its methods an air of legitimacy.

     The classic study of statecraft as the restoration of order is Henry Kissinger's A World Restored, which analyzed the part played by two great European diplomats — Metternich and Castlereagh — in the restoration of the balance of power after the Napoleonic wars. Other notable restorationists were Konrad Adenauer, who helped to restore the moral and political dignity of Germany after a time of war and dictatorship, Angela Merkel, who did so much to restore a sense of German unity after the fall of communism, and Margaret Thatcher, who restored a sense of order and stability in Britain after a period of strikes and moral decay.

     The best description of this form of statecraft was provided by the English Whig George Savile, Marquis of Halifax. In 1688, in his classic essay "The Character of a Trimmer," Halifax defended a policy of what might be called principled inconsistency. The first goal of the statesman, he argued, is to keep the ship of state afloat. The true statesman — or trimmer: he was not using the word pejoratively — must be prepared to shift his position, moving from one side of the boat to the other in order to correct its list. If this offends the demand for adherence to principle, so much the worse for principle. While such a policy might be condemned as opportunism or flip-flopping, Halifax argued that such prudent flexibility was the essence of political wisdom.

    This image of the ship of state was repurposed by the English conservative philosopher Michael Oakeshott in his lecture on “Political Education” given at the London School of Economics in 1951. For a people who had only recently emerged victorious from a world war, Oakeshott offered only words of caution, describing himself as a skeptic “who would do better if only he knew how.” Rather than speaking about justice, liberty, or equality — the standard fare of political philosophy — he insisted that politics consists of “the pursuit of intimations,” by which he meant acting in a way that is most likely to retain or to restore the coherence of a tradition. “In political activity,” Oakeshott told his listeners, “men sail a boundless and bottomless sea; there is neither harbor for shelter nor floor for anchorage, neither starting-place nor destination. The enterprise is to keep afloat on an even keel.” 

    These words capture concisely the image of the statesman as preserver, not the larger-than-life personalities of a Churchill or a de Gaulle who led their countries through times of crisis, but the George Marshalls and George Kennans charged with preserving peace and stability. Preservers are typically not futurist visionaries possessed of a grand strategy and a singular sense of purpose; they are more often diplomats and parliamentarians accustomed to working the back rooms and the corridors of power. In Machiavelli’s metaphor, they tend to be foxes, not lions. Their goal is not to create justice on earth, but to establish legitimacy where this means the maintenance of stability and authority. 

     Finally, the third type of statesman is the reformer, who regards statecraft as a means of effecting moral and political change. Reformers are often, but not always, outsiders to politics who agitate for change through popular protest and acts of civil disobedience. This may be because the path of ordinary political participation is closed to them, as was the case with the abolitionists and suffragettes of the nineteenth and early twentieth centuries, or because it may seem to them that agitation and protest are the only ways of effecting meaningful change, as Black Lives Matter advocates today seem to believe. In either case, this outsider status often gives reformers a richer sense of possibilities that may not occur to those who have spent their lives operating inside the corridors of power.

     The classic expression of this kind of protest politics was Thoreau's essay "Civil Disobedience," which argued that any law that violates the sacred right of conscience has no claim on our loyalty. For Thoreau, this line was crossed with the American war with Mexico and the annexation of Texas, which he considered little more than a land grab. His appeal to conscience has had a powerful hold on our moral imagination, and has been an inspiration to generations of people worldwide, but there is a problem. This particular brand of conscience politics is essentially empty. Conscience politics has inspired everything from the abolitionist movement, to protest against the Vietnam war, to the refusal of the Kentucky county clerk to issue licenses for gay marriages. Are all expressions of conscience equally valuable? Is radicalism a mark of truth? How do we know when appeals to conscience are sincere expressions of a person's deeply held moral and religious beliefs and when they are just a mask for prejudice and self-delusion? What happens when it is hard to draw the line between beliefs and prejudices? One person's voice of conscience may be another person's hypocrisy.

    The task of the reformer may be the most difficult of all, because she must know how to split the difference between revolution and restoration. The question posed by the reformer is not “all or nothing” but “how much or how little.” Exemplary reformers have been men such as Mikhail Gorbachev in the former Soviet Union trying to navigate the transition from communism to democracy, Deng Xiaoping opening China after the disasters of Mao’s Cultural Revolution, and Nelson Mandela and F. W. de Klerk in South Africa working to bring an end to apartheid. The interesting feature of leaders such as Gorbachev, Deng, and de Klerk is that they were once insiders who ended up advocating for change and leading their countries, at least temporarily in the case of Russia and China, into a more hopeful democratic future.

             The best kinds of political reformers are those who manage to combine loyalty to founding principles or ancient traditions with agitation and critique. They are examples of what Michael Walzer has called "connected critics," because their standards for reform do not come from some private voice of conscience or some putatively universal principle of natural right, but from an appeal to the very standards of justice espoused by the systems they criticize. Examples of connected critics are Camus but not Sartre, Orwell but not Lenin, Gandhi but not Fanon, Martin Luther King but not Malcolm X. This is an idea from which many social justice advocates today would benefit.

    Statesmanship, as I noted earlier, rests on knowledge, but what kind of knowledge? Is statecraft a science, like mechanics or engineering, that can be codified in rules and then learned, memorized, and put into practice, as Machiavelli seems to have believed? Or is it more like an art that can only be mastered through practice and experience, and that requires the capacities of insight, imagination, and intuition, like having an ear for music or a knack for languages? I want to consider statecraft as more of an art than a science, based on the mastery of three essential skills. 

    The first is the statesman’s role as teacher. The statesman is not simply a problem-solver in possession of technical expertise or a tribune of the people’s will, but an educator who is able to shape a vision of the political regime. By a regime I mean not just a form of government but an entire way of life, what gives a people’s collective life a sense of wholeness and meaning. Each regime type creates in turn a different range of human possibilities. The differences between regime types form the basis of our ability to distinguish between the various politically relevant ways of life.

    As educator-in-chief, the statesman must always be aware that opinion is the medium of society. As Hume ringingly observed, “It is on opinion only that government is founded.” By opinion he did not mean the kind of information elicited through polling data or focus groups, but a structured set of sentiments, habits, and beliefs that shapes a people’s character and way of life. Without a settled body of opinion, no government, not even the most authoritarian, could last a single day. No one understood the importance of opinion better than Lincoln. “With public sentiment,” he wrote, “nothing can fail; without it nothing can succeed.” He then went on to add: “Consequently, he who molds public sentiment goes deeper than he who enacts statutes or pronounces decisions.”

             The statesman’s art consists, then, in the ability to educate the public mind by helping to form its beliefs and opinions. In the American case, these opinions are rooted in our founding texts — the Declaration of Independence and the Constitution — as well as the immense superstructure of laws, interpretations, and rulings that have been built upon them. These texts have in turn shaped our fundamental experiences of right and wrong, of who should rule and who should be ruled, of who governs and why. These structures of opinion are what make future change possible.

     A second feature of statecraft is the capacity for communication. In the phrase frequently applied to Ronald Reagan, the statesman must be a "great communicator." This is what used to be known as the art of rhetoric. Rhetoric is especially important in democracies, where statecraft consists in the ability to communicate with fellow citizens whose views are decisive in politics and to some extent in governance. It is not enough for a statesman to craft a vision for society; she must be able to harness it in language and persuade others of its power and beauty. As Edward R. Murrow said of Churchill, "He mobilized the English language and sent it into battle." This is what the greatest leaders have been able to do, namely, to immortalize in words and images what a people stand for, what they believe in, and what they look up to.

     More than any other regime, democracy gives to speech a pride of place in determining the legitimacy of policy. Unlike autocracies that are governed from above, democracies require a continual flow of communication not only from the top down but from the bottom up. Democratic politics has for this reason rightly been called "logo-centric" or talk-centered, because most of what goes on takes place through the medium of language, whether in legislative assemblies, jury rooms, courts of law, newspapers, or, increasingly, on the internet. Mill said that democracy is government by discussion.

     To be sure, the importance of speech can be vastly overstated. The ideal of a rhetorical democracy — parrhêsia, in the Greek sense of speaking boldly — is at the core of what Jürgen Habermas has called an "ideal speech situation." Habermas apparently regards politics as a vast public seminar that should be adjudicated by the neutral standards of "public reason," in which only the force of the better argument decides policy outcomes. There are two significant problems with this view. First, it holds public deliberation to a high standard of language and thought that is rarely realized in existing democracies and their media. Consider only what passes for public deliberation in contemporary America: our debased discourse does not exactly rise to the standard of parrhêsia. Second, this view too quickly ignores the more coercive and disciplinary aspects of politics. Democracy is about public deliberation, but it is also about authority, command, and decision. A country, or a legislature, or a court, is not a seminar. The reduction of politics to speech is at the root of what the ancients called sophistry.

    The dependence of democracy on speech or rhetoric can be both a strength and a weakness. This focus on speech may be a source of frustration to those who demand swift, decisive, and concerted action. But deliberation is also necessary for providing a sense of legitimacy for public decisions. As Pericles said of Athenian democracy in his Funeral Oration, “instead of looking on discussion as a stumbling-block in the way of action, we think it an indispensable preliminary to any wise action at all.” 

     The third characteristic of the statesman is political judgment. Aristotle named this capacity phronesis to indicate the sphere of prudence or practical reason that he regarded as the political virtue par excellence. Judgment is the form of reasoning appropriate to citizens situated in juries, legislative assemblies, and deliberative bodies of all sorts. It is knowledge of the fitting or the appropriate thing to do under the circumstances. He associated it with the man — and for Aristotle it is always a man — who possesses the skills necessary to manage well the affairs of the political community.

     The knowledge of the statesman differs from both the theoretical knowledge of the philosopher and the technical expertise of the specialist. While philosophical knowledge aims at the true or the universal — the ideal regime, the idea of justice — and technical knowledge at the mastery of rules, political judgment is necessarily local and improvisational. The relation of judgment to circumstance is essential for successful statecraft. "Circumstances," Burke wrote, "give in reality to every political principle its distinguishing color and discriminating effect." In politics, circumstance is everything.

    No one has thought more deeply about the role of political judgment than Isaiah Berlin. Berlin was especially interested in what distinguishes successful statesmen from philosophical geniuses and why, by implication, the latter often appear politically foolish. Albert Einstein may have been a brilliant theoretical physicist, but his reflections on world peace seem almost touchingly naïve. Bertrand Russell was a brilliant logician, but his writings on marriage, religion, and war display an alarming indifference to the complexities of political reality. What Berlin deemed essential for the statesman was what he called “a sense of reality.” This meant not merely possessing more facts or information about society but having an almost intuitive grasp of its texture, both its constraints and its potentials.

    Judgment in politics is the ability to see possibilities that had not previously been seen or imagined. It is not a matter of knowing more but of seeing further than others. It is the capacity that we associate not with the scientist who uncovers laws and uniformities, but with the creative artist, the poet, the novelist, and the playwright, who seek patterns and connections between colors, shapes, characters, and words. Judgment is almost an aesthetic perception, something like the ability to see pattern and coherence in a painting or work of art where others see only chaos and confusion. It is not just a quality of the mind but involves the entire personality, the unique temperament, of the individual. 

             Good judgment consists not least in the ability to improvise on the spot. Like a musician creating unfamiliar riffs on the familiar chords of a jazz standard, the person of judgment works within an established idiom while expanding upon and developing the possibilities that are latent within it. It is a kind of analytical imagination. It is the same skill possessed by the master chef who is able to create new combinations of tastes from a familiar palette of choices. Judgment is not the mechanical application of a rule or a fixed standard to changing circumstances — something like the demand for strict consistency — but the ability creatively to adapt rules to new and unforeseen situations and master them. 

             Good judgment in politics is the quality necessary for successful statecraft. This is not to say that good judgment guarantees success. It is often difficult to say whether success is the outcome of judgment and foresight or just good luck. Sometimes even the best plan may need a little luck. The wisest historians have often considered luck — accident, happenstance, contingency — as a causal power in history. Machiavelli could speak of fortuna as a goddess who can dispense as well as withhold her favors. He claimed that even the most far-seeing statesmen could only control events half the time, leaving the other half to the vicissitudes of fate. In Guys and Dolls, Sky Masterson pleaded that “luck be a lady tonight.” It is a sign of wisdom to recognize the limits of our capacities. Near the end of the Civil War, Lincoln confessed to Albert Hodges that “I claim not to have controlled events, but confess plainly that events have controlled me.”

    Great statesmen are judged not only by how they respond to success but also by how well they handle failure. Do they respond with bitterness and resentment or with a sense of magnanimity in the face of defeat? Contrast, if you will, Richard Nixon’s petulant concession speech after his failed gubernatorial run in 1962 with Al Gore’s magnanimous concession speech after the Supreme Court stopped the Florida recount in 2000. The statesman must know how to turn failure into success. FDR’s defeat for the Vice Presidency in 1920 and his subsequent struggle with polio were better training for leadership than his earlier life of privilege. Occasional setbacks are a valuable test of character. Churchill is reported to have said that the mark of success is the ability to go from failure to failure with no intervening loss of enthusiasm. This is witty, but it cannot be true. A person who has met with repeated failure, however enthusiastic, could not possibly be said to have good judgment, even if such a person meets with occasional success. As the saying goes, a stopped clock is still right twice a day.

    Judgment is, finally, the ability to respond to unforeseen situations. Preparing for the unpredictable is always the better part of judgment. To be sure, there is no one model for the exercise of judgment. It is all a matter of context and circumstance, but if there is one feature that distinguishes the successful statesman from the day-to-day politician, it is the ability to articulate the permanent and aggregate interests of the community. This requires the ability to plan not just for today or tomorrow but for the future. As Tocqueville — who has acted indirectly as the educator of many statesmen and legislators — said in the introduction to Democracy in America, his book was written not to satisfy any faction or class but to see deeper and further than the different parties. “While they are occupied with the next day,” he wrote, “I wanted to ponder the future.”

    I have said, ruefully, that the concept of statesmanship seems old-fashioned, out of touch with the times. Perhaps it is. We vacillate between the view that all politics is essentially a power struggle and the demand that our leaders meet impossibly high standards of moral perfection. We either expect too little from our public figures or too much.

    The ethics of statecraft can be summarized, I believe, in a single word: responsibility. The concept of responsibility grows out of moral and legal language. A person can be called responsible for something when she acts on her own initiative or when her actions can be regarded as the cause of some state of affairs. To be held responsible is connected with terms like causation, guilt, and accountability. Responsibility may seem to lack the grandeur of such ancient moral terms as “duty,” “conscience” and “virtue,” but it is also more suited to the politics of a democratic age. 

    This brings us back to Weber. He explored the theme of political responsibility in great depth, and regarded it as the defining characteristic of the statesman. Political life, Weber argued, is torn between two competing moralities. The first he called an ethic of conviction, which takes different guises but is typical of the moral idealist in politics. The classic form of this ethic was the Sermon on the Mount, with its injunctions to “turn the other cheek” and “resist not evil with force,” but it finds similar expression in the Kantian demand that politics must give way to morality or in the Rawlsian claim that justice is “the first virtue of social institutions.” In each case, politics is regarded as morality by other means. “The believer in an ethic of ultimate ends,” Weber wrote, “feels ‘responsible’ only for seeing to it that the flame of pure intentions is not squelched.” 

    Weber tended to associate conviction ethics with the Christian pacifism and revolutionary socialism of the World War I generation, but it is in fact a category of belief and sentiment that has manifested itself throughout history in various times and places. It is the attitude of the moralist who identifies a particular evil — slavery, war, exploitation, injustice — and demands its eradication, not tomorrow, not next year, but today, here and now, immediately, regardless of the cost. A sense of indignation, no matter how well-meaning, then gives rise to the demand for moral action, and soon after issues in fanaticism and violence. “Those, for example, who have just preached ‘love against violence’ now call for the use of force for the last violent deed,” the conviction ethicist proclaims, in Weber’s words, “which would then lead to a state of affairs in which all violence is annihilated.” Needless to say, the final act, like the end of history, is a condition that never arrives.

    A pure example of conviction ethics was the radical abolitionist William Lloyd Garrison, who advocated no compromise with the Constitution in his opposition to slavery. On an issue like slavery, certainly, it is important to keep alive a pure moral vision, but such visions can only be held by saints, reformers, and “intellectuals.” It took Frederick Douglass and Abraham Lincoln to understand that if slavery were to be abolished, it would not be by shredding the Constitution but by embracing it. Or consider two more recent cases: Daniel Ellsberg and Julian Assange both made public official state documents during wartime without considering how such materials would inevitably be distorted and misused. Men such as Garrison, Ellsberg, and Assange are what Raymond Aron once called “technicians of subversion,” who prefer to see their country dismembered or defeated rather than compromise the purity of their ideals. 

    At the other end of the spectrum, in Weber’s analysis, is the ethic of responsibility. This phrase has often been understood to mean a form of consequentialism, a concern with what will work and what will not, or with what is expedient over what is morally right. This description is not false, but it fails to grasp what philosophers often call “agential” responsibility. This ethic is concerned not with the purity of intentions but with the consequences of action, especially the unintended consequences that may follow from it. This is not to say that it is simply Machiavellianism by another name. Rather, the art of the statesman is concerned with the uses of power and its moral and psychological effects on those who wield this power. What, exactly, are these effects?

    First, the ethic of responsibility requires taking ownership of decisions that will invariably inflict harm upon others. Politics, as Weber warned, is always a bargain with infernal powers. There will always be situations where even the nobility of the end will be compromised by the sordidness of the means necessary to achieve it. Lincoln’s decision to prosecute a war to end slavery ended up costing over six hundred thousand lives. He never doubted the rightness of the cause, but the length and the destructiveness of the war took an immense psychological toll on him. Sometimes anguish accompanies virtue. Consider Truman’s decision to drop the atomic bomb on Hiroshima, effectively ending World War II. The decision was, in retrospect, the correct one, but it cannot help leaving a sense of disgust in its wake. Truman never second-guessed his decision, believing that in the end he saved lives, especially American lives, although the philosopher G. E. M. Anscombe subsequently attacked him as a mass murderer. Whatever Truman’s faults — he was not particularly given to moral self-reflection — his decision ended a terrible war and brought peace more swiftly than any other option. 

    Anscombe’s protest over the decision of the University of Oxford to confer an honorary degree on Truman is a clear example of the kind of conviction ethics that Weber deplored. No complex decision will be morally blameless, and those who seek a clean conscience and a pure heart should pursue their satisfactions in private life. “The safety of the morally innocent and their freedom to lead their own lives depend upon the ruler’s clear-headedness in the use of power,” the philosopher Stuart Hampshire wrote, perhaps with Anscombe in mind. I would only add that whereas we should not necessarily expect our leaders to be morally paradigmatic human beings, we should at least expect them to be attentive to the needs and the interests of their fellow citizens and call them to account when they fail in this task.

    Second, the ethic of responsibility accepts that moral conflict will always be the norm in politics. “We are placed into various life-spheres each of which is governed by different laws,” Weber wrote. Unlike the idealist convinced that all issues must be subordinated to a single cause, the responsible statesman is aware that he operates in a world of conflicting values that are qualitatively heterogeneous. There is no summum bonum that is equally good for all individuals; there is only a range of values the importance of which will be determined by circumstances, education, and personality. The thesis of “value pluralism,” which is now most associated with Isaiah Berlin, has led some critics, notably Leo Strauss, to label Weber (and Berlin) a moral relativist who accords equal legitimacy to all values, however evil, base, or insane. This is an unfortunate misreading that robs moral life of its difficulty and its pathos. The fact that our deepest commitments stand in inalterable conflict was not meant as an exhortation to extremism or to nihilism, but as a counsel of sympathy and moderation. To Barry Goldwater’s call that “extremism in the defense of liberty is no vice,” Weber would have replied that not all things are permitted even in the pursuit of a just cause. 

    Statecraft is ultimately a matter of choice, not so much between good and evil as between rival and competing goods that cannot be tidily ranked by some hierarchy of ends or derived from some first principle. There is no one value, whether it be peace, equality, freedom, justice, or rights, that always trumps all others. Rather than seeking the best, responsible statecraft seeks to avoid the worst. If we cannot expect our leaders to follow the Hippocratic Oath to “do no harm,” we should at least expect them to do as little harm as possible. In politics, as in love, a sound maxim is “you can’t always get what you want,” which means that statecraft will always involve the art of balancing conflicting ends and purposes. Deal-making and compromises are the inevitable costs of a morally diverse and politically conflicted society. And the problem of “dirty hands” remains an ever-present possibility.

    Third, an ethic of responsibility suggests responsibility to oneself. “Whoever wants to engage in politics at all,” Weber warned, “is responsible for what may become of himself.” Politics can do strange things to people. It can turn ordinary men and women into monsters. What Reinhold Niebuhr once said of religion is equally true of politics: it makes good people better and bad people worse. Only those who can approach politics with a sense of self-restraint — a feat akin to Ulysses having himself bound to the mast — are capable of responsible leadership. Responsibility requires a “sense of proportion,” the control of the passions, and a degree of detachment from friends and associates. When presidential candidate Bill Clinton told a supporter, “I feel your pain,” he was deliberately attempting to create a sense of intimacy between them, breaking down barriers of formality and restraint. Yet it is characteristic of the greatest leaders to put a sense of distance between themselves and their followers. Lincoln remained aloof even from those who knew him best. In The Edge of the Sword, de Gaulle wrote movingly of the loneliness of command.

    Finally, statecraft is an autonomous sphere of political activity, related to but underdetermined by external moral, legal, scientific, or economic principles. It must be distinguished from the narrowness of the administrator, the dogmatism of the moralist, the pedantry of the lawyer, and the zealotry of the partisan. It is a realm of its own. Statesmanship is not something for which rules — natural law, the Categorical Imperative, the greatest happiness for the greatest number, even raison d’état — can be given, precisely because statecraft requires a freedom or latitude to act as the situation requires. Strategy without flexibility is futile. Statecraft consists in the concrete decisions made under the force of circumstance. When the moment of truth arrives, when it becomes necessary to say, “Here I stand, I can do no other,” then the statesman has found his calling, and this calling will not be provided by Morality, Science, or History, or by any power other than one’s own individual judgment, which is formed by education and experience. There is no set of principles that can define in advance what is to be done in all situations, because no set of principles can control for the mutability of life. 

    Man’s yesterday may ne’er be like his morrow;     

    Naught may endure but Mutability.

    The Shadow Master

    On July 15, 1945, Rembrandt’s 339th birthday, the Rijksmuseum in Amsterdam re-opened with the most emotionally charged exhibition in its history. Called “Weerzien der Meesters,” or “Reunion with the Masters,” the show gathered one hundred and seventy-five paintings that had spent the five years of the Occupation hidden in bunkers. During those five years, private collections were looted and museums stripped of their greatest works. For all the average person knew, these treasures, like so many others, had been stolen or destroyed in the Nazi terror.

    Now they were making a triumphant return to the center of Amsterdam. From The Hague came Fabritius’s little goldfinch, Potter’s big bull, Vermeer’s pearl earring. From Haarlem came the great Hals group portraits, which were displayed alongside the Rijksmuseum’s own collection — including, of course, the nation’s famous Rembrandts. 

    In 1939, with war looming, the huge Night Watch, eleven by fourteen feet, had been taken to a castle in North Holland, where it was stored in a vault of reinforced concrete. The location proved too dangerous. In 1940, when the Germans invaded, the masterpiece was covered with a canvas borrowed from a local farmer and hastily removed to a bunker in Castricum, closer to Amsterdam: a journey of fifty kilometers that took twelve hours. At one point, when an enemy plane appeared overhead, its escorts took refuge in neighboring fields, leaving the great painting alone in the middle of the road. 

    When it finally reached its destination, its caretakers discovered that it was too large for the entrance, and they had to roll it up. Finally, in 1942, it was taken to a special storage site near Maastricht, where it was kept in a limestone quarry, thirty-three meters underground. The director of the Frans Hals Museum, Henk Baard, recalled the scene: “Through the slow progress of the silent bearers the remarkable spectacle, under its ghostly lighting, recalled a princely funeral.” 

    Now it was back. One hundred and sixty-five thousand people eventually visited the show. In a country that still lacked basic provisions, many of these visitors came on an empty stomach. All understood its promise: that past glory would bring future resurrection. “A people that can display such a parade of greatness shall reclaim its special place,” a journalist wrote. At the opening, a minister declared that the Canadians who liberated Holland “have also liberated Rembrandt and Frans Hals.” 

    More than Hals, Rembrandt needed that liberation. He needed it more, in fact, than any other Dutch artist. Hitler himself had declared him “a true Aryan and German,” and under the quisling regime he had become the focus of a bizarre cult, his birthday even replacing the exiled Queen Wilhelmina’s as the national holiday. Now this unwitting German hero could become, once again, the symbol of the dignity of a free people. 

    Light had chased out darkness. It was precisely the kind of cosmic struggle that Rembrandt had illustrated in his works, though that struggle rarely had such a clean outcome. History was recapitulating the trajectory of Rembrandt’s own evolution. If Vermeer was a painter of light, Rembrandt was a painter of dark, or more precisely, of dark commingled with light; and in his work the question of evil recurs more than in any other Dutch artist’s — so insistently that looking at his pictures is sometimes unbearable. There are more scenes of murder, cruelty, torture, rape, betrayal, malediction, and death in Rembrandt than in any other Dutch painter’s work — by far. 

    Vermeer painted no such scenes. Neither did Hals. Neither did any of the blither spirits, Avercamp or De Hooch or Jan Steen. At the very most, the landscapists and the still-life painters will allude, with a graceful symbol, to mortality, to the passing of time. Most Dutch paintings were made for the wealthy middle classes, and they show things that those people liked to see. Who among them would have wanted a picture such as The Blinding of Samson, in which a silver dagger is plunged into the protagonist’s eye? The painting is so gigantic, two by three meters, that it is hard to look away from it. It is just as hard to look at it: even Delilah can hardly contemplate her victim without a shiver. 

    Rembrandt was so prolific that even the most ambitious museum survey will never capture more than a slice of his work. Books aren’t much use, either: the images end up crammed onto the page, and that monumental quality of Rembrandt’s paintings — their patina, their glow, the sense that they give of something physical, like an extraordinary geological phenomenon — goes missing too, flattened onto smooth paper, removing the tactility, the brazenness, of his surfaces.

    The etchings and drawings, originally made on paper, fare better in reproduction. But there are hundreds of them, and even the most avid eye can only absorb so much. To try to see more than a few at a time is to be reminded that, as with Dante and Shakespeare and Bach, you cannot rush an acquaintance with Rembrandt. His work took a long time to make, after all: nearly half a century between his earliest productions and the works he made in the days before his death at sixty-three. 

    Add, to the quantity of works and media, the quantity of genres. In England, writers were comedians or tragedians or poets, but only Shakespeare was acknowledged as the greatest in every field. In Holland, Rembrandt, who was ten when Shakespeare died, worked in nearly every specialty known to Dutch art, each of which absorbed the energies — the entire lives — of his most talented contemporaries. 

    And then there is the profusion of his styles. What makes it even harder to form a coherent image of Rembrandt is that he painted in so many different styles. Many Rembrandts do not look anything like the popular idea of a Rembrandt. His early work looks so different from his later work that the early works were not even recognized as such until deep into the nineteenth century. Over the years, all sorts of ghastly paintings have been attached to his name — including some that he actually painted.

    There are still rediscoveries today, though these are nearly always of lesser works that seldom add more than a footnote to the image that has emerged from two hundred years of dogged scholarship. The lacunae in that scholarship yawn only when measured against a demand for completeness. We now know as much about Rembrandt as we know about almost any figure, artistic or otherwise, of his century. We have a large group of works. We know what they show. We know when they were painted — and, often, why and for whom. We can see that some themes interested the master only for a short period. We can see that others were there from the beginning and stayed with him until the end. Some come back in every medium, in every style, at every point in his career. 

    One such recurring theme is violence — evil — darkness.

    The spectacle of cruelty is there in his earliest signed painting, The Stoning of St. Stephen, painted in 1625, when he was nineteen. This work shows a crowd surrounding the first Christian martyr: stones held high, ready to smash him to pieces. Unlike the Samson, it is not hard to look at — it is too much of an apprentice piece to stir real emotion; but though he is not visibly joining in, it is a bit disturbing — and premonitory — to find a chubby teenaged Rembrandt among the crowd. 

    A few years later, the novice has matured into a master. Rembrandt was twenty-six when he painted The Anatomy Lesson of Dr. Nicolaes Tulp. This is a group portrait of eight men around the corpse of Aris Kindt, who had been convicted of armed robbery and executed earlier that morning. The men are wearing neat clothing, and their expressions range from technical curiosity to a keen realization that they themselves will soon be just as dead as the body that lies before them. The way they are arranged around the body is such that you, the viewer, step right up to the circle, invited to join the grisly academic proceedings. Despite the decorum of the scientific occasion and the sober expressions on the doctors’ faces, the smell of rot tickles your nostrils. 

    You could derive a positive message from this painting. You were, in fact, intended by those who commissioned it, if not by the artist, to derive uplift. The march of science! The doctor is instructing the public with lessons that demonstrate the Dutch commitment to progressive education, lessons that were commemorated with such paintings because the Dutch medical societies were prestigious and famed; Rembrandt was one of many artists invited to paint them. When, thirty years later, he returned to the theme, he showed Dr. Jan Deyman dissecting Johan Fonteyn, who had committed the crime of breaking into a draper’s shop and pulling a knife. This anatomy lesson took place on the day after his execution. Of this painting, damaged in a fire in the eighteenth century, only two central figures, a spectator and the criminal, survive. All we see of Dr. Deyman are the hands peeling back Fonteyn’s bright red brains.

    Rembrandt’s criminals have a presence that bodies in other portrayals of anatomy lessons do not. Sometimes, in these, the doctor is showing a skeleton. Frederik Ruysch, who succeeded Dr. Deyman as Amsterdam’s city anatomist and was the father of the still-life painter Rachel Ruysch, was painted with rosily blooming cadavers, like Greek nudes, on the dissecting table. He was famed for making dead bodies look alive. Rembrandt’s bodies, by contrast, are unequivocally dead — and he arranges us, like the doctors, around them. 

    We have to look at these cadavers, just as we have to look at poor Elsje Christiaens, a teenage girl who was strangled in 1664 for killing her landlady, apparently in self-defense. She was executed on the Dam, the central square from which the city takes its name (“the dam on the Amstel”). Her body was hung on a gibbet in Volewijk, across the River IJ, where it was to remain “until the winds and birds devour her.” It was there that Rembrandt saw her, strung up like a doll. He drew her twice. 

    Did any other Dutch artist show a dead body this way? A well-known image of the disemboweled De Witt brothers, murdered by a mob in 1672, comes to mind; but the painter, Jan de Baen, was undistinguished, and the picture is remembered mainly because the De Witts were among the most powerful politicians in the Netherlands. The painting, shocking and grotesque, shows a significant historical event — not the death of an obscure eighteen-year-old girl.

    So it goes with many other themes. Often enough, you can find something comparable, somewhere. There are plenty of dead animals in Dutch painting, for example, but there is no picture quite like the Still Life with Peacocks in the Rijksmuseum. Here one bird lies in a pool of its own blood. Another is hung by its feet, its mouth still agape, as if to protest its murder. It is both exquisite and excruciating: the hallmark of Rembrandt. 

    Look, too, at The Slaughtered Ox. Red as the brains of Johan Fonteyn, the dead animal hangs from a wooden beam. “Slaughtered” is not quite the right word. The French title uses écorché — flayed, skinned — and no Christ in the whole Louvre captures the pathos of sacrifice like this harrowing carcass. The painting contains no religious references. But it stirs the same feeling that the most sacred mysteries evoke. 

    Do we identify with the ox? Or with the servant girl, barely visible, looking at it? If the anatomy lessons invite us into the circle of the learned doctors, into their progressive and prosperous institutions, our eyes go nonetheless directly to the dead men at their center. The light is on them; they radiate sanctity. They are not martyrs, but they are somehow numinous. It is not an accident that Rembrandt placed the criminals in the position where the dead Christ was placed in earlier paintings — or that he crucified the ox, and Elsje Christiaens.

             These works are not overtly religious, but Rembrandt painted plenty of religious works, too. If their contents reflect the mood of the man who created them, so does their very existence, since there was so little commercial incentive to create them. In post-Reformation Holland, to the contrary, they were unfashionable to the point of career suicide. The great German art historian and curator Max Friedländer (who in 1939 had to quit Berlin for Amsterdam because he was a Jew) could credit an entire tradition to a single artist:

    A view of the whole of Dutch production in the seventeenth century tells us that, where it was animated by any receptive interest in the religious picture, this was Rembrandt’s personal achievement or was at least set in motion by him. It was Rembrandt who, from spiritual predilection, bequeathed the non-ecclesiastical religious picture to the reformed North, which was ready to only a very limited extent to accept this present with gratitude.

    Rembrandt’s “spiritual predilection” sought extremes, and ways to portray them. Bourgeois moderation was not to his taste. The rough clash of light and dark could be rendered graphically, in paint, as in The Supper at Emmaus, painted when he was twenty-two: as the resurrected Christ reveals himself to a disciple, the dark Savior is surrounded by a halo of blazing light whose hidden source lends him the majesty of a mountain. Light triumphs over darkness. 

    But not always. Often light does not have the last word. Rembrandt shows the damned as well as the saved. In Belshazzar’s Feast, the impious Babylonian king gazes in terror at the ominous writing on the wall. In Uzziah Struck With Leprosy, a Judean king is punished for profaning the temple. There is nothing picturesque about these exotic scenes of hubris humbled. They fulminate, they rage, they terrify, they denounce; and if their warnings are warnings to others, they are also, you feel, warnings to the artist himself. 

    There is no gentleness here, nothing tame or easily digested. We are in the presence of an Old Testament prophet — he painted many — who, we sense, was well acquainted with the extremes that he depicted. The biographical evidence bears this feeling out. Though Rembrandt became a kind of secular saint in the nineteenth century, twentieth-century researchers discovered that he was, in Gary Schwartz’s words, a “cocktail of litigiousness, untrustworthiness, recalcitrance, mendacity, arrogance, and vindictiveness.” It turned out that Rembrandt’s contemporaries had almost nothing nice to say about him. No artist of his time could boast the number of disagreeable incidents that peppered his life. He didn’t pay his bills; he was tactless and rude and prickish; he was cruel to his mistress. “To sum it up bluntly,” Schwartz writes, not uncontroversially, “Rembrandt had a nasty disposition and an untrustworthy character.”

    In a list of appalling incidents, Rembrandt’s treatment of Geertge Dircx stands out. When his wife, Saskia, died at the age of thirty in 1642, she left him a nine-month-old son, Titus. Rembrandt hired Geertge to take care of Titus. He and Geertge became lovers, and they were together for six years. Geertge intended to marry the widower, and she claimed that he had promised to do so — until he began a relationship with his housekeeper, Hendrickje Stoffels. Lawsuits ensued. Rembrandt promised Geertge alimony. In the meantime he collected unflattering testimony about her, which he used to have her committed to the Gouda spinhuis, an atrocious institution for women who had “fallen” for a long list of reasons, from prostitution to insanity. She was desperate to get out. Rembrandt was desperate to keep her there. After five years, she was released, and died soon thereafter. 

    A long-ago dispute between embittered former lovers can be read in any number of ways. In books and films, Geertge has been portrayed as a conniving, gold-digging temptress — and, more recently, as a victim of a man’s determination to get her out of the way. If it weren’t for all the rest of the abundant evidence of his rebarbative personality, we might, in this case, be more inclined to give Rembrandt the benefit of the doubt. 

    Why, in any case, should we care? Surely other painters were obnoxious in ways that history has hidden. For all we know, Adriaen Coorte liked little girls, and Jacob van Ruisdael cheated on his taxes: when names fade and personalities fall away, only an artist’s work — and then usually only a portion of it — remains for us. We do not wonder whether the painter of an Egyptian fresco was likeable. 

    Four hundred years later, we wouldn’t wonder about Rembrandt either — except that the conflicting accounts do bother us. The reason is that we love him. We know him so well, after all. We can see him: his work, and also him, since no artist ever exposed himself as repeatedly and as nakedly. From the adolescent among St. Stephen’s tormentors to the valediction that Jean Genet described as “a sun-dried placenta,” around eighty of his self-portraits survive. The number is astronomical. We have no idea what most painters look like, but except for his childhood we can see Rembrandt at every phase. From the proud young man to the imperious genius to the wrecked patriarch, we can watch his life pass before us; and when, in a museum, we come across him in a new guise, we greet him as an old friend. We know him so well. Is this despite his darkness, or because of it? 

    The darkness is not, in any case, a secret. Even today, when personal confession has become a painterly genre of its own, it is hard to think of an artist who revealed himself so remorselessly. We see him as we see Aris Kindt or Johan Fonteyn or Elsje Christiaens. The difference is that, if the ox was flayed by the butcher and Aris Kindt was dissected by the doctors, Rembrandt did this to himself. In the light of this destiny, do earthly transgressions matter?

    “The West too has known a time when there was no electricity, gas, or petroleum, and yet so far as I know the West has never been disposed to delight in shadows,” the Japanese novelist Junichiro Tanizaki wrote in 1933. He described traditional lacquerware that “was finished in black, brown, or red colors built up of countless layers of darkness, the inevitable product of the darkness in which life was lived.”

    Was Tanizaki familiar with Caravaggio, whose tenebroso style, which used darkness to make light shine all the more radiantly, Rembrandt perfected and transcended? (In contrast to much Japanese painting, which dispensed entirely with light and shade.) In the West, the age without electricity had not quite ended when Tanizaki wrote those words. It stretched into living memory: the Mauritshuis, where The Anatomy Lesson of Dr. Tulp hangs, did not acquire electric light until 1950. How did these paintings look when our grandparents saw them in “the darkness in which life was lived” — and how did they change when first seen under artificial light? Nobody I know of recorded their impressions. 

    Did the Japanese really assign a moral value to darkness? The temptation to such symbolism is universal. Perhaps Tanizaki was a romantic, but at the very least — as you feel that Vermeer’s light has a meaning that exceeds the visual requirement to illuminate — it is possible that Rembrandt’s darkness has a role akin to the one that Tanizaki describes: “Our ancestors presently came to discover beauty in shadows, ultimately to guide shadows towards beauty’s ends.” 

    Yet Rembrandt’s contemporaries did not always consider his shadows beautiful. In his lifetime and beyond, they were often criticized as no more than a murky waste of space. Many great Rembrandt portraits are indeed little more than heads, and sometimes hands, peering out of the gloom, or floating in it, and if we imagine them in pre-electric rooms under the moody skies of Holland, we have to imagine them even darker.

    Look at the late portrait of Margaretha de Geer, from 1661. In the alert and penetrating eyes, and in the right hand that grips a handkerchief, and in the left hand that resolutely holds on to the arm of her chair, as if to launch her at the viewer, you can see her great power. She is one of the richest women in Europe — yet she is very old, and her face, served on a bright round millstone collar as on a platter, seems about to dissolve into the darkness that surrounds her. 

    For all the startling physicality of their surfaces, there is a ghostliness to these portraits, an immateriality, including those that Rembrandt made of himself, that makes them more haunting than any other art of their time. If The Blinding of Samson is awful to look at, it is so theatrical that it troubles us less than Margaretha. The picture was designed to hang in her children’s house. One shudders to imagine them walking past it at night, the matriarch illuminated by flickering candles.

    Eventually the critical tide turned. Rembrandt’s darkness acquired a positive value, coming in time to form a crucial part of his myth as a forerunner of the nineteenth-century Parisian bohemian. Especially the tenebrous late works — those final utterances of the discarded old genius, scorned by those who once had courted him, rotting in his cheap lodgings on the Rozengracht — were equated with spiritual profundity.

    Rembrandt’s darkness was viewed so positively that his works were even darkened artificially. The varnish that protects paintings needs to be replaced every fifty or so years, before it decays and darkens; but sometimes, at the insistence of curators, it was deliberately left on, so as not to lighten the work. Deep into the twentieth century, some restorers added pigments to new varnish to darken the pictures. The practice was not restricted to Rembrandt. “A good painting, like a good fiddle, should be brown,” wrote the painter and patron Sir George Beaumont in the nineteenth century. This brownness, known as “gallery tone,” may have seemed appropriate to objects prized, among other qualities, for their antiquity; perhaps here is an echo of the “beauty in shadows” that Tanizaki did not believe existed in the West. 

    In unlit galleries, covered with decaying varnish, how shadowy these paintings must have been! Did the darkness make Rembrandt more mysterious, more de profundis — or did it make him illegible? If darkening seems inappropriate from a scientific perspective, it does not strike us as inappropriate for Rembrandt the way it would for another painter. You wouldn’t darken an Avercamp or a Metsu — much less a Vermeer, whose genius lies in the uncanny suffusion of light in space. 

    But Rembrandt is, after all, dark. Yet his darkness does not always have a negative implication. In his several renderings of the apocryphal story of Tobit, for example, he shows the old man whom God has blinded in order to test his faith. While their son Tobias seeks a cure — this turns out to be the entrails of a monstrous fish — Tobit and his wife Anna stay home, patient and impoverished: resigned to their lot, firm in their faith. In Anna and the Blind Tobit, the old man sits in a ramshackle room, his face turned from a light he cannot see. Anna uses a ray from the window to wind wool on a frame; but most of the room, and most of the painting, is dark. The mood is of humility, not of expectation. We know that Tobit will be cured, but he himself has no such knowledge. His reward is unseen, and unforeseen. Faith — darkness — faith in darkness — is all he has.

    In her preface to The Passion According to G.H., Clarice Lispector warned that the book should only be read by “those who know that the approach, of whatever it may be, is done gradually and painstakingly — passing through even the opposite of what it’s going to approach.” The phrase applies to Tobit — and to Rembrandt too, since his darkness, even in his bleakest paintings, always contains an admixture of light. 

    The master was a moralist. He was a sensualist, too. The early works reveal a love of splendor — ostentation, even — and the paintings often contain a glint of gold. This was more than a color, or a taste. In the dark rooms where these paintings hung, it had a practical purpose (“the extravagant use of gold,” Tanizaki wrote, “gleams forth from out of the darkness and reflects the lamplight”) and a representational one. And in their way they reflected the shiny prosperity of the society in which they were painted. 

    In the self-portraits, as life takes its toll on the cocky young man, the light that had illuminated his figures from the outside begins to move inside. The background darkens, the atmosphere turns into mist — and the figures glow, like the fierce eyes of Margaretha de Geer, with something otherworldly. The artist becomes sadder and older — and grander, more imposing, more profound, the inner light shining all the more intensely — because of the approaching dark.

    As Rembrandt ages, the light in his paintings takes on an added luster, and with it an added meaning. It reveals a view of the world as a contest between light and dark that is, at heart, religious: of the world as a theater in which good and evil are intertwined, and in which good only occasionally triumphs. Sometimes, as for Belshazzar, crime meets its just punishment. Often, as for Samson, it does not. Does Rembrandt think it matters? “He didn’t care about being nice or mean, surly or patient, grasping or generous,” wrote Genet of the late Rembrandt: “He didn’t have to be anything more than an eye and a hand.” Now life has taken everything from him. All that matters is his art — and so, “with dirty fingernails,” the erstwhile lover of gold is now shuffling “from the bed to the easel, from the easel to the shitter.” He is beyond good and evil, or trapped in their mixture, living with light and with dark, beyond perfect clarity.

    A conflicted personality has been reconciled. The artistic and the spiritual are no longer in conflict; and in his last months Rembrandt returned, after thirty years, to the parable of the prodigal son. This is the story, from the Gospel of Luke, of two brothers. One stays home faithfully with his father while the other squanders his fortune carousing with whores. Eventually, forced to work as a swineherd, he envies the pigs. The theme had occupied Rembrandt since his youth. In the mid-1630s, he painted himself and his wife Saskia in a tavern scene, The Prodigal Son in the Brothel. There is a peacock pie on the table, and Rembrandt’s golden sword pokes out at the viewer. It is not conventional for a painter to portray himself as a wastrel, or his wife as a hooker; but this, the painting declares, is not a man interested in convention.

    Thirty years later, Saskia was dead. Titus was dead. Geertge and Hendrickje were dead. He himself would follow soon; but before he went, he painted the story one more time. This time he chose another moment: the return of the prodigal son, when he comes back, humbled and repentant, causing his father to rejoice. The father dresses him richly, “putting a ring on his hand, and shoes on his feet,” and orders the fatted calf killed. “Lo, these many years do I serve thee, neither transgressed I at any time thy commandment: and yet thou never gavest me a kid,” the virtuous son protests. “But as soon as this thy son was come, which hath devoured thy living with harlots, thou hast killed for him the fatted calf.” But as Christ explains, one repentant sinner causes more joy in heaven than “ninety and nine just persons, which need no repentance.”

    Rembrandt, who painted so many Hebrew scenes of vengeance and sacrilege, now paints this epitome of Christianity, a scene of forgiveness and redemption and love that, even by the master’s own standards, is stately and symphonic: the ragged, pathetic son kneeling before his old father, who gazes at him through half-open eyes. Though they are surrounded by darkness, the light is upon them.

    Without passing through darkness — through the opposite of what he meant to approach — the son could never come into the light of the father. Light cannot exist without darkness, nor virtue without sin. They are intertwined in every life. The electricity coursing between these magnetic poles — between Dr. Tulp and Aris Kindt, between Delilah and Samson, between the butcher and the ox — was the very subject of Rembrandt’s art.

    Forced to a Smile

             An epitaph — the short inscription on a tombstone — normally names the person buried there, praises his admirable qualities, and then hopes for a benevolent future after death. The gravestone may speak to the viewer in the dead person’s voice (as Coleridge imitates the Latin Siste, viator: “Stop, Christian passer-by, stop, child of God! / O, lift one thought in prayer for S. T. C.”) or it may speak as a mourner addressing the buried person (as in the Latin Sit terra tibi levis, “May the earth lie light upon you”). In his Essay on Epitaphs, written a few years after William Cowper’s birth, Dr. Johnson restricts epitaphs to “heroes and wise men” deserving of praise: “We find no people acquainted with the use of letters that omitted to grace the tombs of their heroes and wise men with panegyrical inscriptions.” The readers of “Epitaph on a Hare” by William Cowper (pronounced “Cooper”) would have expected just those qualities in any epitaph: it would celebrate a male either wise or heroic, and its praise would be public and formal. (The Greek roots of “panegyric” mean “an assembly of all the people.”)

             Against such prescriptive forms, the only obligation for an ambitious poet writing an epitaph is to be original. The form becomes memorable by dispensing with or altering conventional moves: Yeats brusquely repudiates Coleridge’s Christian “Stop, passer-by,” in his own succinct self-epitaph: “Cast a cold eye / On life, on death. / Horseman, pass by!” Keats, dying in his twenties, refused the first, indispensable element of an epitaph, a name, and wanted only “Here lies one whose name is writ in water.” 

             As soon as animals became domestic pets, they could become the subject of an epitaph; Byron wrote a long epitaph on his dog, and had it inscribed on a large tombstone. (On the grounds of at least one of the colleges at Cambridge, there is a cemetery for pets of the dons which includes inscribed tombstones and small sculptured monuments.) Nowadays, in a practice that would have scandalized the pious of past eras, newspaper death notices in the United States commonly include, among the named survivors, domestic pets. The subject of Cowper’s epitaph is not domesticated, but wild — “a wild Jack hare” — not a hero, not a human being, hardly even a pet, but one nonetheless named and distinguished from its fellow hares.

             The most original epitaph for a pet in English literature, Cowper’s “Epitaph on a Hare” is a poem utterly dependent on charm. Poets writing on death have traditionally preferred to create either a somber “philosophical” meditation (on time, regret, the afterlife, and so on) or a direct expression of personal grief. By contrast, charm in lyric requires a complex management of tone: it can be neither single-mindedly earnest nor single-mindedly sorrowful, nor can it be unconscious of its hearers. It is a social utterance. It needs a stylized attitude of wistfulness and irony, a blending of the impersonal with the personal, of the independent mind with the troubled heart, and above all, it requires an evident awareness of itself and its listeners. 

             In real life, charm is almost as rare as exceptional beauty: beauty is Fate’s gift, but charm is a quality of personality and behavior. And charm is always remarked upon with a lightness of tone; it concerns something small, not sublime or heroic. The praise of charm is always tinged with pathos, charm being such a transient quality. Yeats, reflecting in “Memory” on the women he had loved (if imperfectly) over a long life, comments on the relative rarity of loveliness and charm among those women: “One had a lovely face / And two or three had charm.” But neither loveliness nor charm could transfix him for life, as had the wild beauty of Maud Gonne’s presence:

    One had a lovely face,

    And two or three had charm,

    But charm and face were in vain,

    Because the mountain grass

    Cannot forget the form

    Where the mountain hare has lain.

    That his love for Gonne was a quality of the flesh is stipulated by Yeats’s finishing this little poem with an unignorable match of the botanical and the animal: the mountain grass cannot forget the “form” (the impression left in it) by the couched mountain hare. Grass-bed and hare belong to each other not because of any human kinship of “mind” or “soul,” but because (Yeats’s repeated noun tells us) both are denizens of the mountain, grass and flesh born of the same territory. 

             Yeats chooses a formal rhyme scheme for his poem on unforgettable beauty, but his slightly unsettling scheme does not employ the familiar couplet or quatrain; instead, it is a freestanding sestet, abcabc. And its slant rhymes are at first uncertain: does “grass” indeed rhyme with “face”? Will “form” eventually rhyme with “charm”? Only at the sixth line, where “lain” emphatically rhymes with “vain,” is the scheme fully intelligible. So unprecedented, so confusing, is heroic beauty that an unsettled air must hover over the lines until the conclusive arrival at “lain.” 

             In Yeats’s “Memory,” charm is somewhat bewildering, a possession of only “two or three” in an erotic lifetime; it comes etymologically from the Latin carmen, “song,” and is related to “incantation.” It has magic power, it lays a spell, it is alluring, it overcomes resistance, it “pleases greatly” (according to my dictionary). On the other hand, unlike striking beauty, charm has to be ascribed to something relatively approachable, of a domestic size, like the “charm” on a “charm bracelet.” It never claims too much; it can never be theatrical. And something about it is odd, as Robert Herrick knew: it is odd to be sexually “bewitched” by something which is rationally off-putting (distracting, neglectful, careless) but psychically fascinating, since it intimates a “wantonness” within: 

    A sweet disorder in the dress

    Kindles in clothes a wantonness;

    A lawn about the shoulders thrown

    Into a fine distraction;

    An erring lace, which here and there

    Enthrals the crimson stomacher;

    A cuff neglectful, and thereby

    Ribands to flow confusedly;

    A winning wave, deserving note,

    In the tempestuous petticoat;

    A careless shoe-string, in whose tie

    I see a wild civility:

    Do more bewitch me, than when art

    Is too precise in every part.

             Our contemporary master of charm in verse was James Merrill, who, at sixty-two, dared to close his eight-sonnet sequence on opera, “Matinées,” with a version of the naive note of thanks (made into halting verse) that he had sent, at the age of twelve, to his mother’s friend who had invited him to join her at the Metropolitan Opera for Das Rheingold. Miraculously, the note has mutated into a childishly “awkward” sonnet (following on seven sonnets of symphonic eloquence):

    Dear Mrs. Livingston,

    I want to say that I am still in a daze

    From yesterday afternoon.

    I will treasure the experience always — 

     

    My very first Grand Opera! It was very

    Thoughtful of you to invite

    Me and am so sorry

    That I was late, and for my coughing fit.

     

    I play my record of the Overture

    Over and over. I pretend

    I am still sitting in the theater.

     

    I also wrote a poem which my Mother

    Says I should copy out and send.

    Ever gratefully, Your little friend . . .

    The “little friend” is still shaky on prosody, while proud of his rhymes. And by replicating, mistakes and all, the perfect rapture he expressed at twelve, Merrill demonstrates with witty charm that he is as susceptible now as then to the effect of the rising of the curtain on the music of the Rhine maidens, “Nobody believing, everybody thrilled.” The charm also lies in his decision to let his youthful mistake stand: Das Rheingold has a Prelude but no “Overture.”

             Some usual elements of poetic “charm” in lyric, then, are a slightly perplexing initial effect, unconventional choices (of topic, of addressee), a wayward use of genre, ironic sidelights, and a playful spirit. They all meet in William Cowper’s surprising epitaph-poem. 

    Seeing an elegiac commemoration of “a wild Jack hare,” we wonder how such an epitaph came to be composed, and why it is so moving. Its success arises from the double self-awareness of the poet; he is fully conscious of his own actual grief and equally conscious of the unconventional and comic way in which he is speaking. Above all, he expects his readers to share his amusement at the mixed language that he must invent for such an unlikely subject, without losing sight of the exigencies that call forth its parodic features.           

             William Cowper, who was born in 1731 and died in 1800, was the son of an English clergyman. After a beatific episode in which he felt close to, and loved by, God, he fell into a lifelong despairing conviction that he was predestined to be damned, eternally unredeemable. He was hospitalized for months after a suicide attempt, and was never able to practice the law for which he had been trained. Retreating from professional life, but with a small inheritance, he took up residence with Morley Unwin, a clergyman friend, and his wife and child; and when the clergyman died, he continued to live with the compassionate wife, Mary Unwin, who devoted herself to him and was his chief human comfort during his recurrent periods of insanity. 

    Over time, in his saner periods, Cowper became the author of many essayistic pentameter poems that range from peaceful descriptions of pastoral life to outspoken denunciations of colonial slavery. But he also wrote trenchant introspective lyrics, of which the most famous is “The Castaway,” a “posthumous” past-tense description of his own death, comparing it to the fate of a sailor who fell overboard and could not be saved. Recalling Jesus’ calming of the waves of Galilee with “Peace, be still,” Cowper says bitterly that he and the doomed sailor had no such resource, none:

    No voice divine the storm allayed,

    No light propitious shone;

    When, snatched from all effectual aid,

    We perished, each alone:

    But I beneath a rougher sea,

    And whelmed in deeper gulfs than he.

    The devastating effect of “We perished, each alone” is outdone by Cowper’s two-line tragic footnote, a trapdoor to a worse hell than the sailor’s: a “rougher” and “deeper” fate lies in religious despair than in bodily death.

             Cowper’s mother died when he was six, and five of his siblings died in infancy. As an adult — unmarried, childless, profoundly melancholy, suicidal, on several occasions wretchedly confined for insanity — Cowper must have been one of the loneliest poets of our language. Isolated at the house in Olney that he shared in his adult life with Mary Unwin, he built wooden cages in which he kept as pets first a single hare, which he received as a gift, and eventually three wild male hares. They spent the day in the garden, and at evening Cowper would admit them to the parlor, tenderly watching them play together in his presence. He wrote an essay-letter for The Gentleman’s Magazine describing them — “Puss, Tiney, and Bess” (all males) — and revealing, though reticently, the extent to which they benefited him during his anguished depressions. He perceived, he confessed, “that in the management of such an animal, and in the attempt to tame it, I should find just that sort of employment which my case required.” 

    Cowper nursed his hares when they were ill, carried them about in his arms, and dutifully took to obeying their wishes, studying their disparate temperaments. Puss, as he explained to readers of his magazine piece, was grateful to him for the care he showed, but “Not so Tiney. . . if, after his recovery I took the liberty to stroke him, he would grunt, strike with his fore feet, spring forward and bite. He was, however, very entertaining in his way, even his surliness was matter of mirth.” Bess was “a hare of great humour and drollery,” and became tame “from the beginning.” Cowper’s letter describes dispassionately the hares’ diet and their seasonal preferences (“During the winter, when vegetables are not to be got, I mingled their mess [i.e. meal] of bread with shreds of carrot,” and so on). Throughout the essay, Cowper endeavors to persuade his reader that hares are the most appealing of animals: the “sportsman,” hunting not for food but merely to kill, “little knows what amiable creatures he persecutes, of what gratitude they are capable, how cheerful they are in their spirits, what enjoyment they have of life.”

Besides this reminiscent essay and his “Epitaph on a Hare,” Cowper added, to keep Tiney alive in memory, a Latin epitaph in prose: “Epitaphium Alterum” (“Another Epitaph”). Like the English poem, it begins with the conventional “Hic jacet,” “Here lies,” and repeats the conventional address to the passer-by, but it still divagates from the classic human epitaph in celebrating Tiney’s lucky life, sheltered by his owner from both human predators and the unkindness of nature: “No huntsman’s hound, no leaden ball, no snare, no drenching downpour, brought about his end.” The epitaph closes unconventionally, too, as the mourner unexpectedly assimilates his own death to Tiney’s: “Yet he is dead— / And I too shall die”: “Tamen mortuus est— / Et moriar ego.”

    So, flanking the verse “Epitaph on a Hare,” we find the detailed gentlemanly letter and the Latin epitaph, both in prose, each more public than the poem; and it is against such relatively impersonal documents that the “Epitaph on a Hare” shines in its humor and its sadness. Almost every stanza contains a surprise. In the first, we are introduced to the mysteriously protected life of an unnamed wild, not domestic, animal; in the second, we encounter the initially withheld pet-name (which “should” have immediately followed the “Here lies”) and also the reversal of the usual superlatives (not “noblest” but “surliest”); in the third, the mounting list of the hare’s doings, climaxing not with a heroic or saintly action but rather with the doubly stressed comic end-words, “would bite.” The mourner has been obscured, too; his relation to the hare is given only meagerly in the third stanza, with the unrevealing phrase “my hand.”

These strange and deviant beginnings are, as I say, surprising in themselves, but the great triumph of the poem comes in its next four stanzas, the ones on Tiney’s diet and behavior. It takes a bit of time for us to understand that Cowper is parodying the doting diction of a young mother, who assumes, in her maternal fondness, that her interlocutor-bystander is as interested as she in her baby’s important dietary preferences and daily amusements. Translated to our contemporary moment, the young mother would be earnestly explaining her endeavors to feed her baby the choicest of items and expressing her chagrin when a store has run out of a favored ingredient: “Jimmy really adores the Gerber mixed berries, but there wasn’t a single jar on the shelf, and I was worried, but I did find the cereal and the applesauce that he usually has for breakfast, and some favorite vegetables, puréed peas and squash. And then I found a new mix, too, with chicken in it, that he was willing to try when I gave it to him for dinner.” The bystander hopes that this is the end of the recital, but no, now it is her Jimmy’s behavior — how much he clings to his stuffed animals, especially the pet elephant, and how vigorously he pedals in his little swing. Nor does she stop there, but advances to her baby’s preferred time of day and his response to a change in the weather: “You know, when everything settles down after dinner, he’s much more playful, and then, when a storm is coming, he senses it and gets really excited.” By this time the bystander is backing away.

             Cowper parodies the dilated intimacy of the mother’s discourse with much amusement, listening to himself. The interminable list of foods, and the owner’s anxiety if something cannot be found, spill out on the page in an excessive inventory of ten items. Difficulties yield to happy solutions as Cowper continues to imitate “maternal” anxiety (“and then, if I lacked thistles, I’d find lettuce for him”). We are made to feel the wild hare’s joy as he “regales” on his special provender. (The Oxford English Dictionary cites John Adams in 1771, resolving to make a pool with clear water, so that “the Cattle, and Hogs, and Ducks may regale themselves here.”) As the named foods become more adjectivally specific — “twigs of hawthorn,” “pippins’ russet peel,” “juicy salads,” “sliced carrot” — the owner’s extravagant affection mounts. The list ends with the unconcealed triumph of the owner over seasonal scarcity, as he succeeds in substituting alternate foods for scarce ones. Has there ever been a more absurd climax than the proud victory of Tiney’s owner announcing that “when his juicy salads failed, /Sliced carrot pleased him well”? And has there ever been a public epitaph that listed the epicurean delights of a lovingly chosen cuisine for an ungainly pet?

             Cowper is a past master of tone and detail. Not only can we hear the tone in which each detail is given, we are even prompted to intuit tones that must have preceded the present ones. We can infer the owner’s anticipatory devotion in slicing up all those carrots, reflecting how pleased Tiney will be as he approaches his dish. And Cowper is also a master of diction, knowing just how to join Tiney in his “gambols” by releasing a coarser language: Tiney “loved to . . . swing his rump around.” The anatomical phrase brings a farmer’s speech hovering into view.

             The owner of the hares mimics his own worry about Tiney’s aging by slipping directly into Tiney’s very mind, imagining him counting down his years and months of self-indulgent life:

    Eight years and five round-rolling moons

    He thus saw steal away,

    Dozing out all his idle noons,

    And every night at play.

    The poet’s worry was warranted; Tiney died at nine. And here Cowper at last reveals why Tiney is allowed into his house. It is the poet’s first-person confession that makes the whole poem grow in stature and grace:

    I kept him for his humor’s sake,

    For he would oft beguile

    My heart of thoughts that made it ache,

    And force me to a smile.

The anxious diet-procurement, the seasonal schedule of feeding, the protection from predators, the nightly play — these indeed “beguiled” the poet, as they beguile the epitaph itself, until aching thoughts and a forced smile expose the death’s head of the poet’s suffering being. Between the separated words “heart” and “ache” lie the terrible fears and the hopelessness in which the poet lives. Those two monosyllabic lines — like the fatal “deeper” and “rougher” comparatives of “The Castaway” — intensify the atmosphere to an acute register of pain. That intensity then casts a piercing backlight on the whole epitaph: back over the startling characteristics in “surliest” and “would bite”; over the foolish fondness of “juicy salads” and “sliced carrot”; over the aesthetic appreciation of the contrast between the hare’s skips and gambols and the heartier pleasure when he would “swing his rump around”; and over the poet’s “beguiled” observation of the hare’s vicissitudes of response to the weather. The watching, the devotion, the feeding, the cherishing — all the instances of care — are then decoded, in hindsight, by the reader as daily evidence of the aching thoughts and the rare smiles. The unsettling strobe-effect (charm/sorrow, beguilement/ache, play/loneliness) persists in every rereading. The flicker between comedy and heartache is the chief resource of Cowper’s charm.

But there are many others: the genuineness of Cowper’s loss flickers between the solemn epitaphic frame (from “Here lies” to the ecclesiastical “long, last home”) and his elation at Tiney’s animal liveliness, between “Here lies” and “would bite.” We are charmed not only by the opening’s proprietorial boast (that Tiney was successfully spared, by his assiduous owner, the ritual danger of the morning hunt) but also by the closing view of the hare’s affection for his two precariously remaining companions. Finally, we are touched by the way Cowper’s past-tense narrative presses forward to amalgamate itself into the “now” and the “this” of the imminent moment of parting. We are made to feel the gap between the poet’s relish in his pets and the implication (explicit in the alternate Latin epitaph) of the poet’s own death in the closing word, “grave.”

Cowper’s means are simple: he offers a poem composed largely of monosyllabic lines cast into the familiar form of the ballad stanza, with rarely disturbed iambic rhythms. And it all appears to lead to a “Christian” pathos as Tiney “in snug concealment laid” consciously “waits” for “Puss” to keep him company in the grave. Yet once again, as in “The Castaway,” Cowper adjusts the end of the poem to a darker note: Puss feels his irrevocable destiny in “the shocks from which no care can save” and knows he will eventually “partake” (take up space) in Tiney’s grave. All communication then ends, between owner and hares and among the hares themselves, as a long silence — of the shocks, of the grave — closes the poem.

             Lest charm and humor wane in a poem so mixing the two with mourning, the harsher edges of life and expression must be framed in a “softer” vision, through which nonetheless — if the poem is to ring true — the death’s head must be glimpsed. Others have elegized their pets with playful fondness and appreciation, those natural emotions on losing a companion, but Cowper’s many sophisticated and whimsical tones and tableaux of mourning — for himself as well as Tiney — make his epitaph a deeper commemoration. 

Is charm still exerted in poetry? I have found it recently not only in Merrill but also in A.R. Ammons’ no-holds-barred final book, unceremoniously titled Bosh and Flapdoodle. The poems, written in old age and illness, combine self-mockery and a basso continuo of fear. Ammons calls them “prosetry.” At first I didn’t know what to make of some of them, their slangy and farcical impudence routing Ammons’ general inclination to serious poetry of science and nature. The charm of these “last words” is, as usual, bewildering to the reader. Incomprehensibly and grandly, one poem flaunts the title “America,” even though its titular scene — the entire country — seems attached to the minor geriatric problem of dieting. Eventually, the second part of the poem enables another view: America is both personal — when you are chastised into dieting — and grand in landscape and weather when you delete personal annoyances in favor of casting your glance more widely. At the close of the poem, which I omit here, the charm lies in the weird separability, and ultimate twinning, of the two points of view: individual and cosmic.

    The aging Ammons (in the implied narrative of the first part) has chronically bad dietary habits, and his doctor, wanting him to reform, sends him to a dietician. The poem opens on the poet’s “counseling” session with the dietician. Ammons chooses to charm us here by jolting us from voice to voice: one is the voice of the severe dietician, recommending unattractive diet items (and reproving disobedient choices); the second is the voice of the adult poet satirically rephrasing the unwelcome advice; and the third is the undersong of the resentful sotto voce id of the patient, who defensively luxuriates in asides as he solicits the memory of appetizing items of past meals, and slips in, at the end of the diet-poem, a resolve to transgress with “an occasional piece of chocolate-chocolate cake.” I have sorted out the voices here, but imagine what it feels like to read “America” fresh off the page, realizing that the title means, for part one, that everyone in the country is endlessly attempting counseling and self-discipline in eating, and endlessly falling back into appetite: 

    Eat anything: but hardly any: calories are

    calories: olive oil, chocolate, nuts, raisins

     

     — but don’t be deceived about carbohydrates

    and fruits: eat enough and they will make you

     

    as slick as butter (or really excellent cheese,

    say, parmesan, how delightful); but you may

     

    eat as much of nothing as you please, believe

    me: iceberg lettuce, celery stalks, sugarless

     

    bran (watch carrots; they quickly turn to sugar):

    you cannot get away with anything:

     

    eat it and it is in you: so don’t eat it: &

    don’t think you can eat it and wear it off

     

    running or climbing: refuse the peanut butter 

    and sunflower butter and you can sit on your

     

    butt all day and lose weight: down a few

    ounces of heavyweight ice cream and

     

    sweat your balls (if pertaining) off for hrs 

    to no, I say, no avail: so, eat lots of

     

    nothing but little of anything: an occasional 

    piece of chocolate-chocolate cake will be all

     

    right, why worry:

The serve-and-return pattern of contradictory voicing parodies the counseling session by allowing the things the patient cannot in fact say aloud to rise to the surface. We hear not only his irritation at the attempted control by the dietician, but also his wistful glances back to the delights of parmesan cheese. The smallness of the occasion, the pathos of the geriatric plight, the defiant humor, the fluctuations of tone, the awareness of a reader of unknown gender — “sweat your balls (if pertaining) off” — and the witty play with e-mail brevity (“hrs”) are all characteristic of charm, in Ammons as in Merrill and Cowper. Trifling with genre always delights the poet: whether Cowper is upending the epitaph, or Merrill is inventing a child’s thank-you sonnet, or Ammons is parodying patronizing advice, the poet’s self-awareness together with his awareness of an audience makes for a gaily sympathetic and sophisticated performance.

But why is the title of the poem “America”? The first answer, the comic one, the poet would say, is because this is what all America (myself included) is doing — dieting while resenting dieting. But the second answer, the sublime one, arises from the last seven lines of “America,” as the declining poet turns his gaze from the indignities of age to the grandeur of the American landscape. In the landscape he finds an impersonal reassurance in “disaster renewal,” the cosmic self-repair of the natural seasons. Satiric “charm” falls away, replaced by awe at the natural resurrections of Spring.

             “America,” with its two contrasting parts, shows that the spell of charm need not be maintained throughout a poem. But the advantage of lyric charm is its capacity to relieve the unreality of an unmixed high seriousness. Instead, one sees oneself as an unimportant speck in an indifferent, if exciting, universe, finding a point of self-regard more independent than earnestness, one not omitting comic truth. Ammons is unsparing on the fact of cosmic indifference; Merrill demonstrates how a more ironic vision has replaced, in adulthood, the naive sweetness of childhood; and Cowper, like our later poets, does not obscure either the ravages of time or the power of sympathy. Cowper ranges through so many tones and tableaux while mourning his beloved hares that the poem seems not a pet-elegy, but rather a human one. As we follow its exquisite variations on charm and grief, classical reminiscence and personal hardship, we are instructed how three improbable pets, more than two centuries ago, could force a despairing poet to a smile.

     

    For the Birds (Strictly)

Strictly for the birds. – Holden Caulfield

     

    Easy to think of what’s different,

    what’s broken or chastened

    somehow

     

    now that I’ve lived longer than

    my father ever did. No

    nightlights back then,

     

    for example, those steady little

    stars we plant and grow about

    the house now

     

    like nightflowers to make us less

    afraid. Just the moonlight then

    dreaming its way

     

    inside the open window, the

    body of light lying like a

    hologram across the kitchen

     

    floor, like some sleeping hobo,

    some vagrant vagrant, who’ll be

    sure to be gone

     

    in the morning. And the feeder

    outside, barely visible, too early

    for the birds, hanging so long

     

    and still, like the last Apache

    executed at dawn at Fort Yuma,

    Arizona in 1912

     

    before World Wars began, like

    the fasces ax and olive branch

    on the Mercury Dime,

     

    the one Wallace Stevens loved.

    Perhaps you were afraid too in

    that darker darkness

     

    and could have used a little

    light, something to hold on to

    before the dawn,

     

    some tiny votive burning just for

    the birds when everything

    seemed crazy, or

     

    strictly for the birds, as you always

    said. Maybe you told them all

    that they were safe

     

    and still alive, not dead, that

    soon enough it would be time

    to go to work, to sing.

    Before a Fall

    Pride comes before a fall, Solomon says, but any fool knows that’s not

    true. 

    Take Jesus, for example, or Gump Jaworski, who did a double half

    gainer 

and most of a triple salchow on his last day of working for Gutters ‘R’

    Us 

    (“Gutter Problems? Gutter Call Us!”) when he fell off a company

    ladder 

    trying to steal a case of Budweiser tall boys from an open third floor

    window 

    of the Riverdale Co-op back in the day, and who would’ve gotten

    himself 

    a decent settlement if he’d had any disability insurance to speak of, 

    but he didn’t. 

    Come to think of it, it was the case of beer that landed first, just

    before he did, 

    right on top of it, breaking every bone in his head, and most every

    long-necked 

    bottle inside the case that wasn’t broken already, a feat he took no

    pride in 

    whatsoever, nor should he ever, though he bragged sometimes

    long after the fall 

    through his ill-fitting, whistling teeth that all the way down he had

    never let go 

    of the case. Or take Charley Pride, who sang so easy and let it all go

    with every song 

    he ever sang, who never fell at all as far as we know, and deserved all the pride 

    he ever felt in his life, singing “All I Have to Offer You is Me” the way

    he did, 

    even selling more records than Elvis for a while, a thing to be

    Tennessee proud 

    of there for sure.  There’s proof for all this from the natural world 

    if you want 

    it, and all the animal kingdoms too, the way they say lions come in

    prides, 

    but you can’t tell me the last time you’ve seen one of them take a fall, 

    let alone any pride in it, people laughing and spitting like hyenas all 

    the time. 

    And puffed-up Mr. D? John Donne told him straight up not to be

    proud, 

    but he’s always strutting, moving along, the country around him like 

    a building 

    collapsing, imploding on itself. See him taking selfies on the Capitol

    steps, 

    proud boy, proud as hell, filled with rage, with graveyard joy, unweaning pride, not before, 

    but after a fall. 

    The Safe Bet

    They say Lady Godiva put everything she

    had on a horse, 

    but what if the wager had grown from

    speculating whether 

    everything on earth is always growing

    steadily, incrementally, 

    or whether things are inevitably falling apart? 

    The safe bet 

    would be the latter, of course, the smart call.

    You’d have 

    gravity on your side, that wormy apple hitting

    feckless Newton 

    smack on the skull every time. There’d be

    9/11, the Falling Man, 

    the icy Titanic, Trump, Q, and each driverless,

    non-fungible Tesla to boot. 

     

    But National Geographic reports that Mount

    Everest actually grew two feet 

    last year. Tenzing and Norgay would’ve just

    fallen short. Today they’d be 

    leaping like slow motion Tik Tok NBA

    ballers trying to hang on the rim 

    of the moon. And those Oregon settlers

    buried side by side in hastily dug 

    graves two hundred years ago just worked

    their way to the surface after 

    all this time, some Farmer Brown’s boy’s dog

    sniffing at the rain-soaked 

    gray cannon balls of their skulls, their ribs

    curved up like little cathedrals.  

     

    It’s as if everything wants more open sky,

    more canopy over our heads, 

    to make room for all of what’s rising, all our

    loneliness growing greater 

    every night, as if the earth itself is a seed

    stuck in Whitman’s muddy 

    boots, as if the moon coming up over that

    ruddy fence is the face 

    of the child we’ve loved and have lost, as if

    that’s who we all 

    should be out looking for, betting the house

    every time. 

    Priorism, or the Joshua Katz Affair

    Teach your tongue to say: I do not know, lest you be duped.

    Talmud Berachot 4a

The phrase “Joshua Katz,” as it is ground down and churned out by the national rumor mill, refers not to one character but to many. He is a conniving fiend; a wronged and saintly genius; a bitter man who has responded terribly to genuine mistreatment; the perpetrator of abuse; the victim of abuse; a valorous defender of independent thought; a sad sack manipulated by a powerful puppeteer named Robert George; a befuddled but well-meaning and brilliant professor; and so forth. It took me several months to notice that all of these Katzes refer to the same man, and still longer to recognize that the name, as used in public discourse, is not a name at all but a rallying cry. The rumors that are think-pieced about Katz do not reflect any serious empirical consideration of what exactly unfolded at Princeton in the summer of 2022, though that is their purported subject — but of course that is not what they are intended to do. His name is a speech act, a token, a shorthand, a move in a game. How someone invokes “Joshua Katz” depends entirely on where that individual stands on trends that have little directly to do with the man. Ignorance is a primary fuel of opinion.

    Joshua Katz, a classicist, made tenure at one of the most prestigious universities in America when he was just thirty-six years old. That is not why I know his name, though it is among the reasons that the implosion of his academic life was an affair of national significance. (Our country’s pathological obsession with the glitteriest members of the Ivy League — provincial ecosystems that bear little resemblance to anything beyond their hallowed walls — is among our more embarrassing fixations.) Eighteen years after he received Princeton’s President’s Award for Distinguished Teaching, and fifteen years after he made tenure, Katz was ruthlessly fired!, or he was canceled!, or he was justly punished!, depending on which team you play for and how invested you are in your membership in the league.

Katz is among the many citizens whose private catastrophes have been seized upon and treated as something like a theatrical drama in which certain breeds of nauseatingly political Americans assume their customary positions and rehearse their familiar scripts. Scavenging the relevant search engines and piecing together a timeline of the Katz affair after the fever has broken has been a fruitful, if bizarre, anthropological project. At a distance, the earnest hysteria and sanctimonious outrage of all the opiners seem not only ridiculous but also hollow, as if none of these pontificators really cared about this particular drama, except as an opportunity to model the Right (or Left) View of it.

    The name that I give to this style of participation in public debates is priorism, because it comes with a handy framework, an a priori intellectual and even cognitive filter, into which each successive news cycle or morsel of cultural gossip is smoothly fitted. Priorism is a brutish substitute for interpretation; unlike priorism, responsible interpretation awaits facts, considers developments, and suspends judgment for the duration of inquiry while it resists the impulse to extrapolate wildly from bits and pieces. The primary objective of interpretation is to yield understanding, whereas priorism yields only a comforting sense of belonging and a hackish confirmation of an established worldview. Evidence that contradicts its framework is simply ignored or discarded or mocked, and in this way priorists are never thrown into crisis. Theirs is a phony kind of certainty. They, or at least the clever ones among them, are not exactly liars. They tell selective truths, edited accounts, absorbing what is useful and strong-arming it into their system. The spirit that moves even their true opinions is not the spirit of truthfulness but of conformity. Priorism, no matter of which ideological variety, offers its members the armor of a sympathetic, validating community. They are never discomfited, they are never alone, they are only ever affirmed. This, incidentally, is why priorism has these days become a promising career path.

    The national theatrical production called “Katz,” like the ones that preceded it, is, among other things, tedious, no matter the pitch in which the lines are recited, because we have all heard all this before. And further, the more familiar the opinion, and the closer it clings to the script, the warmer its reception: community, and its cheap praise, is guaranteed. The primary mode of its expression is regurgitation. (Re-tweet!) Our discourse is made up of a million platitudes, and these platitudes are repeated endlessly by the very people who purport to be, and are feted for being, our brightest. How do we manage to stay awake through each performance?

Katz does not appear to be a dazzling individual. Among the oddities of this tale is that he can command national attention at all. It is generally agreed that he has an enigmatic, rapacious, and sharp mind, and a captivating energy which sometimes obfuscates his lack of more obvious charms. If ever he possessed charisma, it is not evident now; he is, judging from the essays he has churned out about the terrors of cancellation and the cowardice of his former friends, a bitter man. (It is 2023 — we are connoisseurs of cancellation, and we know the difference between a dignified pariah and an embittered one.) It was surprising to learn that, before the crisis began, long before I ever heard of him, Katz basked in the adoration of the entire Princeton student body. He has a quality rating of 5/5 on ratemyprofessors.com, and 100% of students said they would take his classes again. One respondent on that site gushes that Katz was “a reason to come to Princeton.” Another effused, “Don’t graduate without taking a class from Katz. He is not only brilliant but dynamic and interesting as well… Will know each person in his 100-person lecture personally.” And another: “Possibly the coolest teacher I had through all 4 years of college.” In October 2018, when Katz was already suspended for sexual misconduct but before this fact had become common knowledge, in phase A of the scandal, the website OneClass.com ranked the top ten professors at Princeton and awarded Katz the top spot. Undergraduates used to queue in winding lines to sign up for his courses. He received more than one teaching award, and was among the professors who served as contributing columnists for the very student newspaper that would later pioneer his destruction.

    These were (some of) the facts available to the American public, and they sufficed for families gathered round their tables to engage in psychological speculation regarding the inner workings of a man they had never met. There is a certain sort of professor for whom undergraduate adoration is infinitely more intoxicating than any drug. The deprivation of this intoxicant seems to have infuriated him more than the other attendant indignities. Or: See how fickle and cruel college students can be? As soon as the torchbearers came knocking they turned on a man they had revered. And so on.

    The story of Joshua Katz revolves around the man, but it isn’t really about him — it is about us, about the cynical and insanely politicized world that we have constructed for ourselves, the kitsch that we slosh around in, the slogans that we slurp and spoon down one another’s throats. There are no heroes in the story. There aren’t any villains, either. In so far as villains are cunning, Katz doesn’t make a convincing villain. This is true despite the fact that his enemies have bent over backwards for the past three years trying to dress him up like one. (I do not mean to imply that he is not guilty of sexual misconduct. He has said himself that it is a sin for which he has repented.) This is among the reasons that he has been so enthusiastically enveloped by the right — for that set of priorists, being accused of villainy by progressives is the surest certificate of purity, just as being cast as a victim has the same effect for the opposite camp.

    It is easy enough to track the public response to the Joshua Katz affair, but the details of the story itself remain overwhelmingly mysterious, like a play within a play that the characters are reacting to though none of them has heard all the dialogue or seen all the action. As noted, this ignorance is an essential element of the story. Knowledge would spoil the fun. As far as I can gather, it is impossible for an outsider to figure out what actually transpired, and it is in all likelihood similarly impossible for an insider with protected but partial information to gauge what actually happened. Very little about this affair can be honestly asserted with confidence, but much has been confidently asserted.

     The broadest details have by now been widely reported (and selectively forgotten). In 2018, Professor Katz was disciplined for a consensual relationship with an undergraduate that occurred sometime in the mid-2000s. (That investigation began the same year Katz was supposed to serve on the “Committee of Three” or “C/3”, which is arguably the most important committee at Princeton. Serving professors help to decide, among other things, which of their peers get tenure. The fact that Katz was appointed to this powerful body speaks to the status that he enjoyed among the faculty.) An investigation into the relationship was initiated after a third party, another student with knowledge of the affair, contacted the university without the support or the consent of the woman (now graduated) with whom Katz had been entangled. She did not participate in the investigation. Based on the committee’s findings, which remain confidential, Katz was suspended for the academic year of 2018-2019. It seems that at the time little was made of his absence. There was no public outcry, and the suspension happened to fall a year after Katz had a scheduled sabbatical, so that his departure was prolonged rather than suddenly and disruptively enforced. Perhaps this experience radicalized him, or perhaps it simply coincided with political upheavals within the university that on their own shifted him rightward. Whatever the case, rightward he went.

Katz had not been entirely apolitical prior to the events with which we are presently concerned. In 2017, a year before the drama, he was a signatory to a letter penned by fifteen professors from prestigious universities which invited the freshman class of that year to resist pressure to conform politically despite the social consequences. They warned that groupthink is rampant and powerful enough that “it leads [students] to suppose that dominant views are so obviously correct that only a bigot or a crank could question them. Since no one wants to be, or be thought of, as a bigot or a crank, the easy, lazy way to proceed is simply by falling into line with campus orthodoxies. Don’t do that. Think for yourself.” Three years later, in this same spirit, on July 8, 2020, Katz published an essay that would vault him onto the national stage. It was entitled “A Declaration of Independence by a Princeton Professor” and it appeared in Quillette. His debut as a participant in the public debate was as a member of, or at least a contributor to, the anti-cancel-culture brigade. With one glaring exception, it was a more or less competent defense of reason and clear-headedness.

The “Declaration” was written in response to a letter from a large number of Princeton faculty, published on Independence Day and addressed to the president and senior administrators of the university. It put forth a suite of demands designed to combat the “Anti-Blackness” that “is foundational to America,” and was signed by over three hundred faculty members. Some of the demands were reasonable, as Katz himself states in his essay. For example: Part 4, Demand 10 insists that Princeton “fundamentally reconsider legacy admissions, which lower academic standards and perpetuate inequality.” Or, as Katz points out, “It is reasonable to ‘give new assistant professors summer move-in allowances on July 1’ and to ‘make [admissions] fee waivers transparent, easy to use, and well advertised.’ ‘Accord[ing] greater importance to service as part of annual salary reviews’ and ‘implement[ing] transparent annual reporting of demographic data on hiring, promotion, tenuring, and retention’ seem unobjectionable.” These demands were sensible and practical.

But others, as Katz goes on to point out, were ridiculous. Consider, for example, Part 1, Demand 5: “Reward the invisible work done by faculty of color with course relief and summer salary… Faculty of color hired at the junior level should be guaranteed one additional semester of sabbatical on top of the one-in-six provision.” Or Part 2, Demand 4: “Enforce repercussions (as in, no hires) for departments that show no progress in appointing faculty of color. Reject search authorization applications and offers that show no evidence of a concerted effort to assemble a diverse candidate pool.” Here, we can agree, we have left the realm of best practices and entered the netherworld of radical identity politics, though Katz claimed that such proposals would lead to civil war on campus if implemented, which seems excessive. But the ugliest of the hyperbolic indulgences in his piece was his now-infamous characterization of a Princeton student group called the Black Justice League as a “small local terrorist organization.” This, from the man who only a few years earlier signed a letter in which he joined in lamenting that groupthink had become so powerful that students reflexively assume only a bigot or a crank would oppose it.

If your fingers have been in the remote vicinity of our culture’s pulse in recent years, you will have noticed that ordinary people with a bit of common sense have devolved from independent thinkers into gang members with an axe to grind. Those who declared themselves anti-groupthink developed their own groups, which developed their own asphyxiating vernaculars and codes of conduct. Katz is a freshly minted member of such a group. He wrote recently, in Sapir, that cancellation has allowed him to see who his real friends are. I do not mean to doubt his need for friendship, but surely he must see that the basis of these new attachments is ideological. His new friends have uses for him. If he did not parrot their scripts with such gusto, they would not be so friendly.

The phrase “small local terrorist organization” is the reason “Joshua Katz” has become part of the national chatter; it is the reason that he is reviled by the left and deified by the right. Five days after his defiant essay appeared, Princeton President Christopher Eisgruber publicly condemned Katz:

While free speech permits students and faculty to make arguments that are bold, provocative, or even offensive, we all have an obligation to exercise that right responsibly… Joshua Katz has failed to do so, and I object personally and strongly to his false description of a Princeton student group as a ‘local terrorist organization.’ By ignoring the critical distinction between lawful protest and unlawful violence, Dr. Katz has unfairly disparaged members of the Black Justice League, students who protested and spoke about controversial topics but neither threatened nor committed any violent acts.

Both sides, of course, degrade “free speech” by ping-ponging it cheaply back and forth over the ideological net. That same day, in the American Conservative, Rod Dreher called Eisgruber a coward whose proper role is to “defend free speech by faculty members, not kowtow to radicals.” Dreher declared, in the conventional right-populist way, that Eisgruber’s statement makes clear “who has privilege at Princeton and who does not.” As if Katz had a right not to be disagreed with; as if the members of the administration or the faculty among whom he had just so aggressively distinguished himself had no right to respond to him. You cannot create a provocation and then complain when others are provoked; and this goes for all sides.

Here is a brief review of the most notable responses to Katz’s villainy/heroism. On July 14 the Wall Street Journal editorial board published a column praising Katz and warning that “cancel culture doesn’t need to get him fired to succeed. It succeeds by making him an outcast at his own university, and intimidating into silence others on campus who might agree.” On July 22, Michael Poliakoff, the president of the American Council of Trustees and Alumni, paid tribute to Joshua Katz for “his intellectual integrity, his heart, and his courage,” and recognized him as a “Hero of Intellectual Freedom.” On July 26, Katz published an op-ed in the Wall Street Journal titled “I survived cancellation at Princeton: it was a close call, but I won’t be investigated for criticizing a faculty ‘open letter’ signed by hundreds.” (This was the first of innumerable subsequent essays, podcasts, and talks given by Katz about surviving cancellation.) In September, the American Council of Learned Societies withdrew Katz’s appointment as a delegate to the Union Académique Internationale. Katz sued the ACLS for “viewpoint discrimination.” A judge dismissed the lawsuit. In January of the following year, John McWhorter, writing in the Atlantic, praised Katz: “He is not an exemplar of white fragility, but a model for the future.”

    On February 2, 2021, seven months after Katz’s essay in Quillette was published, things got darker. The Daily Princetonian published the findings of its own investigation into three different relationships that Katz had had with female students. Katz’s lawyer slammed the article as a “planned smear… clearly yet another attempt to punish him for dissenting from the prevailing campus orthodoxy.” After these findings were published, the alumna who had been the subject of the investigation in 2018 sent a detailed written complaint about Katz to the university. In response to the receipt of that letter, the university commenced a new investigation, this time concerning Katz’s compliance with the previous one. This final investigation took place over the course of the subsequent thirteen months. On May 23 of the following year, the board of trustees resolved to fire Katz, and published a statement which reads in part:

    When [the alumna] came forward in 2021, she provided new information unknown to the University in 2018, and the University initiated a new investigation in accordance with its policies. The new investigation did not revisit the policy violations for which Dr. Katz was suspended without pay in 2018; it only considered new issues that came to light because of new information provided by the former student.

    The 2021 investigation established multiple instances in which Dr. Katz misrepresented facts or failed to be straightforward during the 2018 proceeding, including a successful effort to discourage the alumna from participating and cooperating after she expressed the intent to do so. It also found that Dr. Katz exposed the alumna to harm while she was an undergraduate by discouraging her from seeking mental health care although he knew her to be in distress, all in an effort to conceal a relationship he knew was prohibited by University rules. These actions were not only egregious violations of University policy, but also entirely inconsistent with his obligations as a member of the Faculty.

    Faculty discipline at Princeton is handled in accordance with the “Rules and Procedures of the Faculty,” which guarantee numerous procedural safeguards for faculty members facing proposed disciplinary action. In cases involving a proposed suspension or dismissal, the affected faculty member has the right to seek review by an independent committee composed of members of the Faculty elected by their peers.

    The recommendation to dismiss Dr. Katz was reviewed by the faculty committee, known as the Committee on Conference and Faculty Appeal. After reviewing the pertinent investigation reports and Dr. Katz’s submissions, and interviewing Dr. Katz and others, that committee found that the reasons presented in the dismissal recommendation of the Dean of the Faculty were supported by the record. That recommendation was subsequently submitted to the President, who evaluated it and submitted it to the Board for action.

    The Board voted to dismiss Dr. Katz on the recommendation of the University President and Dean of Faculty, after a review of the extensive record by an ad hoc committee of the Board appointed to consider the matter.

    That same day The New York Times published an article which insinuated that the investigation was simply a ruse, an excuse to fire Katz “for criticizing the anti-racist proposals made by Princeton faculty, students, and staff” — criticisms that he had made seven months before the investigation was opened, and over a year and a half before the committee resolved to fire him. It is unclear to me why the Times, of all places, and at that late date in the thought-policing to which it has itself significantly contributed, decided to use Katz’s firing as an opportunity to condemn the overreach of cancel culture.

Whatever the reason, since the Times overtly accepted Katz’s reading of the situation, it seems that many others who would otherwise resist hasty conclusions permitted themselves to believe there must have been some higher proof of gross misconduct on Princeton’s part. This is the only explanation I have come up with for why so many other apparently reasonable people have repeated this line without providing persuasive evidence. On July 5, The Chronicle of Higher Education published “Princeton Betrays Its Principles: The corrupt firing of Joshua Katz threatens the death of tenure,” which is a withering condemnation of Princetonian spinelessness. If it were true that Princeton used the investigation as an excuse to fire a man whose views were a cosmetic liability for the university, such a criticism would be justified. But there is no way to know for certain that this is what was done. One can only extrapolate broadly from available information.

For instance, it is undoubtedly true that many members of a progressive mob unjustly demanded that Katz be fired simply for writing something with which they virulently disagreed. (Anyone who actually wanted Katz fired for the Quillette essay is guilty of precisely the progressive extremism of which Katz and his gang accuse Princeton.) It is also true that a university must thoroughly investigate a complaint submitted by a student suggesting that a professor is guilty of egregious misconduct. Is it possible that the dean of the university, a peer committee, the university president, and the board of trustees all perpetrated a hoax investigation for thirteen months as part of a concerted effort to fire Katz? Perhaps. Is it manifestly evident? Certainly not.

    Cancel culture has destroyed innocent lives, and has exacted numerous excessive punishments, and has achieved a tyrannical power within certain precincts of elite America. But the sheer fact of cancellation proves nothing about what is true and what is false, who is innocent and who is guilty. Cancellation is not itself one of the facts that need to be established regarding any specific case. Nor has martyrdom ever proven the truth of a faith. And one can acknowledge this even while also contending that those who ordered Katz’s head on a platter simply because of a phrase in an article must be opposed even by non-bigots and non-cranks.

Similarly, is it possible that Joshua Katz wrote that infamous phrase in his Quillette essay because he knew that the progressives were out for his blood, and so he was throwing his lot in with the other team? Did he write it on purpose, with cunning, so that he could later argue, after his inevitable cancellation on other grounds, that his mistreatment was a matter of free speech and not a matter of sexual misconduct? Perhaps. It is as convincing a theory as any other, and none are very convincing. I have heard both versions of the Katz affair defended with perfect confidence by people I respect. All their confidence is baseless. The incontrovertible fact is that we do not know the facts.

Our very conception of participation in the public discourse is predicated upon a gross distortion of the proper relationship between truth and opinion. We abhor silence, as if having nothing to say were somehow worse than saying much without substance. It is not shameful to recognize one’s own incompetence for judgment, if judgment requires knowledge that one does not possess. Regarding subjects which we cannot adequately know, especially those which concern the private lives of other people, it is honorable not to have an opinion.

A dear friend of mine, I will call her Jane, was raped by a boy with whom she had attended middle school and high school, and with whom she had been close for most of her life. He was monstrously drunk at the time, so much so that he does not remember the act (at least he has never indicated to her that he does). She did not tell the police or any of their mutual friends, neither directly afterwards, when basic functioning was a task that she could hardly manage, nor many months later, when the fog had begun to lift and she dreaded re-engulfment. For quite a long while after the trauma, she interpreted any remotely analogous incident primarily as a tale of rape. (Whenever a new cycle of ignorant gossip about a sexual-assault-related claim captivates the nation, and the same priorists who last time asked “why didn’t the victim come forward earlier?” ask it again, Jane’s nails puncture her palms.)

This event determined her disposition towards every allegation of sexual misconduct leveled against a man — even cases like Katz’s, though Katz was certainly not accused of rape, or of a less violent kind of sexual assault. Jane maintained that prior disposition until her brother was accused of sexual assault by a woman with whom he had gone to college. For years after the accusation was made, her brother sank into a depression that sapped him of the strength or the inclination to leave his bed, or to read, or to speak to most anyone other than Jane. During that bleak era, she would schedule her days around their phone calls, convinced that her voice was all that kept him from suicide. She believes, on the basis of her own cross-examination of her brother, in his innocence.

    Discussing the Joshua Katz affair with Jane is psychologically and sociologically fascinating. Depending on the day, or the aspect under discussion, or the attitude of her interlocutors, she assumes either the priorism of a victim of rape or the priorism of her brother’s sister. Whichever of these personas participates in a given conversation, it is evident that Jane is agitated by contradictory loyalties, which is why it is so difficult for her to conjure genuine concern about the facts of the case she is discussing. She doesn’t know whether Katz is guilty or innocent; she knows that her brother is innocent, and she does not want to be the kind of person who would have assumed her brother’s guilt, or would not have advocated for his rights to fair treatment and due process. She has no idea why Katz’s female student did not participate in the primary investigation, but she knows why she has never gone to the police, and she does not want to be the kind of person who would doubt the veracity of the testimony of a woman such as herself.

It must be acknowledged that Jane’s antithetical loyalties have a certain integrity, even though they inhibit her from developing a dispassionate view of this case. While there are many priorists who have exploited the Katz affair for their side, not all priorists are operating in bad faith, in the sense that they are not all primarily motivated by a desire for community membership. Priorism is not always crass, though it always facilitates an intellectual incompetence. The views that Jane develops purely on the basis of her prior loyalties, which are outgrowths of her own dark experience, are not cheaply held. They are understandable, even admirable impulses — but they are not intellectually supportable ones. Loyalty is a precious human expression, and it can be enriching and beautifying. But it must be closely watched, tempered, and monitored to keep it from becoming blind and devolving into tribalism.

Sooner or later, in the analysis of our scandals, owing to the complexity of the questions and the mixed availability of evidence and the laziness of our public discussion, one must consider seriously questions of epistemology — of what we can know and how we can know it. The “fake news” and “alternative facts” of the Trumpists brought this philosophical conundrum into the open, though it has always been a fundamental concern for conscientious citizens. And now it seems to be everywhere, as each gang cherry-picks its experts and sends them into battle on every subject from medicine to foreign policy. The Katz affair is another example of the ruined reputation of authority.

     “It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.” So wrote the English mathematician and philosopher W.K. Clifford in his essay “The Ethics of Belief,” which appeared in 1877. In his essay Clifford advances a defense of evidentialism, an epistemic doctrine which stipulates that a belief is only ever rightly and morally held if it is supported by conclusive evidence. Clifford insists that even if a belief is true but is held for any reason other than that it was empirically or logically proven, it is wrong to hold it. (Milton’s wonderful phrase for this intellectual predicament was “a heretic in the truth.”) And the word “belief” does not refer simply to the question of religious faith, which was the immediate though hidden subject of his essay, but extends also to every variety of knowledge: “No simplicity of mind, no obscurity of station, can escape the universal duty of questioning all that we believe.” It is an exorbitant imperative, practically impossible to fulfill, and paralyzing even to attempt.

    Nineteen years after this essay appeared, William James published his famous rebuttal to Clifford, in “The Will to Believe.” Therein James defined a hypothesis as “anything that may be proposed to our belief; and just as the electricians speak of live and dead wires, let us speak of any hypothesis as either live or dead. A live hypothesis is one which appeals as a real possibility to him to whom it is proposed.” He goes on to defend the human “right to believe at our own risk any hypothesis that is live enough to tempt our will.” This laxity about truth is necessary, James argues, in order for certain strains of higher belief to remain possible. He is salvaging an epistemology for religion. Clifford’s rigor, he warns, prescribes an agnosticism which “would absolutely prevent [one] from acknowledging certain kinds of truth” and is therefore “irrational.” (It was daring of James to invoke rationality in the service of his idea of faith.)

    But surely James’ prescription is as untenable as Clifford’s — it is as intellectually lazy as Clifford’s is intellectually severe. Clifford slams the door shut, James removes the hinges. Can this great controversy about religion be applied to politics? It is surely impossible for a citizen to gain proficiency in all the subjects that would allow her to cast a thoroughly informed vote, for example. Insisting on Cliffordian certainty in public affairs is futile. As Clifford himself noted, we all rely upon authorities, and it is our responsibility to determine what counts as authority in various fields. But if evidentiary certainty is not possible, does this ignorance emancipate us from Cliffordian scruples and gain us a Jamesian freedom to believe whatever kindles to us?

    Consider a concrete instance: the Katz affair. Given all that we do not know, does James’ latitude apply in such a case? Do we have a “right to believe at our own risk any hypothesis” regarding Katz and his cancellation “that is live enough to tempt our will”? Is it cowardly to hold one’s tongue and not contribute to the fight over first principles simply because one is not certain? If one is convinced that independent thought is under siege, isn’t it fair to extrapolate from what one already knows about American society and infer that Katz was unfairly treated? And even if Katz was not unfairly treated — does it really matter? Shouldn’t anyone who opposes the attack on independent thought defend Katz, as he has become a synecdoche for the larger problem?

Similarly, suppose one believes that there is a certain kind of professor who serially abuses his position and takes advantage of his students. And suppose that this same person is also convinced that there are charlatans who have made careers out of condemning cancel culture, and that such people, the most powerful among them, have connived to dress Katz up as a wronged saint. Doesn’t such a person have a duty to speak up regardless of whether or not she has evidence of Katz’s specific guilt?

    The answer, of course, is no. It is undignified to play the fool in the name of ideological loyalty. No citizen or ally is required to simply repeat what others on her team expect her to say, no matter the valorousness of that team’s general code. None of us have the right, let alone the duty, to feign certainty. It is entirely honorable to have no opinion about the Katz affair, or about any controversy that will not admit of clarity. We have instead the onerous obligation to defend our values while putting pressure on platitudes and slogans. Put down your scripts. They refer to nothing beyond themselves.

     

    Problems and Struggles

    “So Socrates!” he teased, “you are still saying the same things I heard you say long ago.” Socrates replied: “It is more terrifying than that: not only am I always saying the same things, but also about the same things.”

                      Xenophon, Memorabilia, IV.4.6

                              (translated by Jonathan Lear)

    In the plenitude of discouragements that is contemporary history, the one that perhaps stings me the most is my increasing despair about the possibility of persuasion. Who changes their mind anymore? What is the difference between an open society that is intellectually petrified and a closed society? In a democratic society, which governs itself by exchanges and tabulations of opinion, surely the first requirement of meaningful citizenship is receptivity. Thoughtlessness is a betrayal of democracy. Mill said that democracy is “government by discussion.” The purpose of discussion is to test the merit of opinions with the presumption that one may convince others, or become convinced by others, of new views. One of the quintessential experiences of democratic life is to admit that one is wrong. In debates about large principles and large programs, everybody cannot be right, and sometimes not even a little right; and in a liberal order the adjudication of contradictions is accomplished not by guns but by arguments. Or so we like to tell ourselves. But the degrading spectacle of what passes for public debate in America has shaken my hoary faith in the dependability of argument. Is social media a discussion? Is a shriek an argument? Where is the reasoned deliberation that Milton and Madison and Mill regarded as the foundation of a decent polity? They intuited that the road from unreason to indecency is not long, and we are diabolically confirming their intuition. We have made “public reason” into an oxymoron. We are drowning in discursive garbage. Even the people who believe in persuasion seem to persuade only each other. They are just another American community of the elect — the mild and articulate sect of the arguers.

    Many observers have noticed this intellectual crack-up. They suggest a host of solutions. We must keep our minds open. We must listen more carefully. We must respect each other. We must be reasonable, and even rational. We must identify our biases and correct for them. We must bring evidence. We must lower the temperature. We must enhance our capacity for empathy. We must connect with each other, and with the Other. We must practice epistemic humility. These homilies are everywhere, and all the preaching is true. We should indeed do all these noble and necessary things. These are the traits of a democratic individual. But is it not time to notice the futility of this wisdom in present-day America? Nobody seems to be hearing that we should listen. These exhortations leave almost no trace on our public life, which gets insistently dumber and nastier. They have become a sad and lovely genre of their own, a journalistic counterpoint of urgent but soothing platitudes. They may be accomplishing nothing more than providing solace and companionship for those who utter them. I have uttered many of them myself, and I stand by them. They are the only answers. But I am beginning to feel a little foolish, and disconnected, and marginal; I do not feel sufficiently helpful.

             To some extent, of course, it was ever thus. There never was a time when Madisonian graciousness ruled our politics. Philadelphia in 1787 and Illinois in 1858 were epiphanies, not norms. Indeed, the promiscuity of the nineteenth-century American press can make social media seem redundant, in its slanders and its outrages. The manipulability of public opinion has always been a primary assumption of American politics and its cunning practitioners. Was there ever a medium of communication that was inhospitable to zeal, or that turned its back on lies? Have fanatics and extremists ever been at a loss for instruments of influence? There is some consolation to be had, I suppose, from this long history of what ails us. We are not the first to have fallen short of our discursive ideals.

    Moreover, it is good that people stick up for what they believe. Intellectual stubbornness is in its way a mark of intellectual maturity. The malleable are too often mistaken for the reasonable. It is good that people hold strong convictions, and that they confer upon beliefs a prominent role in their identities. Yet the strength of a conviction has no bearing on its merit. Beliefs are not like foods that taste better hot. Too many people hold their beliefs for bad reasons, or for no reasons at all — merely because other people like themselves hold them, in the “cascades” and the “contagions” that have exercised social scientists in their study of our era of conformity. In the articulation of our beliefs, the most common substitute for reasons is passion. The idolatry of feelings that has characterized our culture for many decades has now been extended to our politics. But what does passion have to do with persuasion? Persuasion by passion is a nice definition of demagoguery.

             This time we have fallen very short. The collapse is especially painful for someone such as myself, who has spent his years in the argument business. It was, and still is, an idealistic calling. It came with many scruples about the integrity of argument. We worked hard when we argued, and we tried never to make it personal. (Almost nobody has a perfect record about turning an argument into a quarrel. I certainly do not. Sometimes hostility follows naturally from having understood the dangerous nonsense that your interlocutor is peddling.) There was an atmosphere of exhilaration that surrounded the seriousness. Of course there were also degraded forms of the practice: the gladiatorial kind, for which debate is a kind of sport, an exhibition of dialectical virtuosity, a contest of cleverness; and the academic kind, in which debate consists in making “moves” and “turns” and combinations thereof, as in a professional game; and the festival-of-ideas kind, in which thinking is presented breezily with a “hard stop,” for the entertainment of the paying customers and the rich. But there remained, there still remain, intellectuals with a sense of honor, for whom truth and method matter most, and who regard their activity, rightly, as a significant contribution to their society. One would think that such people are never more valuable than in a crisis — but they are learning, I fear, that it is precisely in a crisis that they may be least valuable, and most easily overridden. In 2016, for example, almost every thoughtful conservative columnist in the country valiantly opposed Trump, and it was as if they never existed. Right now the argument for persuasion, an American argument if ever there was one, seems to be experiencing the same indifference.

    Yet there is another way to consider this problem, and others, so as to elude despair and to find strength. It is to regard it not as a problem, but as a struggle.     

    The success with which we meet the difficulties that we face depends first on an accurate description of them. Nothing destroys hope so quickly as asking a question in a way that makes it impossible to answer. Such a question leaves us with the crippling impression that the world is finally intractable, that there is nothing that can be done. It is one of pessimism’s finest tricks. There are predicaments, of course, in which nothing can be done — but they are rare, even in adversity, and they, too, must be accurately characterized, if we are to be sure that we are being thwarted by reality and not by ourselves.

             There are problems and there are struggles. Problems have solutions; struggles have outcomes. Problems are technical; struggles are historical. Problems recur; struggles persist. Problems teach impatience; struggles teach patience. Problems are fixed; struggles are fought. Problems require skill; struggles require character. Problems demand knowledge; struggles demand wisdom. Problems may end; struggles may not end. A problem that does not end is a defeat or a failure; a struggle that does not end is a responsibility and a legacy.

             We are not given to choose between a world of problems and a world of struggles, and so we must be dexterous. Different temperaments incline to, or feel especially beset by, the one or the other; and this may be the case with communities and societies, too. The American affinity for problems over struggles is well known: the great American epic of practicality and its rewards. We care so much about practicality that eventually it was raised into a philosophy, according to which the proven satisfactions of a hammer and a nail were powerful enough to rid us of nothing less than metaphysics. William James, who perversely regarded pragmatism as a spiritual dispensation, once defined reality as “a perfect jungle of concrete expediencies.” Whether or not reality is like that, American reality is. The wildness of American religiosity may be understood as the response to such an environment of rampant utilities. (Silicon Valley is a hotbed of New Age rubbish.) Yet the American obsession with how things work has produced many admirable results, not least the technocracy that now inspires the wrath of the populists. Over many decades it has done more for the public good than any mob ever did, even if sometimes it has attempted to plant its standpoint where it does not belong and sought in its fanatical meliorism to reduce struggles to the scale of problems. But eventually struggles, too, have a place for policy, which is best not made by visionaries.

             Thinkers from Augustine to Heidegger have belittled the uses of things. The “ready-to-hand,” owing to its “serviceability,” is ontologically shallow, according to the latter, and much too distant from Being. According to the former, the uti, the use of something for the sake of something else, is similarly secondary and extrinsic to the highest meanings, and he ponders whether “men should enjoy themselves, use, or do both.” The American experience of enjoyment in use, of pleasure in function, is beyond his imagination. Such a hierarchy of value would be wrecked by a visit to an American hardware store. The anti-pragmatists are disquieted by a love of the extrinsic just as the pragmatists are disquieted by a love of the intrinsic. The answer to Augustine’s question, obviously, is that we must do both.

    Moreover, there is glory, and not only necessity, in our practical achievements (just as, say, there is beauty, and not only necessity, in architecture). Homo faber, if he is to make things and build things, must include among his talents a sense of form and a concept of design, and an ability to work out the purposes of an object as well as its material properties. The gulf between instrumentality and art is not as wide as the aesthetes and the Platonists would have us believe. I learned this lesson in Kensington, Maryland, where there used to be a shop that sold antique tools — carpentry tools, construction tools, kitchen tools, fireplace tools — a paradise of practicality; and when I first walked into the shop I was struck not by the spectacle of utility but by the spectacle of imagination. The shapes and the metals were gorgeous. I still own the heavy late-nineteenth-century iron cooking pot, with its delicate handles and its handsomely pockmarked lid, that I acquired there. It is a welcome drag on my aspirations to loftiness.

             Here is a passage from one of the many American books on (this is its subtitle) “how to perfect the fine art of problem-solving”:

    Problem-solving is a critical survival skill because things go wrong for us all the time. Working through problems is crucial for productivity, profit, and peace. Our problem-solving skills, however, have been short-circuited by our complicated, technology-reliant world. Why learn how to fix something when Google can do it? Unfortunately, calamity doesn’t always fit in a search bar. And increasingly in our modern, perilous world, the issues that emerge are subtle, laced in subtext, or teeter on the tip of a slippery slope — all attributes that require a human touch to solve. As said humans, we must not only be able to address the problems that arise across all professions and walks of life, we must also be able to solve them. Before they drown, damn, or destroy us. Thankfully, problem-solving is a skill that can be learned.  

    I can practically hear The Star-Spangled Banner in the background. But every word is unimpeachable, except perhaps the reference to peace, which belongs more realistically to the realm of struggle. The undaunted confidence in human agency, the respect for the concrete, the commendation of the artisanal and the collaborative, the faith in education and the transmission of skills: these are elements of the mentality that built cities and created technological revolutions, and their dazzling social and economic benefits. The inventors, the tinkerers, the adjusters, the repairers, the tweakers: they are pillars of everyday existence, who defy our sense of helplessness and relieve us of many of the oppressions of our material setting. They make life more dignified, because there is dignity in safety and comfort and the conquest of anxiety. 

    The same mentality, alas, these same elements, are also the source of our Icarian perils. Sometimes our ability to make things exceeds our ability to comprehend what we are making, and we deploy our inventions before we adequately understand their purposes and their effects. “Problem-solving” is ethically contentless; it serves many causes and many codes. Evil, like goodness, seeks technical support, which is why “pragmatic,” in ordinary usage, also has a pejorative connotation. (As does “fixer”.) The question of how things work is never the most fundamental question one can ask about human affairs. But fundamental questions are not the only questions that we are obliged to ask. We are, even the largest-souled among us, commonplace creatures who live fragilely in a world of cracks and fixes. We are fortified more by reforms than by revolutions. So blessed be the fixers, especially those who recognize the limits of the fix as a model for all human solutions.

    Not all the difficulties that beset us can be described as problems that can be fixed. Some of them are deeper and thicker and more lasting, and therefore more immune to our practical brilliance and our utilitarian talents. They are conditions, inherited states-of-affairs, systems and structures, traditions and loyalties, inner dispositions in the individual and the community, cultural premises hallowed by the generations, abstract conceptions and reified ideals. They imbue everything we do, but we cannot take a hammer to them. (Except wantonly, of course: violence in a problem-fixing society is owed in part to the special frustration of problems that cannot be fixed. Frustration, and the inability to live with it, is one of the characteristic hazards of the can-do worldview.) Indeed, the ubiquity of their effects, their saturation of all the private and public realms, contributes to their durability. And yet they must be fought.

    There is the difference: fixing is not exactly a fight, even when it is hard. No fight is necessary when satisfaction can be technically and efficiently achieved, and there are no first principles at stake. A solution to a problem may be wrong without being evil. Trial-and-error is a benign war on error; a correction of mistakes, not of sins. The question of how best to fight inflation, or how best to curtail our dependence on fossil fuels, or how best to halt nuclear proliferation — such questions may provoke virulent debates, but the virulence is generally not philosophical. These are “how” questions, and not all “how” questions must become “why” questions. A debate about means when there is a consensus about ends is much more easily resolved than a debate about ends. Conversely, one time-honored way of wrecking a debate about means is to turn it into a debate about ends — to make every difficulty into a matter of first principles, to transform problems into struggles. The transformation of a problem into a struggle is a fine strategy for the enemies of a solution.

             Perhaps the fundamental difference between a problem and a struggle is time. The temporal horizons of struggle are long — sometimes very long, even longer than a lifetime. Sometimes we bequeath a struggle to our children. The struggler, like the lover, is prepared to wait. A problem, by contrast, does not tolerate such duration. It needs to be solved soon, if we are to function; whereas struggles are not the condition of our functioning but of our just and proper functioning. One of the meanest facts of human life is that unjust societies can function. (Making a society function is one of the oldest excuses for injustice.) But there is some comfort, too, in that fact, since a just society has never existed. Our only alternatives may be imperfection or extinction.

    Fiat justitia pereat mundus: the old Latin maxim captures the tense relation between perfection and reality. Let justice be done even if the world perish! That was the maxim’s customary reading, not least by Kant, who described it as “a sound principle of right…which should be seen as an obligation of those in power not to deny or detract from the rights of anyone out of disfavor or sympathy to others.” But what sort of justice is the destruction of the world? Where is the virtue in nothingness? (Kant dodged this ethically complicating objection with a strange paraphrase of the maxim’s meaning: “let justice reign even if all the rogues in the world must perish.”) We may read the maxim differently, then, and less as a mandate for zeal: we may read it as a warning that the insistence upon perfect justice may destroy everything, as a caution about absolutism in a just struggle. Be careful not to destroy the world when you seek justice! And I have seen a peculiarly American inflection of the adage. At the Supreme Court there hangs a portrait of John Marshall painted by Rembrandt Peale in 1834. The jurist is set heroically in a stonework oval with Roman ornamentation, and beneath him is a stone on which are carved the large words FIAT JUSTITIA. The rest of the maxim, the worry about the consequences of righteousness, has disappeared. Only a society consecrated to newness, a society that regarded itself as a beginning in what is right, could so blithely have banished the shadows from the ancient injunction.

    A struggle does not allow for such innocence, if only because of its wealth of sobering experience. If you have struggled against an injustice, then you have known it, and witnessed it, and existed with it. You have learned too much about the world to believe that pragmatism is all the equipment that you will need to meet it. There are other inner resources that must be readied: steadfastness, patience, tenacity, resilience, courage. The less your life has need of those qualities, the happier (and the luckier) it is. A life of problems is not like a life of struggles. The trials of fixing are real, but they differ from the trials of struggling — the fixer’s trials are more like exasperations. But an exasperation with history, particularly with a history of suffering, is no mere exasperation: it is a sense of tragedy. It broaches the hardest question of all, which is the question of the warrant for hope.

    A life in struggle is a life in hope, and hope gets stronger as its basis in reality gets weaker, until finally it floats free of experience and proclaims a pure assertion of the will to exist. The more empirical the hope, the less it is needed. But unempirical hope, or hope after catastrophe, is, for that reason, invincible; and it would be an offense against all the communities of struggle, all the shattered but intact peoples, to dismiss such hope as illusion, when it is the purest evidence of unbroken vitality. In a beautiful study of the spiritual perdurability of the Crow Nation, Jonathan Lear has called this “radical hope,” by which he means an inner independence from history that permits one to entertain “the possibility of new possibilities.” For this reason, anyone involved in a struggle will not count a bad day as the last word, because he lives in expectation of it, and he is accustomed to a different pace for progress, to the unsteadiness of forward motion, to delays and reversals and losses. The larger the goal, the rougher the road to it.

    If we prefer to see ourselves as a nation of problem-solvers, it may be in part because we prefer to look away from the strugglers in our midst. Having completed their tasks, problem-solvers proceed to the most typical American activity of all: they move on. But the strugglers cannot move on. They are prisoners of circumstances, and of the power that with its prejudice arranged their circumstances. Their inner freedom is a measure of outer necessity. Our centuries of innovations and breakthroughs were also centuries of oppression and discrimination. Our country has harbored many communities of struggle: the Native Americans, for example. For a hundred years or so the labor movement represented a community of struggle, and it may do so again. But no Americans have a more natural understanding of struggle than black Americans. Their emancipation, which we treat as a discrete historical event circa 1863, was (in the words of one historian) “the long emancipation.”

    The story of African American culture is a story of melancholy and its mastery. There is joy in the blues, which is not the case with many other traditions of sad song. The slave songs and the spirituals are intimate with the “trouble of the world,” but I have never heard one of them recommend surrender. “O me no weary yet, o me no weary yet, I have a witness in my heart, o me no weary yet.” The slaves sang, “Lord, make me more patient”; they sang, “Hold out to the end.” And many decades later the poets expressed the same extreme commitment to endurance. Here is Sterling A. Brown, addressing a Southern “nameless couple” who have suffered much hardship:

    Even you said

    That which we need

    Now in our time of fear, —

    Routed your own deep misery and dread,

    Muttering, beneath an unfriendly sky,

    “Guess we’ll give it one mo’ try,

    Guess we’ll give it one mo’ try.”

    And here is Countee Cullen’s “From the Dark Tower,” whose title refers to a place on 136th Street in Harlem where poets used to meet, as if the poem, in its first person plural, might speak for them all.

     

    We shall not always plant while others reap

    The golden increment of bursting fruit,

    Not always countenance, abject and mute,

    That lesser men should hold their brothers cheap;

    Not everlastingly while others sleep

    Shall we beguile their limbs with mellow flute,

    Not always bend to some more subtle brute;

    We were not made eternally to weep.

     

    The night whose sable breast relieves the stark,

    White stars is no less lovely being dark,

    And there are buds that cannot bloom at all

    In light, but crumple, piteous, and fall;

    So in the dark we hide the heart that bleeds,

    And wait, and tend our agonizing seeds.

    There is the temperament of struggle: waiting and tending to one’s agonizing seeds, which one day, owing precisely to the pain of their cultivation, will grow.

    Are Americans, particularly liberal Americans, still capable of such a temperament? Have we, in the inward velocity of our digital and consumerist present, forfeited the mental readiness for the extended future, or squandered it on futurism? I arrived at this broad and imprecise distinction between problems and struggles in order to understand the despair that I see around me. I attribute that despair to a confusion between these orders of difficulty. It makes sense to despair of solving a problem — some things, after all, cannot be fixed; but it makes no sense to despair in a struggle, because disappointment is a regular feature of struggle, and perseverance comes before success. Injustice is much bigger than a problem. Anybody who combats injustice without the wisdom of struggle will fail in the effort to prevent it from becoming a fate. There are concrete instances of injustice, of course, which can be addressed with legal or political remedies. But there are no policies for the human heart. An earned income tax credit cannot heal psychic and cultural wounds. Discrimination can be ended by practical means, but not racism. Discrimination is a problem, but racism is a struggle. Racism, and all the other panics about difference, will never disappear. They are as old as civilization, and the greatest affront to it. All that can be done is to raise the legal and political and social costs of a particular expression of a prejudice, and then, having inflicted defeat upon it, await its resurgence, which must never surprise us even when it shocks us. The struggler is not a pessimist, but he is a disabused man. The appearance of anti-Semitism in America does not refute the revolutionary promise of America for Jews, because which student of Jewish history, which student of Christian history, which student of evil in human history, ever believed that once and for all anti-Semitism would end? Anti-Semitism was never illegitimate in the European political tradition, and in the Russian one, but it is illegitimate in America according to the terms of our founding. (Whereas white supremacy was inscribed in some of them.)

    When friends tell me, as a consequence of Trump and the ascendancy of the radical American right, that America is over, or when they tell me, as a consequence of Netanyahu and the ascendancy of the Israeli right, that Israel is over, I castigate them for being disinclined to struggle. (I have three motherlands: America, Israel, and my library.) When they tell me, as they spin the globe, that democracy is over, I reply that the rise of authoritarianism is not an event, but an era; and that it will take a long time, a generation or more, to push back the authoritarians and restore the prestige of the open society; and that we must not measure the crisis in election cycles, because it is more profound than politics; and that the inability of democracy to defend itself has always been its greatest historical failing; and that its rejection does not refute it — in sum, that we are in a historical struggle. The refusal to recognize it as such makes it more likely to fail. It is, moreover, a privilege to serve. The struggle for democracy, like the struggle for justice, makes life less trivial. Camus believed that Sisyphus was happy.

             But do we, as they say in foreign policy, any longer have the staying power? The analogy with foreign policy is actually quite useful. One already hears and reads about “Ukraine fatigue” in America. We are fatigued by their fight for survival? The vanity! If the Ukrainian war is just, then it is just even when we get tired of it. The Biden administration has responded more or less splendidly to Putin’s aggression, but more will be needed, because this is not a problem, it is a struggle. (The Ukrainians have established “resiliency centers” against the destruction of the country’s infrastructure and the winter cold.) It was right about now that I expected the administration’s determination to collide with the country’s lack of determination. I mean, it’s been a whole year. Pretty soon we will have another “forever war” on our hands.

    There is no more damning evidence that the readiness for struggle is waning in America than our stupid retreat from Afghanistan. Twenty years is not even close to forever, except for people who do not understand historical time and have been damaged by the warp speed of American life. There were sound moral and strategic reasons for our presence in Afghanistan; and this is unwittingly conceded every time the same opinion pages that stridently called for an end to the “forever war” publish poignant pieces about the plight of Afghan women and Afghan schoolchildren in the kingdom of the Taliban. What did they think was going to happen? The whole world was taught that it could wait America out, that we have only a limited competence for commitment. Unlike us, our enemies know how to practice the art of waiting. They are not intimidated, or bored, by the longue durée. In their global rivalry with us, they are preparing for a struggle.

    The psychology of struggle is a brake also against another danger that faces us. Owing to the magnitude and the multiplicity of the crises that confront us, the apocalyptic spirit has been given new life. Hysteria is increasingly accepted as intelligent, as a condign response to a proper analysis of things. In our culture we are riveted by endings, especially by spectacular ones. There is a new fashion in the-end-of-history, which is just as blind as the old one. Unlike the old one, this one is animated not by a sensation of triumph but by a sensation of weariness, by a loss of heart. History may now be numbered among the causes of depression. The prophecies of decline and destruction are overwhelming. In politics, the belief that time is running out, that it is too late to change course, that all that awaits us is cataclysm, has two antithetical consequences: apathy and apocalypse.

    An apocalyptic is someone who decides to treat a struggle as a problem, and to get it over with. He wants a quick eschatological fix; his understanding is distorted by his desperation. Despondency has sapped him of his will and his energy, or rather, it has left of his will and his energy only enough for the less exacting way of radicalism, which (as we know from the radical past) will either blow things up or exhaust itself. Struggle, in other words, even struggle unto the generations, is the quintessential anti-apocalyptic path. It will not be waited out, or permanently hobbled by gloom. In its decision to outwit despair, in its solemn promise that its resolution will be invulnerable to fortune, the spirit of struggle arms us not only against the injustice that we fight but also against our own frailties. We may reflect, and be calm, and hold together, in the storm, because we are wiser than the storm. Like Dürer’s knight we can advance, but unlike Dürer’s knight we are not alone.

    The Court Gone Wrong

    What is happening on the Supreme Court of the United States? 

    The Court has overruled Roe v. Wade. It has rejected the whole idea of a right to privacy. It is sharply restricting the ability of federal agencies to protect safety, health, and the environment. It is limiting voting rights. It is expanding the rights of gun owners, commercial advertisers, and those who wish to spend a lot of money on political campaigns. It is moving very quickly, and almost always in directions favored by the political right.

             None of this comes out of the blue. It is the culmination of four decades of intense work, meant to move constitutional law in exactly these directions — work by activists and scholars, politicians and lawyers-for-hire, corporate lobbyists and the National Rifle Association, religious organizations and the Federalist Society. It was a long process, but it seems fair to announce that they have finally won.

             I received a firsthand sense of what was afoot in 2002, when I found myself in a large audience at the University of Chicago Law School, waiting to hear a speech by Douglas H. Ginsburg, who was then Chief Judge of the influential Court of Appeals in Washington, DC. Tall and thin, with a bemused and scholarly manner, Judge Ginsburg is an able and fair-minded judge. He is a generous and kind person to boot. He is also a graduate of the University of Chicago Law School, which was my home institution at the time. I like and admire him. But on that day I was flabbergasted by what I heard; actually I was appalled. Judge Ginsburg called for something like a constitutional revolution. 

     

    Judge Ginsburg contended that the Supreme Court abandoned the United States Constitution in the 1930s, when it capitulated to Franklin Delano Roosevelt and his New Deal. He sought to return to the Constitution as it was understood before the capitulation.

    Ginsburg began by emphasizing that “ours is a written Constitution.” Making a bow in the direction of populism, he contended that this observation is controversial in only one place: “the most elite law schools.” The fact that the Constitution is written has major implications. If judges are “to be faithful to the written Constitution,” they must try “to illuminate the meaning of the text as the Framers understood it.” 

    In Ginsburg’s account, judges were faithful to the Constitution for most of the nation’s history — from the founding period, in fact, through the first third of the twentieth century. But sometime in the 1930s, “the wheels began to come off.” In that period the nation faced the Great Depression, and President Franklin Delano Roosevelt tried to do something about it, above all with his New Deal, which greatly expanded the power of federal agencies, through, for example, the creation of the National Labor Relations Board and the Securities and Exchange Commission. Responding to “the determination of the Roosevelt Administration,” Ginsburg declared, the Supreme Court abandoned its commitment to the Constitution as written.

    How did this happen? Judge Ginsburg’s first example was Congress’ power, under the Constitution, to “regulate commerce . . . among the several states.” What does this mean? Judge Ginsburg referred, with enthusiastic approval, to the Supreme Court’s view that Congress lacked the constitutional power to ban child labor. But his strongest complaint involved the Supreme Court’s decision, in 1937, to uphold the National Labor Relations Act, which protects the right of workers to organize and to join labor unions. In upholding the Act, the Supreme Court said that when strikes occur, interstate commerce is affected. A strike in Pennsylvania often has a big impact elsewhere. 

    Judge Ginsburg objected that this is “loose reasoning” and “a stark break from the Court’s precedent.” But his complaint went much deeper. The Court’s acceptance of the National Labor Relations Act was not merely “extreme.” It was also “illustrative.” He objected that the Supreme Court has upheld the Clean Air Act, which, in his view, violates the separation of powers by granting excessive discretion, and hence legislative power, to the Environmental Protection Agency. Under the Constitution, legislative power rests in Congress; Judge Ginsburg said that because the Clean Air Act allows the Environmental Protection Agency to make the law, the “structural constraints in the written Constitution have been disregarded.” 

    But even this is just the tip of the iceberg. Since the 1930s, the Court has “blinked away” crucial provisions of the Bill of Rights. Of these, Judge Ginsburg singled out the Constitution’s Takings Clause, which says that government may take private property only for public use and upon the payment of “just compensation.” Judge Ginsburg complained that the Takings Clause has been read to provide “no protection against a regulation that deprives” people of most of the economic value of their property. In other words, the Court allows government to impose regulations, especially in the environmental area, that do not quite “take” private property but that much diminish its value. Judge Ginsburg objected that the Supreme Court has not required government to compensate people for their losses. 

    At the same time that the Court has “blinked away” the individual rights of the American Constitution, judges have manufactured new rights of their own devising. In his view, these rights are fake news. In this way, members of the Supreme Court have acted not as judges, but as a “council of revision with a self-determined mandate.” What does Judge Ginsburg have in mind? His chief objection was to the right of privacy. It seemed clear that he rejected Roe v. Wade.

    But he went much further than that. He singled out the Court’s decision in 1965 in Griswold v. Connecticut, the foundation of modern privacy law. In that case, the Court struck down a law forbidding married people to use contraceptives. Judge Ginsburg objected that a judge “devoted to the Constitution as written might conclude that the document says nothing about the privacy of” married couples. The Griswold decision, he added, is “not an aberration.” It is matched by recent decisions holding that the Constitution imposes limits on capital punishment, such as its decision in 2002 striking down a death sentence imposed on an intellectually disabled defendant. 

    Judge Ginsburg’s narrative, then, is simple and straightforward. Until 1933 or so, the Court followed the Constitution. At that point, it adopted a “freewheeling style.” But Judge Ginsburg offered real hope for the future. In recent years, a small but growing group of scholars and judges have been calling for more fidelity to the constitutional text, focusing on the original meaning. “Like archeologists, legal and historical researchers have been rediscovering neglected clauses, dusting them off, and in some instances even imagining how they might be returned to active service.” 

    Judge Ginsburg’s leading example? The Second Amendment to the Constitution, which protects the right “to keep and bear arms.” Judge Ginsburg gave a strong signal that judges might well strike down gun control legislation. His exact words? “And now let the litigation begin.”

    Judge Ginsburg was speaking here of what he himself called the Constitution in Exile — the real Constitution, the one that should be restored. What made his argument so remarkable is that Judge Ginsburg was, and is, a responsible person with a first-rate intellect — and, in his judicial capacity, he displays a large measure of restraint. But in his speech twenty years ago, calling for radical changes in constitutional understandings, Judge Ginsburg was hardly speaking in a vacuum. On the contrary, he was summarizing a line of argument that such conservative luminaries as Robert Bork, Edwin Meese, and Antonin Scalia had been developing for decades. That line of argument had been embraced by many members of the Federalist Society and the Republican Party as well. “And now let the litigation begin” — that was their mantra.

    Judge Ginsburg set out a kind of Constitutional Wish List. The goal was to transform constitutional law, and to do so in major ways. For those on the right, the Constitutional Wish List included the following:

      A broad understanding of the individual right to possess guns.

      A rejection of Roe v. Wade.

      A rejection of the right to privacy in general.

      New limits on the power of modern administrative agencies, including the Environmental Protection Agency. 

      Dramatically strengthened property rights.

      Sharp reductions in Congress’ power under the Commerce Clause.

    In 2002, all this seemed unlikely in the extreme. Would the Supreme Court really be prepared to turn so many constitutional understandings upside down? Astonishing but true, we now have to put a checkmark next to each and every item on the list. They have all been achieved.

     Before 2008, the Supreme Court had rejected the idea that the Constitution creates an individual right to possess guns. Now the Court recognizes that right — and is steadily expanding it. Before 2022, Roe v. Wade was the law of the land. Now it is overruled. Before 2022, the right of privacy seemed firmly ingrained. Now it is gone. Until recently the Court had embraced, in ways large and small, the power of modern administrative agencies, including the Environmental Protection Agency. Now it has sharply limited that power. Just as Judge Ginsburg hoped, property rights have indeed been enhanced. The Court did uphold the Affordable Care Act, by a vote of 5-4, but in the process it announced new limits on Congress’ power under the Commerce Clause. And all this might be just the beginning. With respect to voting rights, freedom of speech, the rights of criminal defendants, freedom of religion, and much more, dramatic changes seem to be coming.

     There are two ways to understand the recent developments. The first, in the spirit of Judge Ginsburg’s argument, is jurisprudential. It insists that the Court is now being “faithful to the written Constitution” — that it is (finally!) following the Constitution “as written.” On this understanding, the Supreme Court has become “originalist,” which means that it is adhering to “the original public meaning” of the Constitution. If that is the right understanding, we need to ask a single question: Is originalism right?

    The second understanding is political. It is that the Court’s understanding of the Constitution is uncomfortably close to the political preferences of the current Republican Party. On that view, the Court is lawless. It is acting as a political body, even if it understands itself as being faithful to the written Constitution.

    Let us begin with originalism. What is it, and what does it entail? 

    That is a surprisingly hard question to answer. The term itself was coined in 1980 by the Stanford law professor Paul Brest, in a law review article that sketched what, in his view, were devastating objections to the whole idea. Brest meant to challenge a view about constitutional interpretation associated with Bork and Raoul Berger (a legal historian at Harvard) that was, at the time, a kind of fringe position, with little support even among right-of-center academics. (At the time, conservative scholars tended to argue more broadly in favor of “judicial restraint,” understood as respect for the decisions of the political process.) As a fringe position, originalism had little influence and political salience.

    What a difference forty years make! Originalism now comes in many shapes and sizes. It is used as a political rallying cry. It has been elaborated in great detail by a host of sophisticated law professors, among them Lawrence Solum and William Baude; law professors who embrace originalism disagree vigorously with one another about what originalism means and requires. Some originalists follow Ginsburg in emphasizing the intentions of the Constitution’s authors; others think that the search for the authors’ intentions is a fool’s errand. Some originalists think that it is important to respect precedents, even if those precedents are not originalist; other originalists think that respecting such precedents is entirely wrong and that the original understanding should trump the Supreme Court’s mistakes. Some originalists think there is a difference between “interpretation,” where judges must follow the original meaning, and “construction,” where judges have nothing to follow and must exercise discretion; other originalists reject this distinction and seem to be appalled by it.

    Amid all the debates, one variety of originalism now seems to be in the ascendancy. It is called “public meaning originalism.” Justices Thomas, Alito, Gorsuch, and Barrett seem committed to it, and Justice Kavanaugh seems to like it a lot. On this view, the Constitution must be interpreted in a way that fits with its original public meaning. That means that terms such as “freedom of speech,” “executive power,” “cruel and unusual punishment,” and “due process of law” must be understood not only in accordance with their semantic meaning, but also with the meaning that people would have given to them at the time of ratification in 1789. Interpretation, in this view, depends on an inquiry into history, not on any kind of moral judgment. As Richard Fallon puts it, public meaning originalists contend that the public meaning can be “discovered as a matter of historical and linguistic fact.” In Solum’s words, “the meaning of the constitutional text is a function of the conventional semantic meanings of the words and phrases as they are enriched and disambiguated by the public context of constitutional communication.” 

    Originalists are keenly aware that it is often hard to discover the original public meaning of words as they were used in the late eighteenth century. They know that reasonable people, including specialists, disagree on historical questions. They also know that unanticipated social changes can greatly complicate the search for historical answers. What is the original public meaning of “freedom of speech” as applied to radio and television? How should we understand protection against “unreasonable searches and seizures” as applied to the Internet? The most careful originalists do not ignore these questions. Still, they insist that if judges are originalists, many questions are easy. They add that when originalism leaves some questions open, or makes them really hard to answer, at least it provides the right orientation. 

     

    There is no doubt that if judges followed the original public meaning of the Constitution, constitutional law would be radically transformed. The national government would be permitted to discriminate on the basis of both race and sex. If the national government wanted to segregate people by race, it could almost certainly do that. The right to free speech would be greatly truncated. Blasphemy could probably be made a crime. States could probably allow public figures to recover huge sums of money for defamation. 

    The idea of one person, one vote would be out the window. If the federal government wanted to take away people’s Social Security benefits, or welfare benefits of various sorts, it might not have to give them any kind of hearing. Contrary to Judge Ginsburg’s view, protection of property rights would be reduced, not expanded: some of the most careful scholarly work suggests that according to the original public meaning, the Constitution protects only against physical invasions of property, and imposes no barrier to regulation that greatly diminishes the value of property. All this, by the way, is just the beginning of what would be possible.

             Originalists are acutely aware that their preferred method might lead to outcomes that many people would abhor, and they have a variety of responses. Some originalists insist on the importance of democracy and on the need to rely on democratic processes, not on courts. If originalism might allow government to do what some people consider to be terrible things — for example, to ban contraceptives or to sterilize people — originalists respond that in a self-governing society, the appropriate correctives come from We the People, not from unelected judges. Consider the case of abortion: originalists say that if the right to choose is to be protected, it must be because majorities want it to be.

    Other originalists emphasize the rule of stare decisis: judges should ordinarily respect their own precedents, even if they are wrong. True, the Court was willing to overrule Roe v. Wade, but even in doing so the Court proclaimed that other privacy rulings, including those that protect the right to use contraceptives, were not necessarily at risk. Still other originalists contend that the answers to the historical questions might not be so terrible. Many originalists are at pains to say that on their approach, states may not segregate school children by race. Some originalists contend that on originalist grounds a broad right to freedom of speech is secure.

    Most fundamentally, originalists argue that their approach is mandatory rather than optional. If it requires abhorrent conclusions, that is, in a sense, a sign of intellectual integrity, a badge of honor. In their view, originalism is the only legitimate approach to interpretation, and it is justified independently of the outcomes that it produces. 

    Each of these arguments must be addressed on its own terms. Democracy is fundamental of course, but is it really right to think that the scope of freedom of speech, racial equality, and personal privacy should be defined by political majorities? Would the United States have been better off if the Supreme Court in the twentieth century had limited these and other rights to the understandings of the eighteenth and nineteenth centuries? Consider these words from Justice Felix Frankfurter, from a memorandum that he wrote in 1953 for his files during the Supreme Court’s deliberations over the constitutionality of school segregation:

     

    But the equality of the laws . . . is not a fixed formula defined with finality at a particular time. It does not reflect, as a congealed summary, the social arrangements and beliefs of a particular epoch. It is addressed to the changes wrought by time and not merely the changes that are the consequences of physical development. Law must respond to transformations of views as well as that of outward circumstances. The effect of changes in men’s feelings for what is right and just is equally relevant in determining whether a discrimination denies the equal protection of the laws.

    Some originalists believe in respecting precedents, but many do not. Is it sufficient to say, on behalf of a theory of interpretation, that it would not do nearly as much damage as it might, because some judges are willing to ignore that theory?

    The most important argument about originalism is that it is mandatory. Many originalists seem to think that the very idea of interpretation requires their preferred approach. This is a colossal mistake. The Constitution does not contain instructions for its own interpretation. It does not have an Originalism Clause, directing judges to be originalists. Originalism is a choice. Whether it is the right choice must depend, inevitably, on whether it would make the American constitutional order better rather than worse. That is not a hard question. 

    In these circumstances, it is natural and fitting to wonder: what, exactly, have liberals been doing over the last few decades? For that matter, what have conservatives been doing, if they reject originalism or seek other paths? The short answer is: a lot. Like Paul Brest in 1980, many liberals have been vigorously attacking originalism, sometimes on the grounds that it is much squishier than it purports to be, sometimes on the grounds that it would lead to a host of intolerable results. 

    Liberals have also been developing their own theories of interpretation. For decades, Ronald Dworkin argued for “moral readings” of the Constitution, in which judges would infuse broad phrases with their preferred moral content. Some members of the Supreme Court, including Anthony Kennedy and Sonia Sotomayor, have seemed to agree with Dworkin; consider the Court’s decision to require states to recognize same-sex marriages. In 1980, John Hart Ely published Democracy and Distrust, which argued that judges should protect democracy itself, by safeguarding democratic processes and those who are at a particular disadvantage in them. Some members of the Court, including Ruth Bader Ginsburg and Stephen Breyer, have often seemed to agree with Ely. Left-of-center theorists and practitioners, such as Larry Kramer, the former dean of Stanford Law School, have developed other approaches as well, with occasional (and steadily mounting) enthusiasm for a more modest role for the Court, with an insistence that the justices are most likely to protect those who have the most power. But on the current Court, dominated by Republican appointees, it is not easy to find five votes in favor of positions associated with Dworkin, Ely, and Kramer.

              This point puts a bright spotlight on the elephant in the room: the relationship between constitutional law and political convictions. It would be a true miracle if originalism, properly applied, consistently led to outcomes favored by the extreme right-wing of the contemporary Republican Party. To update Ginsburg’s Wish List: robust gun rights, a ban on affirmative action, reduced voting rights, restrictions on campaign finance laws, no abortion rights, no privacy rights, strengthened property rights, sharp limits on the power of administrative agencies, greater protection of commercial advertising, no right to same-sex marriage, reduced rights for criminal defendants. What are the odds, really, that a particular method of interpretation, honestly applied, would always result in outcomes pleasing to one political side? On the Supreme Court, however, justices who favor originalism are drawn, time and time again, to rulings that belong on that particular Wish List.

    It is important to say that among law professors who are interested in originalism, we can find humility or uncertainty about what, exactly, the relevant history shows. And among law professors who are interested in originalism, we can sometimes find left-of-center conclusions — as in the view that the Equal Protection Clause requires the authorities to protect people of color every bit as well, and as much, as they protect white people. But there is no mistaking the fact that as it is being practiced by real judges, originalism is consistently producing conclusions that delight the political right.

             In these circumstances, it is fair to wonder whether the Supreme Court is doing law at all. 

     

    Digitization, Surveillance, Colonialism

             As I write these words, articles are mushrooming in newspapers and magazines about how privacy is more important than ever after the Supreme Court ruling that overturned the constitutional right to abortion in the United States. In anti-abortion states, browsing histories, text messages, location data, payment data, and information from period-tracking apps can all be used to prosecute both women seeking an abortion and anyone aiding them. The National Right to Life Committee recently published policy recommendations for anti-abortion states that include criminal penalties for people who provide information about self-managed abortions, whether over the phone or online. Women considering an abortion are often in distress, and now they cannot even reach out to friends or family without endangering themselves and others. 

    So far, Texas, Oklahoma, and Idaho have passed citizen-enforced abortion bans, according to which anyone can file a civil lawsuit to report an abortion and have the chance of winning at least ten thousand dollars. This is an incredible incentive to use personal data towards for-profit witch-hunting. Anyone can buy personal data from data brokers and fish for suspicious behavior. The surveillance machinery that we have built in the past two decades can now be put to use by authorities and vigilantes to criminalize pregnant women and their doctors, nurses, pharmacists, friends, and family. How productive.

    It is not true, however, that the overturning of Roe v. Wade has made privacy more important than ever. Rather, it has provided yet another illustration of why privacy has always been and always will be important. That it is happening in the United States is helpful, because human beings are prone to thinking that whatever happens “over there” (say, in China now, or in East Germany during the Cold War) to those “other people” doesn’t happen to us — until it does. 

    Privacy is important because it protects us from possible abuses of power. As long as human beings are human beings and organizations are organizations, abuses of power will be a constant temptation and threat. That is why it is supremely reckless to build a surveillance architecture. You never know when that data might be used against you — but you can be fairly confident that sooner or later it will be used against you. Collecting personal data might be convenient, but it is also a ticking bomb; it amounts to sensitive material waiting for the chance to turn into an instance of public shaming, extortion, persecution, discrimination, or identity theft. Do you think you have nothing to hide? So did many American women on June 24, only to realize that week that their period was late. You have plenty to hide — you just don’t know what it is yet and whom you should hide it from.

    In the digital age, the challenge of protecting privacy is more formidable than most people imagine — but it is nowhere near impossible, and every bit worth putting up a fight for, if you care about democracy or freedom. The challenge is this: the dogma of our time is to turn analog into digital, and as things stand today, digitization is tantamount to surveillance. 

    Behind the effort to digitize the world there is a corporate imperative for growth. Big tech companies want to keep growing, because businesses are rarely stable animals — companies that are not on their way up are usually on their way down. But they have been so successful and are so gigantic that it is not easy for big tech to find room to grow. Like Alice in Wonderland, trapped in the rabbit’s house after growing too big, tech companies have their arms and legs sticking out the windows and chimney of the house of democracy. One possibility for further growth is to attract new users. But how to find fresh blood when most adults with internet access worldwide are already your users? One option, which Facebook is unscrupulously pursuing, is to focus on younger and younger children. The new target group for the tech company is children between the ages of six and nine. This option is risky. There are several investigations into Facebook and Instagram for knowingly causing harm to minors. What, then, are the other options for the expanding behemoths? 

    The preferred option these days is to digitize more aspects of the world. Despite the rapid advancement of digital technologies, most of our reality is still analog, even after the onset of covid. Most of our shopping is offline. Most readers prefer paper books. Much of our homes, our clothes, many of our conversations, our perceptions, our thoughts, and our loved ones are analog. That is, most of our experience has not been translated into ones and zeroes, which are the building blocks of digital technology. Experience, almost by definition, is directly lived, unmediated by a screen.

    Tech giants wish to change all that. They share the desire to digitize the world because it is an easy way to gain more ground, to expand by enlarging the house. In this sense, digitization is the new colonialism. Digitization is the way to grow an empire in the twenty-first century. Everything analog is a potential resource — something that can be digitally conquered and converted into data and then traded, directly or indirectly. That is why Google keeps coming up with new products. Maps? Chrome? Android? Those were not designed for you. They are all different ways of collecting different data from you. That is why Facebook and Ray-Ban have together come out with new glasses that have microphones and a camera: more “data capture,” which in reality means the conquest of life by corporate avarice. That is why Apple is launching an augmented reality product, and why Microsoft is proposing a platform that creates three-dimensional avatars for more interactive meetings. And why Facebook — sorry, Meta — is insisting on its metaverse. 

    The tech titans assure us, of course, that their new inventions will respect our privacy. What they fail to mention is what I call the Iron Law of Digitization: to digitize is to surveil. There is no such thing as digitization without surveillance. The very act of turning what was not data into data is a form of surveillance. Digitizing involves creating a record, making things taggable and searchable. To digitize is to make trackable that which was beyond reach. And what is it to track if not to surveil?

    A good example of the close link between tracking and surveillance is the AirTag. In 2021, Apple launched the AirTag: a small coin-like device with a speaker, a Bluetooth antenna, and a battery, designed to help people keep track of their items. You can attach an AirTag to your keys and link it to your phone, and if you lose your keys, the device will ping Apple products around it and use Bluetooth to triangulate its location, which you can see on a map on your phone. The AirTag can also beep to let you know where it is.

    Keeping track of your keys seems innocent enough, but the AirTag is designed to track far more than keys. You can track a wallet instead of keys, or a purse — and not necessarily your purse. Privacy and security experts warned Apple that AirTags would be used for stalking. In response, Apple said it had implemented a notification feature that alerts people with iPhones if there is an AirTag following them. But this measure is insufficient in various ways. First, many people don’t have iPhones, and if you have an Android you have to download an app to be notified through your phone; the vast majority of people have not downloaded it and will likely not download it. You might think that the phone notification is not necessary, because AirTags are meant to start beeping at a random time between eight and twenty-four hours after they have been separated from their paired iPhones, but the beeping is so faint that people might not hear it. Moreover, eight hours is plenty of time for a stalker to follow and find his victim. Even if you have an iPhone, my own experience is that there is no guarantee that you will be notified about an AirTag that is tracking you. A few months ago my brother and I rented a car from a peer-to-peer network. After a few hours of driving the car, my brother’s iPhone notified him that there was an AirTag nearby. The owner of the car had placed it in a locked glove compartment. My iPhone, however, never notified me of the AirTag — even after having been near the car for more than twenty-four hours. We never heard any beeping.

    The New Jersey Regional Operations & Intelligence Center issued a warning to police that AirTags posed an “inherent threat to law enforcement,” as criminals could use them to identify officers’ “sensitive locations” and personal routines. One year after their launch, there were at least 150 police reports in the United States mentioning AirTags, and recently, one murder case. That might not seem like much, but cases are likely in the thousands, given how many people might not notice they are being tracked or might not report it to the police. Not that reporting it to the police is of great help. Police often don’t know what to do about it; sometimes they don’t even take a report, which leaves vulnerable people (women, most often) unprotected. 

    Stalking affects an estimated 7.5 million people in the United States every year and, not surprisingly, it is on the rise. Last year a study by the security company Norton found that “the number of devices reporting stalkerware on a daily basis increased markedly by 63% between September 2020 and May 2021.” We are producing more and more technology to track — of course stalking is on the rise! To expect anything different would be to engage in self-delusion. In the pre-internet age, it was expensive, effortful, and risky to spy on someone. Today, you can buy an AirTag for $29.

    What is most striking about the AirTag example is how foreseeable these issues were. It’s not that the AirTag was misused in any surprising or imaginative way. When an AirTag is used for stalking, it is being used exactly according to its design. Some dual uses of technology are surprising. Gunpowder was originally designed for medicinal purposes — who would have thought it might change war forever? But tracking technologies are designed to track — and tracking is surveillance, and surveillance amounts to control. Human beings are social beings, which means that most of the time what we are most interested in is other people. We should hardly be surprised when tracking technology is employed to track people, the most salient element of most people’s lives. AirTags are the tracking device par excellence. They are designed to track and to do nothing else. Yet smartphones, for all their many uses, are also tracking devices. Your phone can make calls and take photographs, but above all it collects information about you and others.

    Too many people enthusiastic about digital technology are under the impression, as convenient as it is misguided, that if people consent to data collection, and if the data processing happens within our own phone or computer, there is no problem with privacy. If only it were so simple. There are at least two reasons why there are still privacy issues when it comes to the collection of personal data in our devices.

    First, there is no informed consent in data collection. The consent we give is not truly consent, because it is not voluntary, and it is not informed, because no one has any idea where that data may end up and what inferences may be drawn from it in the future. We are forced into “consenting” because if we do not consent we cannot be full participants in our society. There is no leeway for negotiation in platforms’ “terms and conditions.” It’s their way or the highway, and their way can change at any time and without warning. But we could not give informed consent even if we had the chance, because data is so abstract and unpredictable in the kinds of uses it may have, and the kinds of inferences it will be able to produce, that not even data scientists can give informed consent. No one knows what consequences today’s data collection will have.

    Second, data creation is itself morally significant. The term “data collection” is somewhat misleading, in that it seems to suggest that to collect data is to assemble things that are already there. But data are not natural phenomena, like mushrooms that we find in the forest. We do not find data. We create data. Data collection implies data creation. And that act of creation is a morally significant decision, because data can be dangerous. Data can tell on us: whether we are thinking about changing jobs, whether we are planning to have children, whether we might be thinking of divorcing, whether we might be considering having an abortion. Data can harm people. For this reason, data creation carries with it a moral responsibility and a duty of care towards data subjects. 

    “What privacy problem can there be if the data is on the user’s encrypted phone?” a tech executive asked me once, assuming that users are in control of their phones, and ignoring the many examples that show otherwise. Our phones have a life of their own. They send data to third parties without us even realizing it, for starters. Every phone connected to the internet is hackable. Domestic abusers can take advantage of technologies to control their partners and their children. If an abuser forces you to share your password, the data that your phone has created without your asking it (where you have been, who you have called, etc.) can work against you. A border agent can ask you to unlock your device and can download your data. That can happen even if you are American, and even if it is your work phone, in which you have confidential professional data. The police can ask you to unlock your phone too. And who can guarantee that an insurer will not ask you for access to that data in the future? If you do a commercial DNA test, even if it was only for fun, you may be obligated to disclose it to your insurer. Can we be sure insurance companies will not ask for access to our smartwatches or smartphones some day? As soon as personal data has been created and stored, there is a privacy risk for the data subject, which then spills over into a risk to society.

    The risks to society are significant and varied. They go from national security (all that personal data can be used to extort public officials and military personnel, for instance) to threats to democracy, which will be my focus here.

    Just like the old colonialism, digitization carries with it a certain ideology that it seeks to impose. It comes with ideas of what progress looks like. Old colonialism imposed a certain language, etiquette, clothing, social institutions, and ways of life. New colonialism imposes code, exposure as etiquette, a weakening of old social institutions, and ways of life that lead to societies of control.

    Technology is never neutral. Tech companies find it convenient to present their products as neutral tools, but marketing bears little relation to truth. Artifacts inevitably embody values. We make artifacts so that they do something for us, and we wouldn’t bother making them if we did not value whatever it is that they do. Since technology is designed with a purpose in mind, artifacts end up having affordances. An affordance is what the artifact invites you to do. It is an implicit relationship between the designer and the user through the object designed. A chair affords sitting. We design things like buttons and handles to match our bodies, perceptive systems, and desires. A gun affords threatening, hurting, and possibly killing; it does not afford cooking. Pans and skillets afford cooking. Surveillance tools afford control; they afford the chance of keeping a close watch on something or someone. A camera allows you to watch anyone who appears within its view. And a camera is a tool for surveillance irrespective of whether the footage is encrypted and stored on your phone. This is not to imply that encryption is not important. It certainly is, because it adds much-needed security to sensitive data. But no amount of encryption will strip a camera of its affordance to surveil.

    Contemporary surveillance tools all too often are a double-sided mirror, which not only enables you to watch others but also enables others to watch you. They are often also camouflaged as some other kind of tool, like a phone or a TV. Before the age of the internet, surveillance tools were mostly one-directional. A Stasi agent monitoring a suspect in East Berlin through a wiretap could listen to her target without thereby opening the possibility of being wiretapped herself. But the internet allows information to flow in multiple directions. You might buy an Amazon Ring camera to watch whoever gets near your door, but that device allows Amazon (and your housemates) to learn things about you. It can track when you leave your home, and when you come home and with whom. It can also be used to inform the police (in some cases without your permission and even without a warrant). And anything that can be online is hackable, so you are also taking on the risk that criminals will access your footage, for example to figure out when you are away so that they can rob your home.

    Your Ring camera is not only surveilling you — it is also watching and listening to your neighbors. Amazon has recently rejected the request made by Senator Ed Markey that the company introduce privacy-preserving changes to its doorbell camera after product testing showed that Ring routinely records audio conversations happening as far away as the opposite sidewalk. Your neighbor could be recording the conversations that you have at your doorstep or driveway and could post them online. If you use a screen door and keep your front door open, a Ring device could be recording the conversations you have in your living room. The potential for blackmail, stalking, and public shaming is immense.

    Other surveillance tools are much less obvious than a camera. Take something like Alexa. It’s a speaker that plays music. It is a timer. It can read you the news. It can allow you to order all kinds of products. It doesn’t look or feel like a surveillance machine, but it is keeping a close watch on you. Amazon wants to turn Alexa into an appliance that can predict what you want. For it to accomplish its task, it has to know you very well. Alexa collects data from what you say and shares it with as many as forty-one advertising partners. If you have not opted out, human beings might be reviewing what you tell Alexa. And, sure, you can have your data periodically deleted and opt out of human review, but your data will still be used to train Alexa, whether you like it or not. 

    In more than one out of ten transcripts analyzed, Alexa “woke up” accidentally and recorded something surreptitiously. The same thing happens to other digital assistants. An Apple whistleblower confessed to having “heard people talking about their cancer, referring to dead relatives, religion, sexuality, pornography, politics, school, relationships, or drugs with no intention to activate Siri whatsoever.” The police might be interested in getting access to that data. Alexa recordings have already been used in all kinds of legal cases, from proving infidelity in a divorce case and identifying drug users in a household to providing evidence in murder cases. If the police can access recordings made by our devices at home, how is that different from having the police living under our roof? We would never be at ease having the police living in our homes, so why do we invite Alexa in? Aren’t we uncomfortably close to building a police state, or at the very least building the structure that could support an almost omniscient police state?

    In the 1990s, we owned the objects we bought. Today we still pay for our phones and doorbells, but they work for other people, and often against our own interests. And of course it’s not just AirTags, smart doorbells, smartphones, and Alexas. It’s your smartwatch, and your smart TV, and car, and electricity meter, and kettle, and laundry machine. Everything “smart” is a spy. And while every piece of data may seem uninteresting and innocuous, you would not believe how precise a picture emerges from joining the dots of all those data points. 

    Data creation and data collection will only increase if we continue the trend towards augmented and virtual reality. These technologies will want to collect much more data about everything, from your indoor spaces to the movement of your eyes. Eye-tracking technology will be crucial in creating a rich digital environment. It is likely that virtual reality will mimic human sight, which focuses on something and blurs the background. If everything is equally salient, it is harder to navigate your surroundings and you can easily get motion sickness. To simulate our natural visual experience by offering low-quality images in your peripheral vision and high-quality images on what you focus on, the tech needs to identify what you want to pay attention to. Eye-tracking is the most important source of information for that. Relatedly, eye-tracking can be used to increase the user’s ability to direct and control her experience.

             Unfortunately, your gaze can be incredibly revealing. Your eye movements, iris texture, and pupil size and reactions can inform others about your identity (through iris recognition), state of mind (e.g., if you are distracted), emotions (e.g., if you are afraid), cognitive abilities (based on factors like how long you look at something before acting), your likes and dislikes (including your sexual interests), your level of fatigue (through analyzing your blinking), whether you are intoxicated, and your health status (by looking for patterns of eye movements that might be symptomatic of problems such as Alzheimer’s or schizophrenia). Even if some of these inferences might be scientifically questionable, experience suggests that companies are likely to try their luck with them anyway.

    By creating and collecting so much personal data, we are making surveillance ever harder to avoid. Even if you leave your phone at home (I know, a big if), you might still get caught by surveillance through dozens of cameras as you go about your day. If we plaster our cities with sensors of various kinds, there is no opting out or escaping it. The danger is in the long term. Surveillance is a slow-acting poison. Its consequences are not immediately apparent. All of which leads to the surveillance delusion: the mistaken belief that surveillance has many advantages and no significant costs. For every individual decision, surveillance can seem like an attractive solution in the short term, when we imagine that all goes exactly as planned: it seems to keep us safer, it helps us track what we care about. But the long-term and systemic effects of surveillance are often overlooked. Under the surveillance delusion, only the benefits of surveillance are valued, and surveillance is understood to be a convenient solution to problems that could be solved through less intrusive means. But surveillance often creates weightier problems for democracy in the long run than the ones it solves.

    Democracy is a complex house with many pillars sustaining it, and it can crumble so slowly that we might not know immediately when we are undermining it. Journalism, for example, “the fourth estate,” has long been considered an important pillar of democracy. Citizens have to be sufficiently well informed about their society to be able to make autonomous democratic decisions, such as whom to vote for. When we reduce privacy, we weaken journalism. In July 2021, a leak revealed that more than fifty thousand human rights activists, academics, lawyers, and journalists around the world had been targeted by authoritarian governments using Pegasus, hacking software sold by the Israeli surveillance company NSO Group. It is probably not a coincidence that the most represented country among the people who were targeted with spyware, Mexico, is also the deadliest country in the world for journalists, accounting for almost a third of journalists killed worldwide in 2020. When journalists do not have privacy, they cannot keep themselves or their sources safe. As a result, people stop going to journalists to tell their stories, and journalists quit their jobs before they lose their lives, or they focus on safe stories, and investigative journalism slowly dies, thereby gravely hurting democracy.

    Some people think that if surveillance is done by corporations and not the government, the concern is lessened. Others think the opposite: that if surveillance is done by the government and only by the government, we will be safe. Both views are wrong: corporate surveillance is as dangerous as government surveillance and vice versa, and even peer-to-peer surveillance undermines ways of life that are supportive of freedom and democracy. 

    Giving too much personal data to governments will grant them too much power, which can support authoritarian tendencies. As I have argued, surveillance tools afford control, and when governments hold too much control over the population they become authoritarian. You might happen to trust your current government, but you cannot be sure that you will trust the next government. And you cannot be sure that a foreign power will not hack the data held by your government, or even invade your country. One of the first stops for the Nazis in a newly invaded city was the registry, because that is where the personal data that would lead them to the Jews was held. The best predictor that something will happen in the future is that it has already happened in the past, and personal data has already been used to perpetrate genocide. A contemporary Nazi regime with access to the kind of fine-grained data we are collecting would be nearly unstoppable. That alone makes surveillance reckless. China is using its surveillance apparatus against “enemies of the state”: from minorities such as the Uyghurs and the Tibetans to the defenders of democracy in Hong Kong. We must dismantle architectures of surveillance before they get used against us.

    Corporate surveillance is just as much of a problem. First, any data collected by companies can — and often does — end up in the hands of governments, whether through governments purchasing data, legitimately acquiring it (e.g., through a warrant or subpoena), or hacking it. In practical terms, corporate and government surveillance are indistinguishable. Moreover, corporations do not have our best interest at heart, and these days they are certainly not guardians of democracy or the common good. Thanks to corporate surveillance you can be unfairly discriminated against for a job, or insurance, or a loan. And personal data can be used to produce personalized propaganda, pit citizens against one another, and undermine civic friendship and democracy. Companies, after all, think of themselves as answerable only to shareholders. 

    Corporate surveillance is all the more worrying in the case of companies that can become more powerful than entire countries. Once again, this worry gives us reason to learn from old colonialism. At its height, the East India Company was the largest corporation in the world, and it had twice as many soldiers as the British army. Among its many sins were slave trafficking, facilitating the opium trade, exacerbating rural poverty and famine, and looting India. A senior official of the old Mughal regime in Bengal wrote in his diaries: “Indians were tortured to disclose their treasure; cities, towns and villages ransacked; jaghires and provinces purloined.” So it’s not only that powerful corporations can violate human rights. To some extent, they can also act like states when they are the protagonists of colonialism. As William Dalrymple puts it, 

     

    We still talk about the British conquering India, but that phrase disguises a more sinister reality. It was not the British government that seized India at the end of the 18th century, but a dangerously unregulated private company headquartered in one small office, five windows wide, in London, and managed in India by an unstable sociopath — Clive.

    Just as at the end of the eighteenth century, corporations are leading colonialism in the twenty-first century. This time round it is big tech doing the looting (of our privacy, at the very least). They are the entities setting the agenda and imposing a culture of exposure around the world. Big tech companies benefit from our spending as much time as possible on their devices and platforms, sharing as much personal data as possible — which is why they sell the idea of exposure as a virtue: tell us what you feel, where you go, what you eat, what you think about other people, what worries you, and how we can make money off you. And if you don’t want to tell? Well, that must be because you have something to hide, which in big-tech-speak is not about protecting yourself from wrongdoers but about being a wrongdoer yourself. Big tech colonialism shames us into exposure for profit, and in doing so it poisons the public sphere. 

    Cultures of exposure are another good example of how surveillance leads to control. The pressure to overshare encourages social vices such as stalking and witch-hunting. If everyone is pressured into exposing their opinions and habits, it is a matter of time before someone finds some of them objectionable and starts hunting people for their views. It is interesting how something that used to be regarded as inappropriate — exhibitionism — has now morphed into being considered a social imperative — transparency. Some measure of transparency is certainly appropriate when it comes to institutions — but not when applied to individual citizens. Both exhibitionism and social policing cause “either-you’re-with-us-or-against-us” mentalities and thereby jeopardize civic friendship. 

    Liberal democracies aim to allow as much freedom to citizens as possible while ensuring that the rights of all are respected. They enforce only the necessary limits so that citizens can pursue their ideal of the good life without interfering with one another. But for a liberal order to work, it is not only governments and corporations that have to give citizens a space free from unnecessary invasions; citizens have to let one another be as well. Civility requires that citizens exercise restraint in the public sphere, especially regarding what we think of one another. To expect people to be saints is unreasonable. “Everyone is entitled to commit murder in the imagination once in a while,” as Thomas Nagel has remarked. If we push people to share more than they otherwise would, we will end up with a more toxic environment than if we encourage people to edit or curate or limit what they bring into the public sphere. A culture of exposure invites us to share our imaginary acts of murder, needlessly pitting us against each other. Sparing each other from our less palatable facets is not a vice, but a virtue. Protecting privacy — our own and that of others — is a civic duty.

    Totalitarian societies tend to match institutional surveillance with peer-to-peer surveillance to achieve near-total control of the population. During China’s Cultural Revolution, people were encouraged to denounce their neighbors and even their family members. Children sent their parents to their deaths. The same thing happened in Stalin’s Soviet Union. The East German Stasi used an astonishingly high number of informants to infiltrate the general population. When we use social media for trolling, witch-hunting, and publicly shaming others, we behave more like subjects of totalitarian states than as citizens of free societies. 

    We resist the colonialism of digitization partly through culture. We defy digital colonialism when we value the analog, the unrecorded, the untracked. Tibetan Buddhist monks have a tradition of spending days creating beautifully intricate mandalas using colored sand. When they finish their work of art, they sweep it all away in a ceremony. The sand is collected in a jar which is wrapped in silk and taken to a river, where it is scattered. Sand mandalas are a homage to impermanence. Unlike paintings, which strive to resist the passage of time, sand mandalas are there to remind us that there is beauty in the ephemeral. 

    We challenge digital colonialism when we enjoy life without wanting to freeze it into a photograph. We resist totalitarianism when we decline to publicly shame someone for a mistake that anyone could have made. We preserve intimacy when we allow a conversation to go unrecorded. We stand up for democracy when we buy a paper book at a bookshop using cash. 

    Yet culture is not enough. We also need the right technology. Architectures of surveillance afford control over the population. Our current technology — all of it the result of engineering and corporate decisions, and none of it inevitable in its present configurations — is priming society for an authoritarian takeover. Analog technology is more respectful of citizens. We could also make digital technology less intrusive by creating and collecting less personal data, by periodically deleting data, and by improving our cybersecurity standards. In a global context in which a country such as China is exporting surveillance equipment to around one hundred and fifty countries, the job of liberal democracies is to be a counterweight to that authoritarian influence by exporting privacy through culture, technology, and legal standards.

    We need the right regulation to match culture and technology, because collective action problems can only be solved through collective action responses. For starters, we should ban the sale of personal data. As long as personal data can be bought and sold, companies will not resist the double temptation of creating and collecting as much of it as possible, and then selling it to the highest bidder. The trade in personal data is jeopardizing democracy through personalized propaganda. We do not sell votes, and for many of the same reasons we should not sell personal data. 

    We should also limit the purview of the digital. Asking technology companies not to digitize the world is like asking builders to please refrain from paving over natural spaces. Unless society sets legal limits, profit-seeking will reign. Corporations will sell out our democracies if it is lucrative enough and we let them. Governments create protected areas to restrain the impulse to build over every square inch. We need similar protected areas from surveillance. It is in the very nature of big tech to turn the analog into digital, but turning everything into a spy is a threat to freedom and democracy. Full digitization equals total surveillance. There is some data that it is better not to create. There is some information that it is better not to store. There are some experiences that are better left unrecorded. 

    Just over a decade ago, enjoying digital technology was a luxury. Increasingly, luxury is being able to enjoy space and time away from digital technology. Spaces that are free of digital technology stimulate deeper connections between people, more honest conversations, free experimentation, the enjoyment of nature, being grounded in our embodiment, and embracing lived experience. That is why Silicon Valley elites are raising their children without screens. 

    We need urgently to defend the analog world for everyone. If we let virtual reality proliferate without limits, surveillance will be equally limitless. If we do not set some ground rules now on what should not be digitized and augmented, then virtual reality will steamroll privacy, and with it, healthy democracies, freedom, and well-being. It is close to midnight. 

     

    The Autocrat’s War

    The Emperor Nicholas was alone in his accustomed writing-room in the Palace of Czarskoe Selo, when he came to the resolve. He took no counsel. He rang a bell. Presently an officer of his Staff stood before him. To him he gave his orders for the occupation of [the Danubian] Principalities. Afterwards he told Count Orloff what he had done. Count Orloff became grave, and said, “This is war.” 

    Alexander William Kinglake

    The Invasion of the Crimea, 1863 

    Alexander William Kinglake, the nineteenth-century British travel writer and historian who published a history of the Crimean War in eight volumes, could hardly have known how and in what surroundings Nicholas I made the fateful decision that caused the declaration of war by the Ottomans. In the imagination of nineteenth-century historians and writers, wars were the products of high politics, and the Crimean War, which ended in one of the most senseless, ridiculous, and tragic defeats in Russian history, was commonly blamed on the Russian tsar and his abysmal vanity, arrogance, religious fanaticism, and nationalism. Court historiographers spilled a lot of ink trying to exonerate Nicholas I and shift the blame for launching the bloody war onto Russia’s treacherous allies and insidious rivals.

    It is therefore even more surprising that Nikolai Chernyshevsky — Russia’s first revolutionary democrat, who apparently read Kinglake’s volume in his prison cell at the Peter and Paul Fortress in 1863 — also thought that the tsar was not the guilty party: “Who shed these rivers of blood? … Who? Oh, if only conscience and facts had allowed us to think ‘the late sovereign,’ how good this would have been! The late tsar is long dead, and we would not have to worry about Russia’s future…. But, my dear reader, neither the dead tsar nor the government is guilty of the Sevastopol war.” 

    According to Chernyshevsky, the main suspect was the Russian educated “public,” which had laid the blame on the dead tsar and lived on without punishment or remorse: “The public is immortal; it does not resign, and there is no hope that this persona that caused the Crimean war ceases to represent the Russian nation and to have great influence upon its fate.” With scant regard for the greatness of Russian poets and writers, including Pushkin, Chernyshevsky blamed them for impressing on the minds of light-minded Russians fantasies of taking control of Constantinople and beating the Ottomans on their own land.

    Nobody, for sure, wanted the war, and only when they kissed their loved ones farewell did the same people who had carelessly joked about the “Russian Bosporus” understand what the war was about. Russia suffered a humiliating defeat, senselessly wasting thousands of lives and millions of rubles. Yet the horrors of the Crimean war, even if only seen through the eyes of Russian soldiers and not their Turkish (or British and French) counterparts, were soon forgotten. 

    Not long after the shameful debacle, the government approved the establishment of a “Slavic committee” in Moscow that aimed to “prevent” and anticipate Western influence upon the Southern Slavs of the Ottoman Empire. Twenty years later, Nicholas’ son Alexander II waged another war against the Turks, claiming to protect the Christian population of the Ottoman Empire. The second Eastern war in 1877–1878 was a military success, but most importantly, it was a propagandistic triumph that took off the table the question of responsibility for another imperialist adventure. Clearly the government had learned the lesson of the Crimean embarrassment: dealing with the questions of causality and responsibility had to be an integral part of the war effort and strategy.

    The catastrophic war against Ukraine that started in 2014 and entered a bloodier phase in February 2022 has already produced heated debates about its causes. The question of whether this is “Putin’s war,” or “Russia’s war,” or “the Russians’ war” echoes Chernyshevsky’s dilemma, but the answers, usually emotional and spontaneous, express the incomprehensibility of violence rather than a serious attempt to understand the roots of the disaster. Writers habitually compare Putin’s Russia to Hitler’s Germany, drawing parallels between the lethargic character of the Germans’ denial of Nazi crimes and the Russian public’s support of the war in Ukraine. While this comparison points to a plausible diagnosis — a peculiar intellectual antibiosis of society — the causes of the disease in its respective settings are most likely different. In any case, current debates about whom to blame often simplify the issue, operating with imprecise categories and ignoring the context. Scholarly analysis will have to frame the problem more broadly and more sharply, considering the role and responsibility of the autocrat and the ruling clique not only in waging the war but also in turning the majority of the population into their supporters and accomplices. 

             While a cold and dispassionate analysis of the genesis of the current war may seem improbable at the moment, there is one thing that we can do: look back at past conflicts and analyze how Russia’s wars usually began. This comparison suggests that the formulas discussed above — “one man’s war” or a “nation’s war” — are themselves the products of the rhetorical attempts to either celebrate or exonerate rulers and to shift responsibility for waging the conflicts, either successful or failed, onto society. Wars belong to a particular category of events that are always shrouded in mythology: state propaganda doubles its efforts when it deals with armed conflicts. In the panoply of myths, one persistent trope stands out. It describes the archetypal scenario of a war’s outset; and Russia’s failed wars were not only those that Russia lost militarily, but also those that did not follow the prescribed scenario, the ones that laid bare the ruler’s personal role. To deal with the problem of causality and responsibility, however, it is important to distinguish the rituals of launching wars from the actual political mechanisms of their enactment. 

    As the war in Ukraine grinds on, it is illuminating to consider the precedents of Russia’s imperial wars of the nineteenth and early twentieth centuries, so as to trace how the wars began, how those beginnings were described, and what those beginnings tell us about the range of responsibility for unleashing violence. Despite the distance in time, the comparison between the politics of war in imperial Russia and in contemporary Russia is useful and legitimate: as Putin’s persistent references to the Russian imperial legacy demonstrate, he intentionally and unintentionally emulates the old mechanisms of autocratic governance. Wars, and not domestic reforms, however “great” they may have been, represented the main mechanism of legitimation in autocracies. Almost all the rulers of the Romanov dynasty fought at least one war during their rule. It is reasonable, therefore, to suggest that autocracies do not merely share a general inclination toward violence, but also display similar mechanisms of geopolitical decision-making. At the outset of war, the key moment of every monarch’s rule, an autocrat claims a complete authority that in peaceful times may appear limited and constrained.

             This complete authority, the way in which war is used to strengthen dictatorial power, may not be put fully on display. To justify war, an autocrat may cite an alleged provocation from below or a popular demand to which he responds. He may shift the burden of responsibility for human losses onto his advisers while accruing to himself the political benefits of victories. For this reason, the real mechanisms of war politics should be critically examined. And there is the additional question of the role of society. Does it bear responsibility for the violence, as Chernyshevsky thought? Does society have agency in an autocratic state, and does the autocrat take “public opinion” into account? Additionally, is collective responsibility a useful category, or should only individual perpetrators or groups and organizations take the stand in the makeshift court of history? 

    Let us begin with the role of the autocrat. In Timothy Frye’s wise and counter-intuitive words, “Recognizing Putin as an autocrat … brings into sharp focus the inherent limits of his power that are common to autocratic rule.” Historians of Russian autocracy concur with this observation: at no point in Russian history, they say, was a Muscovite tsar, an empress, or an emperor fully “autocratic” in making their choices and decisions. Boyar clans, unofficial parties at court, groups of ministers, court favorites, and lobbyists all worked toward forming the sovereign’s will and making him or her deliver the right decision at the right moment. Mikhail Dolbilov describes this process as “divining,” that is, “constructing” the ruler’s will and couching it in the language of laws and orders. 

    The interactions between the tsars and the advisers, however, were never one-sided: monarchs manipulated people masterfully, exploiting contradictions and conflicts among their favorites and courtiers, artificially sharpening disagreements, and shifting moral and political responsibility for crucial decisions onto representatives of the elites. In addition to these informal networks of power, autocrats also relied on a variety of political bodies. In both monarchical Russia and Putin’s autocracy, legislative chambers and political offices have existed mainly to legitimize the rulers’ decisions and to bind political elites by shared responsibility. There is also the class of technocrats and bureaucrats who bear the burden of governance and execute the monarch’s orders. We may conclude, therefore, that “autocratic will” is a complex set of mechanisms based on the preponderance of informal practices, customs, and rituals over rules and laws.

    But when it comes to wars, the traditional rituals and practices of decision-making prove moot. The role of government usually recedes into the background, and the autocrat surrounds himself with unofficial advisers, often shifting gears on the fly, dismissing trusted politicians and bringing forward new people and favorites from the inner circle. Such famous reformist bureaucrats as Mikhail Speransky and Sergei Witte both lost their leading positions on the eve of wars, in 1812 and 1902 respectively. Speransky’s fall was staged as a tragedy: sending off his State Secretary to Siberia, Alexander I cried and lamented that he was sacrificing his adviser for the safety of the empire in view of Napoleon’s imminent invasion. The replacement of Speransky with nationalist conservative politicians represented a part of the pre-war drama, but in reality it reflected the tsar’s efforts to strengthen his absolute authority. 

    Witte’s story is also remarkable: a powerful minister of finance and de facto first person in the imperial government, he lost the political battle against a handful of unofficial advisers to the tsar, who pushed the emperor toward a more aggressive policy in the Far East that ultimately led to a war with Japan in 1904–1905. Describing this episode in his memoir, Witte portrayed the poor gullible tsar as a weak-willed child, easily manipulated by a group of unscrupulous and militant politicians. The story, fairly accurate in details, nevertheless looks like the traditional scenario about wars’ beginnings that features competition between pro-war and anti-war factions at court fighting for the tsar’s attention. In these competitions, the only winner was usually the monarch: launching wars was a way to get rid of importunate reformers, to consolidate supporters, to shake up the political establishment, and to refresh the absoluteness of the tsar’s authority. In the case of the Russo-Japanese War, the trick failed, leading to revolution and the constitutional reform of 1905–1906 that stripped the tsar of some monarchical prerogatives.

    Until the end of the tsarist regime, war and foreign policy remained within the protected sphere of the tsar’s personal rule. The narratives of the wars’ origins, considered apart from the long preamble of diplomatic negotiations, were consequently staged as the dramas of the tsar’s choice between different camps, actors, and opinions. Even though unleashing the war was always the tsar’s personal choice and decision, the rhetoric and rituals of war dramas required the presence of others — noble defenders of the empire’s honor, faint-hearted bureaucrats, or evil instigators of violence. The scenarios of wars were designed in such a way that the autocrat was always at the center — and yet never alone. A typical plot of a “good” war as portrayed in the official myths always included 1) attempts at reconciliation and the ruler’s patient search for peace; 2) the people’s demand, and the advisers’ suggestion, to act more decisively; 3) the tsar’s reluctance to shed the blood of his soldiers; and, ultimately, 4) his determination to make the sacrifice for the sake of the empire’s honor and peace. 

    It is important to keep in mind, however, that the conventional plot of the war drama differed from the real politics of autocratic decision-making. Consider the example of the Russo-Turkish war of 1877–1878. Although Putin has never referred to it (perhaps because Turkey remains one of Russia’s somewhat infidel allies), the official narrative of that war, as well as the model of interactions between its political actors, eerily resembles the situation on the eve of Russia’s invasion of Ukraine this year. The Russo-Turkish war is usually portrayed as a war for the liberation of the Slavs of the Ottoman Empire, a reaction to Turkish atrocities in Bulgaria and Herzegovina. According to the traditional narrative, Alexander II reluctantly agreed to step in after Russia’s diplomatic efforts to resolve the Eastern crisis had brought no results, while a collective “Europe” demonstrated a cold indifference to the fate of Christians in the Ottoman Empire. The lofty rhetoric of liberation was meant to hide the fact that Russia was ultimately the aggressor; and although it did not plan to incorporate Slavic lands into its territorial domain — it “only” wanted to create dependent satellite states in the Balkans — Russia ended up seizing a portion of Ottoman territory in Eastern Anatolia. 

    The official narrative of the outbreak of the Russo-Turkish war resembles the libretto of a nineteenth-century opera with a plot developed on two levels — the crowd scenes (the Russian public cheering on the Slavs) and the main drama at the tsar’s court and within the imperial family. The crowd is stirred by the news about the Turks’ atrocities and it demands justice; and promptly produced paintings of pale-skinned Slavic women tortured by dark-skinned barbarous Turks provided the perfect scenery. Troops of volunteers march to the Balkans, and peasants and poor folk send in their modest donations to help their Slavic brethren. Meanwhile, in the imperial palace, the tsar is tragically torn between his human compassion and his duty as the Russian monarch to put the interests of his people above all. At court, there are two forces pulling in different directions: one, exemplified by the tsar’s top bureaucrats, argues for caution and restraint; another, represented by the Empress Maria Aleksandrovna and the heir to the throne, the future Alexander III, is fully on the side of the bellicose public and the champions of the Slavs. The defense minister Dmitry Miliutin listens to the “outpourings” of the sovereign’s heart and records them in his diary. The tsar is sad and alone. His “hollow cheeks” and his eyes swollen with tears betray his sufferings; his health is deteriorating. He stoically withstands unfair criticism for indecisiveness and passivity, yet he is tormented by doubts. The tsar feels for the poor Slavs, yet he knows that the blame for the losses and the casualties of war will always “fall on those who make the first step.” And the poignancy of the tsar’s dilemma contrasts with the coldness of Russia’s European counterparts, especially Austria-Hungary and Britain, who cynically pursue their political interests, feigning support for the Balkan Slavs. And after a few months of honest attempts to make the Turks change their policy, Alexander II concludes that Russia cannot avoid the war and resolves to act.

    The Russo-Turkish war became a turning point in Russian politics, marking the end of the era of the Great Reforms and prompting the reorientation of Russian domestic and international policies. Even if Alexander’s trepidations were sincere and he came to believe in Russia’s mission to liberate the Slavs, there is no doubt that he used the split among the elites to his political advantage and manipulated the groups at court as well as his family. Wars are almost never the outcomes of external factors alone: to understand their sources, one must also look inside and analyze the domestic tensions among the ruler, the elites, and the interest groups. 

    When it comes to the current war in Ukraine, we do not yet have the luxury of first-hand accounts, but political analysts and intelligence reports suggest that Putin, just like Nicholas I, made this decision in solitude. Putin turned his obsession with Ukraine’s resilience in the face of Russia’s pressure into a state matter, a creed that keeps his close friends and allies together. Little is known about Putin’s inner circle; but the public appearances — and disappearances — of certain statesmen and politicians allow us to deduce that since the beginning of the invasion in February the narrow group of trusted friends and advisers has become smaller and tighter, while the role of technocrats has become entirely subsidiary. The government’s influence has been significantly reduced, and the role of the Security Council, chaired by the president but unofficially led by Nikolai Patrushev, Putin’s old friend and a former head of the FSB, has increased. All those who remained in power were compelled to publicly express their support of the “special operation in Ukraine.” 

    In this case, as in multiple episodes from the war history of the Russian Empire, the decision came from the autocrat who, as the ritual prescribes, solicited advice from the people and the elites. The meeting of the Security Council, broadcast on Russian state television, showed a handful of top officials who, visibly shaken and in trembling voices, gave their consent to the invasion. Yet if we look beyond the ritual, wars in autocracies are always the ruler’s wars. When it comes to the decision to fight a war, the “inherently limited” power of autocrats becomes, in fact, unlimited. Wars represent a way to build and maintain autocracies, even if they can also lead to their collapse.

    Let us now return to Chernyshevsky’s question: does a people, or just the educated part of it known as “the public,” bear responsibility for unleashing the war? In the aftermath of the Crimean War, people considered themselves the victims of Nicholas I’s regime. Yet in 1877 the situation looked different. The second Eastern war was portrayed as a war by popular demand that was almost forcibly imposed on the tsar. True, the ideas of cultural and political patronage over Balkan Slavs in the 1870s had gained popularity in Russian society. The so-called Slavic Committees in Moscow, St. Petersburg, and provincial cities initially focused on strengthening cultural unity and on humanitarian help, but after the suppression of the rebellion in Herzegovina in 1875 they switched to more active support of the “insurgents,” sending supplies and recruiting volunteers to fight for the freedom of Slavic “brethren.” The flip side of this activity was, indeed, the rise of anti-Turkish sentiments. The government publicly demonstrated its neutrality and non-involvement; it also quietly tried to lean on the Slavic committees and to channel the outpouring of pro-Slav emotions in the right direction.

    But did these pan-Slavic circles — allegedly grassroots organizations supported and patronized by the ruling elite — represent “society”? A closer look at Russia’s political landscape of the 1870s shows that it is almost impossible to draw a line separating the “state” from the “public.” Russia did not have legal political parties until 1905, and the public sphere was closely policed by the government. As a result, a handful of conservative journalists and writers — Mikhail Katkov, Vladimir Meshcherskii, Ivan Aksakov — dominated the public mind, controlled the flows of information, and formed the language of public debates. Their influence was not, however, limited to the public. Katkov and Meshcherskii were privy to ruling circles: along with the tsar and other members of the elite, they were the main stakeholders in the campaign against the Ottomans. In contrast, liberal and democratic proponents of the Slavic cause were repressed, silenced, and exiled. The sad irony of the pro-Slavic campaign of 1876 lay in the fact that in the same year when the tsar resolved to support the autonomy of Slavs in the Ottoman Empire, he signed the infamous Ems edict prohibiting both the publication of books and theatrical productions in the Ukrainian language, scornfully called a “dialect” in this law. As the Ukrainian historian and politician Mykhailo Drahomanov long ago pointed out, the “liberation” of Ottoman Slavs by the anti-liberal Russian Empire, where Slavic peoples, including Ukrainians and Poles, were deprived of even basic elements of autonomy, was a misnomer. Moreover, as Drahomanov observed, all talk about public initiative in support of the Slavs made no sense: the “unofficial Russia” that championed the campaign was indistinguishable from the “official” one. 

    Drahomanov’s words could easily be used to describe the situation in contemporary Russia. Those who are now allowed to speak on behalf of society are closely linked to the state; those who disagree with the state’s policy have been silenced and jailed, or they have had to emigrate or go underground. Most of the millions of people who support Putin and his plans of imperial revival know little about the world outside Russia, or even outside their town; they have been raised on state propaganda and are unwilling to question the veracity of the myths that it produces. They are excited about military victories because different ideas have never been inculcated in their minds, whether by the schools or by the Orthodox Church. Many of them live in misery and abandonment, and they seek emotional comfort not in kindness and compassion but in an illusory victory in a “special operation.”

    Wars have always been portrayed as the moments of unification between the autocrat and the masses — a kind of political communion, a shared national epiphany. The war consensus transcends the bureaucratic buffer that, in peaceful times, stands between the ruler and his subjects. When, in the 1870s, nationalists celebrated these consummations of unity, others saw the attempt to drag simple folks into war politics as cynical and dangerous. As Prince Petr Viazemskii remarked, “The people cannot wish the war but inadvertently push toward it … The government silently lures the people into this political chaos, and they may pay for it dearly.” Putin justified the invasion by the sufferings of the Russian-speaking population in eastern Ukraine, which was allegedly vying for autonomy and aspiring to strengthen ties with Russia. The orchestrated pro-Russian demonstrations and marches in Donetsk and Luhansk replicated almost verbatim the process of building up pro-war, pro-Slav, and anti-Turkish sentiments in the 1870s, as did the public euphoria in Russia in response to the annexation of Crimea in 2014. In lieu of the “barbarous” Turks, there are the Ukrainian “Nazis,” who, according to Putin, tormented the population in eastern Ukraine. The people — duped by state propaganda — may express nationalist sentiments, but the autocrat never really takes them into consideration when he gives the order to attack. 

    It is important, therefore, to make a distinction between the rhetorical references to public support and the reality of the decision-making process. The historian David McDonald, commenting on certain assumptions about the state’s responsiveness to nationalist sentiments in pre-revolutionary Russia, has rightly observed that these assumptions “neglect the finer mechanisms of causation and overlook the fact that imperial statesmen were highly reluctant to cede any voice at all to society in matters of foreign policy. While public opinion played an episodic role in the discussion of foreign policy, as a state matter, such issues could be considered only by professional officials responsible to His Imperial Majesty.” In autocratic orders, the notion of a war by popular demand is nonsense. 

    The ruler, in other words, does not care what an average Russian, or Russian society as a whole, thinks about Ukraine or the Ottoman Empire. Although a significant part of the Russian population supports the war now, it did not cause the war, and the majority had been opposed to the idea of armed conflict before the invasion. Of course, there were Russian writers, most notably Dostoyevsky, who penned pan-Slavic articles, and journalists who created the racist images of Turks, and there have been Russian politicians and intellectuals who have haughtily refused to recognize the cultural and political sovereignty of Ukraine; and they are all responsible for endorsing violence. Every soldier who has pulled a trigger, launched a missile, or thrown a grenade is complicit; every governor or theater director who has voiced support for the “special operation” bears the guilt for the lost lives of innocent Ukrainians. But all the individual responsibilities for these actions do not add up to the collective responsibility of “the Russians.” The notion of collective responsibility allows war criminals and the outspoken supporters of violence to escape judgment. The “responsibility of nations” often means no one’s guilt. 

    Does this suggest that Chernyshevsky was wrong in blaming “the public” and not the tsar for the horrors of the Crimean War? Not exactly. He was right in predicting that Russian educated society would fail to comprehend the simple thought that any war, victorious or not, is hideous, that war can be a source of glory and dignity neither for a man nor for an empire. This thought remained alien to the Russian nobility, which continued to seek honor on the battlefield, and, with the exception of Tolstoy, the holy fool of Russian literature, the thought did not find expression in literary works. It is therefore very important to understand how and why a hostility to war, or an aversion to it, failed to develop in a country where every single family has lost at least one member in one of the many wars fought in the last hundred years. Chernyshevsky was also right in pointing out the cultural hauteur of the Russian literary elite who inculcated in their “public” a sense of imperial superiority — over Turks, Europe, Ukrainians, and others. This sense of superiority now fuels support for the Ukraine war among contemporary Russians. As many commentators have already pointed out, Russia has as yet failed to go through the process of de-imperialization and reckoning with its imperial (pre-revolutionary and Soviet) past. 

    Another theme that emerges with surprising persistence in the “who is to blame?” debates concerns the responsibility of the West for “provoking” Putin. The West must not repent for offending Putin and injuring his self-esteem, because to do so would only play into the autocrat’s hands. Putin’s propaganda openly justifies the aggression by alleging the hostility of Western powers who have turned Ukraine into a playground for their military operations against Russia. The motif of this putative Western threat is another cliché, copy-pasted from a typical scenario of Russian imperial warfare, in which almost every war, wherever it took place, was seen as a war against a collective “West.” Putin’s remarks last June about the world order as divided into two camps, namely sovereign states and their colonies, and his attempts to present this war as Russia’s defense of its sovereignty against the West, repeat almost verbatim the ideas of Russian nineteenth-century nationalists.

    Mikhail Katkov, one of the main proponents of war against the Ottomans, thought that only by isolating itself from the West could Russia avoid the sad fate of falling into economic and political dependence on Europe. Isolationism — the rejection of shared cultural, legal, financial, and political standards and values — appeared to be a way of regaining and strengthening Russian independence. Indeed, the war with Turkey in 1877-1878 eventually turned into a civilizational clash with Europe and ended two decades of Russia’s fine attempts at Westernization and reforms. The most visible manifestation of Russia’s anti-Westernism was the reactionary reign of Alexander III, with his official Slavophilism and imperialist policy. For its part, the de-Westernization of Russia in 2022 will be remembered for the disappearance of McDonald’s, the empty shopping malls, and the shortage of imported consumer goods — but there have been less visible and more profound changes in the systems of education, industry, and finance. Russian universities and academic institutions have been cut off from networks of international cooperation, investors have walked away from the country’s economy, and Russian producers have to learn from scratch how to replace imported parts and machines.

    The Russian invasion of Ukraine seemed improbable until the last moment, because it defied rationality and threatened to ruin the Russian economy and inflict unthinkable losses on the Russian population. Yet its economic irrationality has been twisted by the autocrat to prove the unselfishness of the war’s goals, and to demonstrate Russia’s uniqueness and difference from the obnoxiously pragmatic and materialistic West. Prince Dmitrii Obolenskii expressed this mood on the eve of the Russo-Turkish war: “I know that we have no money. I know that the generals are bad…. But this does not matter, because the main question is, What are we?” As in 1877, Russian authorities in 2022 high-mindedly boast about their altruism, although the main burden of war, as always, falls on the shoulders of the poor. No one can predict the human and material costs — for Russia, Ukraine, and the entire world — of the current war, but we must make sure that this accounting is made and all the losses are tallied, and that the people who inflicted the losses bear the responsibility.

    The invasion of Ukraine has had profound effects not only on the physical dimensions of people’s existence; it has also changed the way they experience time, place, and history. Historical planes have shifted, dumping Russia into a temporal pit without a future and with a questionable past. Putin, who directs this bloody drama, suspends the historical specificity of these events by constantly referring to his crowned predecessors and following the imperial scenarios of war, as if an atemporal pattern, a Russian destiny, were simply being re-enacted. This suspension of time is not accidental — for Putin’s regime, war has turned into a mode of existence, an endless present, an eschatological battle without a strategy or a timeline. Some of Putin’s critics have inadvertently fallen into his trap, mistaking his rhetoric for reality. Instead of studying Russia’s imperial past to understand the precise mechanisms of autocratic power and thereby untangle the jumble and mess of Putin’s ideas, they look back to the past in order to revert to meta-historical stereotypes and clichés with which to judge and accuse. The discourse about “the Russians’ war” is often built on poorly understood historical parallels and assumptions regarding Russians’ genetic propensity for violence and their inability to develop an inner sense of freedom. This invidious essentializing is the mirror image of the pro-war Slavophile nonsense about the mystical singularity of Holy Russia.

    The analysis of the causes of this terrible war should look beyond the rhetorical fog of Putin’s propaganda and include the serious treatment of the politics of war and the structure — the logic — of autocratic power. At what point, and why, does an autocrat resolve to initiate a war? Which elements and factors in the internal dynamics of an autocratic system trigger aggression? Why do the mechanisms of restraint not work? We must also begin a careful historical inquiry into how (and whether) Russian society has dealt with the problems of violence and responsibility. Ritual repentance on Facebook pages on behalf of the Russian nation will remain useless until we understand the actual causes of war. And when the time comes, the people responsible for the horrors of the current war will (I hope) face judgment, and courts will establish the guilt of individuals complicit in encouraging, supporting, financing, or justifying the war. There is a significant nexus, analytical and moral, between causality and culpability.