Problems and Struggles

    “So Socrates!” he teased, “you are still saying the same things I heard you say long ago.” Socrates replied: “It is more terrifying than that: not only am I always saying the same things, but also about the same things.”

                      Xenophon, Memorabilia, IV.4.6

                              (translated by Jonathan Lear)

    In the plenitude of discouragements that is contemporary history, the one that perhaps stings me the most is my increasing despair about the possibility of persuasion. Who changes their mind anymore? What is the difference between an open society that is intellectually petrified and a closed society? In a democratic society, which governs itself by exchanges and tabulations of opinion, surely the first requirement of meaningful citizenship is receptivity. Thoughtlessness is a betrayal of democracy. Mill said that democracy is “government by discussion.” The purpose of discussion is to test the merit of opinions with the presumption that one may convince others, or become convinced by others, of new views. One of the quintessential experiences of democratic life is to admit that one is wrong. In debates about large principles and large programs, everybody cannot be right, and sometimes not even a little right; and in a liberal order the adjudication of contradictions is accomplished not by guns but by arguments. Or so we like to tell ourselves. But the degrading spectacle of what passes for public debate in America has shaken my hoary faith in the dependability of argument. Is social media a discussion? Is a shriek an argument? Where is the reasoned deliberation that Milton and Madison and Mill regarded as the foundation of a decent polity? They intuited that the road from unreason to indecency is not long, and we are diabolically confirming their intuition. We have made “public reason” into an oxymoron. We are drowning in discursive garbage. Even the people who believe in persuasion seem to persuade only each other. They are just another American community of the elect — the mild and articulate sect of the arguers.

    Many observers have noticed this intellectual crack-up. They suggest a host of solutions. We must keep our minds open. We must listen more carefully. We must respect each other. We must be reasonable, and even rational. We must identify our biases and correct for them. We must bring evidence. We must lower the temperature. We must enhance our capacity for empathy. We must connect with each other, and with the Other. We must practice epistemic humility. These homilies are everywhere, and all the preaching is true. We should indeed do all these noble and necessary things. These are the traits of a democratic individual. But is it not time to notice the futility of this wisdom in present-day America? Nobody seems to be hearing that we should listen. These exhortations leave almost no trace on our public life, which gets insistently dumber and nastier. They have become a sad and lovely genre of their own, a journalistic counterpoint of urgent but soothing platitudes. They may be accomplishing nothing more than providing solace and companionship for those who utter them. I have uttered many of them myself, and I stand by them. They are the only answers. But I am beginning to feel a little foolish, and disconnected, and marginal; I do not feel sufficiently helpful.

             To some extent, of course, it was ever thus. There never was a time when Madisonian graciousness ruled our politics. Philadelphia in 1787 and Illinois in 1858 were epiphanies, not norms. Indeed, in its slanders and its outrages, the promiscuity of the nineteenth-century American press can make social media seem redundant. The manipulability of public opinion has always been a primary assumption of American politics and its cunning practitioners. Was there ever a medium of communication that was inhospitable to zeal, or that turned its back on lies? Have fanatics and extremists ever been at a loss for instruments of influence? There is some consolation to be had, I suppose, from this long history of what ails us. We are not the first to have fallen short of our discursive ideals.

    Moreover, it is good that people stick up for what they believe. Intellectual stubbornness is in its way a mark of intellectual maturity. The malleable are too often mistaken for the reasonable. It is good that people hold strong convictions, and that they confer upon beliefs a prominent role in their identities. Yet the strength of a conviction has no bearing on its merit. Beliefs are not like foods that taste better hot. Too many people hold their beliefs for bad reasons, or for no reasons at all — merely because other people like themselves hold them, in the “cascades” and the “contagions” that have exercised social scientists in their study of our era of conformity. In the articulation of our beliefs, the most common substitute for reasons is passion. The idolatry of feelings that has characterized our culture for many decades has now been extended to our politics. But what does passion have to do with persuasion? Persuasion by passion is a nice definition of demagoguery.

             This time we have fallen very short. The collapse is especially painful for someone such as myself, who has spent his years in the argument business. It was, and still is, an idealistic calling. It came with many scruples about the integrity of argument. We worked hard when we argued, and we tried never to make it personal. (Almost nobody has a perfect record about turning an argument into a quarrel. I certainly do not. Sometimes hostility follows naturally from having understood the dangerous nonsense that your interlocutor is peddling.) There was an atmosphere of exhilaration that surrounded the seriousness. Of course there were also degraded forms of the practice: the gladiatorial kind, for which debate is a kind of sport, an exhibition of dialectical virtuosity, a contest of cleverness; and the academic kind, in which debate consists in making “moves” and “turns” and combinations thereof, as in a professional game; and the festival-of-ideas kind, in which thinking is presented breezily with a “hard stop,” for the entertainment of the paying customers and the rich. But there remained, there still remain, intellectuals with a sense of honor, for whom truth and method matter most, and who regard their activity, rightly, as a significant contribution to their society. One would think that such people are never more valuable than in a crisis — but they are learning, I fear, that it is precisely in a crisis that they may be least valuable, and most easily overridden. In 2016, for example, almost every thoughtful conservative columnist in the country valiantly opposed Trump, and it was as if they never existed. Right now the argument for persuasion, an American argument if ever there was one, seems to be experiencing the same indifference.

    Yet there is another way to consider this problem, and others, so as to elude despair and to find strength. It is to regard it not as a problem, but as a struggle.     

    The success with which we meet the difficulties that we face depends first on an accurate description of them. Nothing destroys hope so quickly as asking a question in a way that makes it impossible to answer. Such a question leaves us with the crippling impression that the world is finally intractable, that there is nothing that can be done. It is one of pessimism’s finest tricks. There are predicaments, of course, in which nothing can be done — but they are rare, even in adversity, and they, too, must be accurately characterized, if we are to be sure that we are being thwarted by reality and not by ourselves.

             There are problems and there are struggles. Problems have solutions; struggles have outcomes. Problems are technical; struggles are historical. Problems recur; struggles persist. Problems teach impatience; struggles teach patience. Problems are fixed; struggles are fought. Problems require skill; struggles require character. Problems demand knowledge; struggles demand wisdom. Problems may end; struggles may not end. A problem that does not end is a defeat or a failure; a struggle that does not end is a responsibility and a legacy.

             We are not given to choose between a world of problems and a world of struggles, and so we must be dexterous. Different temperaments incline to, or feel especially beset by, the one or the other; and this may be the case with communities and societies, too. The American affinity for problems over struggles is well known: the great American epic of practicality and its rewards. We care so much about practicality that eventually it was raised into a philosophy, according to which the proven satisfactions of a hammer and a nail were powerful enough to rid us of nothing less than metaphysics. William James, who perversely regarded pragmatism as a spiritual dispensation, once defined reality as “a perfect jungle of concrete expediencies.” Whether or not reality is like that, American reality is. The wildness of American religiosity may be understood as the response to such an environment of rampant utilities. (Silicon Valley is a hotbed of New Age rubbish.) Yet the American obsession with how things work has produced many admirable results, not least the technocracy that now inspires the wrath of the populists. Over many decades it has done more for the public good than any mob ever did, even if sometimes it has attempted to plant its standpoint where it does not belong and sought in its fanatical meliorism to reduce struggles to the scale of problems. But eventually struggles, too, have a place for policy, which is best not made by visionaries.

             Thinkers from Augustine to Heidegger have belittled the uses of things. The “ready-to-hand,” owing to its “serviceability,” is ontologically shallow, according to the latter, and much too distant from Being. According to the former, the uti, the use of something for the sake of something else, is similarly secondary and extrinsic to the highest meanings, and he ponders whether “men should enjoy themselves, use, or do both.” The American experience of enjoyment in use, of pleasure in function, is beyond his imagination. Such a hierarchy of value would be wrecked by a visit to an American hardware store. The anti-pragmatists are disquieted by a love of the extrinsic just as the pragmatists are disquieted by a love of the intrinsic. The answer to Augustine’s question, obviously, is that we must do both.

    Moreover, there is glory, and not only necessity, in our practical achievements (just as, say, there is beauty, and not only necessity, in architecture). Homo faber, if he is to make things and build things, must include among his talents a sense of form and a concept of design, and an ability to work out the purposes of an object as well as its material properties. The gulf between instrumentality and art is not as wide as the aesthetes and the Platonists would have us believe. I learned this lesson in Kensington, Maryland, where there used to be a shop that sold antique tools — carpentry tools, construction tools, kitchen tools, fireplace tools — a paradise of practicality; and when I first walked into the shop I was struck not by the spectacle of utility but by the spectacle of imagination. The shapes and the metals were gorgeous. I still own the heavy late-nineteenth-century iron cooking pot, with its delicate handles and its handsomely pockmarked lid, that I acquired there. It is a welcome drag on my aspirations to loftiness.

             Here is a passage from one of the many American books on (this is its subtitle) “how to perfect the fine art of problem-solving”:

    Problem-solving is a critical survival skill because things go wrong for us all the time. Working through problems is crucial for productivity, profit, and peace. Our problem-solving skills, however, have been short-circuited by our complicated, technology-reliant world. Why learn how to fix something when Google can do it? Unfortunately, calamity doesn’t always fit in a search bar. And increasingly in our modern, perilous world, the issues that emerge are subtle, laced in subtext, or teeter on the tip of a slippery slope — all attributes that require a human touch to solve. As said humans, we must not only be able to address the problems that arise across all professions and walks of life, we must also be able to solve them. Before they drown, damn, or destroy us. Thankfully, problem-solving is a skill that can be learned.  

    I can practically hear The Star-Spangled Banner in the background. But every word is unimpeachable, except perhaps the reference to peace, which belongs more realistically to the realm of struggle. The undaunted confidence in human agency, the respect for the concrete, the commendation of the artisanal and the collaborative, the faith in education and the transmission of skills: these are elements of the mentality that built cities and created technological revolutions, and their dazzling social and economic benefits. The inventors, the tinkerers, the adjusters, the repairers, the tweakers: they are pillars of everyday existence, who defy our sense of helplessness and relieve us of many of the oppressions of our material setting. They make life more dignified, because there is dignity in safety and comfort and the conquest of anxiety. 

    The same mentality, alas, these same elements, are also the source of our Icarian perils. Sometimes our ability to make things exceeds our ability to comprehend what we are making, and we deploy our inventions before we adequately understand their purposes and their effects. “Problem-solving” is ethically contentless; it serves many causes and many codes. Evil, like goodness, seeks technical support, which is why “pragmatic,” in ordinary usage, also has a pejorative connotation. (As does “fixer.”) The question of how things work is never the most fundamental question one can ask about human affairs. But fundamental questions are not the only questions that we are obliged to ask. We are, even the largest-souled among us, commonplace creatures who live fragilely in a world of cracks and fixes. We are fortified more by reforms than by revolutions. So blessed be the fixers, especially those who recognize the limits of the fix as a model for all human solutions.

    Not all the difficulties that beset us can be described as problems that can be fixed. Some of them are deeper and thicker and more lasting, and therefore more immune to our practical brilliance and our utilitarian talents. They are conditions, inherited states-of-affairs, systems and structures, traditions and loyalties, inner dispositions in the individual and the community, cultural premises hallowed by the generations, abstract conceptions and reified ideals. They imbue everything we do, but we cannot take a hammer to them. (Except wantonly, of course: violence in a problem-fixing society is owed in part to the special frustration of problems that cannot be fixed. Frustration, and the inability to live with it, is one of the characteristic hazards of the can-do worldview.) Indeed, the ubiquity of their effects, their saturation of all the private and public realms, contributes to their durability. And yet they must be fought.

    There is the difference: fixing is not exactly a fight, even when it is hard. No fight is necessary when satisfaction can be technically and efficiently achieved, and there are no first principles at stake. A solution to a problem may be wrong without being evil. Trial-and-error is a benign war on error; a correction of mistakes, not of sins. The question of how best to fight inflation, or how best to curtail our dependence on fossil fuels, or how best to halt nuclear proliferation — such questions may provoke virulent debates, but the virulence is generally not philosophical. These are “how” questions, and not all “how” questions must become “why” questions. A debate about means when there is a consensus about ends is much more easily resolved than a debate about ends. Conversely, one time-honored way of wrecking a debate about means is to turn it into a debate about ends — to make every difficulty into a matter of first principles, to transform problems into struggles. The transformation of a problem into a struggle is a fine strategy for the enemies of a solution.

             Perhaps the fundamental difference between a problem and a struggle is time. The temporal horizons of struggle are long — sometimes very long, even longer than a lifetime. Sometimes we bequeath a struggle to our children. The struggler, like the lover, is prepared to wait. A problem, by contrast, does not tolerate such duration. It needs to be solved soon, if we are to function; whereas struggles are not the condition of our functioning but of our just and proper functioning. One of the meanest facts of human life is that unjust societies can function. (Making a society function is one of the oldest excuses for injustice.) But there is some comfort, too, in that fact, since a just society has never existed. Our only alternatives may be imperfection or extinction.

    Fiat justitia pereat mundus: the old Latin maxim captures the tense relation between perfection and reality. Let justice be done even if the world perish! That was the maxim’s customary reading, not least by Kant, who described it as “a sound principle of right…which should be seen as an obligation of those in power not to deny or detract from the rights of anyone out of disfavor or sympathy to others.” But what sort of justice is the destruction of the world? Where is the virtue in nothingness? (Kant dodged this ethically complicating objection with a strange paraphrase of the maxim’s meaning: “let justice reign even if all the rogues in the world must perish.”) We may read the maxim differently, then, and less as a mandate for zeal: we may read it as a warning that the insistence upon perfect justice may destroy everything, as a caution about absolutism in a just struggle. Be careful not to destroy the world when you seek justice! And I have seen a peculiarly American inflection of the adage. At the Supreme Court there hangs a portrait of John Marshall painted by Rembrandt Peale in 1834. The jurist is set heroically in a stonework oval with Roman ornamentation, and beneath him is a stone on which are carved the large words FIAT JUSTITIA. The rest of the maxim, the worry about the consequences of righteousness, has disappeared. Only a society consecrated to newness, a society that regarded itself as a beginning in what is right, could so blithely have banished the shadows from the ancient injunction.

    A struggle does not allow for such innocence, if only because of its wealth of sobering experience. If you have struggled against an injustice, then you have known it, and witnessed it, and existed with it. You have learned too much about the world to believe that pragmatism is all the equipment that you will need to meet it. There are other inner resources that must be readied: steadfastness, patience, tenacity, resilience, courage. The less your life has need of those qualities, the happier (and the luckier) it is. A life of problems is not like a life of struggles. The trials of fixing are real, but they differ from the trials of struggling — the fixer’s trials are more like exasperations. But an exasperation with history, particularly with a history of suffering, is no mere exasperation: it is a sense of tragedy. It broaches the hardest question of all, which is the question of the warrant for hope.

    A life in struggle is a life in hope, and hope gets stronger as its basis in reality gets weaker, until finally it floats free of experience and proclaims a pure assertion of the will to exist. The more empirical the hope, the less it is needed. But unempirical hope, or hope after catastrophe, is, for that reason, invincible; and it would be an offense against all the communities of struggle, all the shattered but intact peoples, to dismiss such hope as illusion, when it is the purest evidence of unbroken vitality. In a beautiful study of the spiritual perdurability of the Crow Nation, Jonathan Lear has called this “radical hope,” by which he means an inner independence from history that permits one to entertain “the possibility of new possibilities.” For this reason, anyone involved in a struggle will not count a bad day as the last word, because he lives in expectation of it, and he is accustomed to a different pace for progress, to the unsteadiness of forward motion, to delays and reversals and losses. The larger the goal, the rougher the road to it.

    If we prefer to see ourselves as a nation of problem-solvers, it may be in part because we prefer to look away from the strugglers in our midst. Having completed their tasks, problem-solvers proceed to the most typical American activity of all: they move on. But the strugglers cannot move on. They are prisoners of circumstances, and of the power that with its prejudice arranged their circumstances. Their inner freedom is a measure of outer necessity. Our centuries of innovations and breakthroughs were also centuries of oppression and discrimination. Our country has harbored many communities of struggle: the Native Americans, for example. For a hundred years or so the labor movement represented a community of struggle, and it may do so again. But no Americans have a more natural understanding of struggle than black Americans. Their emancipation, which we treat as a discrete historical event circa 1863, was (in the words of one historian) “the long emancipation.”

    The story of African American culture is a story of melancholy and its mastery. There is joy in the blues, which is not the case with many other traditions of sad song. The slave songs and the spirituals are intimate with the “trouble of the world,” but I have never heard one of them recommend surrender. “O me no weary yet, o me no weary yet, I have a witness in my heart, o me no weary yet.” The slaves sang, “Lord, make me more patient”; they sang, “Hold out to the end.” And many decades later the poets expressed the same extreme commitment to endurance. Here is Sterling A. Brown, addressing a Southern “nameless couple” who have suffered much hardship:

    Even you said

    That which we need

    Now in our time of fear, —

    Routed your own deep misery and dread,

    Muttering, beneath an unfriendly sky,

    “Guess we’ll give it one mo’ try,

    Guess we’ll give it one mo’ try.”

    And here is Countee Cullen’s “From the Dark Tower,” whose title refers to a place on 136th Street in Harlem where poets used to meet, as if the poem, in its first person plural, might speak for them all.

     

    We shall not always plant while others reap

    The golden increment of bursting fruit,

    Not always countenance, abject and mute,

    That lesser men should hold their brothers cheap;

    Not everlastingly while others sleep

    Shall we beguile their limbs with mellow flute,

    Not always bend to some more subtle brute;

    We were not made eternally to weep.

     

    The night whose sable breast relieves the stark,

    White stars is no less lovely being dark,

    And there are buds that cannot bloom at all

    In light, but crumple, piteous, and fall;

    So in the dark we hide the heart that bleeds,

    And wait, and tend our agonizing seeds.

    There is the temperament of struggle: waiting and tending to one’s agonizing seeds, which one day, owing precisely to the pain of their cultivation, will grow.

    Are Americans, particularly liberal Americans, still capable of such a temperament? Have we, in the inward velocity of our digital and consumerist present, forfeited the mental readiness for the extended future, or squandered it on futurism? I arrived at this broad and imprecise distinction between problems and struggles in order to understand the despair that I see around me. I attribute that despair to a confusion between these orders of difficulty. It makes sense to despair of solving a problem — some things, after all, cannot be fixed; but it makes no sense to despair in a struggle, because disappointment is a regular feature of struggle, and perseverance comes before success. Injustice is much bigger than a problem. Anybody who combats injustice without the wisdom of struggle will fail in the effort to prevent it from becoming a fate. There are concrete instances of injustice, of course, which can be addressed with legal or political remedies. But there are no policies for the human heart. An earned income tax credit cannot heal psychic and cultural wounds. Discrimination can be ended by practical means, but not racism. Discrimination is a problem, but racism is a struggle. Racism, and all the other panics about difference, will never disappear. They are as old as civilization, and the greatest affront to it. All that can be done is to raise the legal and political and social costs of a particular expression of a prejudice, and then, having inflicted defeat upon it, await its resurgence, which must never surprise us even when it shocks us. The struggler is not a pessimist, but he is a disabused man. The appearance of anti-Semitism in America does not refute the revolutionary promise of America for Jews, because which student of Jewish history, which student of Christian history, which student of evil in human history, ever believed that once and for all anti-Semitism would end? 
    Anti-Semitism was never illegitimate in the European political tradition, nor in the Russian one, but it is illegitimate in America according to the terms of our founding. (Whereas white supremacy was inscribed in some of them.)

    When friends tell me, as a consequence of Trump and the ascendancy of the radical American right, that America is over, or when they tell me, as a consequence of Netanyahu and the ascendancy of the Israeli right, that Israel is over, I castigate them for being disinclined to struggle. (I have three motherlands: America, Israel, and my library.) When they tell me, as they spin the globe, that democracy is over, I reply that the rise of authoritarianism is not an event, but an era; and that it will take a long time, a generation or more, to push back the authoritarians and restore the prestige of the open society; and that we must not measure the crisis in election cycles, because it is more profound than politics; and that the inability of democracy to defend itself has always been its greatest historical failing; and that its rejection does not refute it — in sum, that we are in a historical struggle. The refusal to recognize it as such makes it more likely to fail. It is, moreover, a privilege to serve. The struggle for democracy, like the struggle for justice, makes life less trivial. Camus believed that Sisyphus was happy.

             But do we, as they say in foreign policy, any longer have the staying power? The analogy with foreign policy is actually quite useful. One already hears and reads about “Ukraine fatigue” in America. We are fatigued by their fight for survival? The vanity! If the Ukrainian war is just, then it is just even when we get tired of it. The Biden administration has responded more or less splendidly to Putin’s aggression, but more will be needed, because this is not a problem, it is a struggle. (The Ukrainians have established “resiliency centers” against the destruction of the country’s infrastructure and the winter cold.) It was right about now that I expected the administration’s determination to collide with the country’s lack of determination. I mean, it’s been a whole year. Pretty soon we will have another “forever war” on our hands.

    There is no more damning evidence that the readiness for struggle is waning in America than our stupid retreat from Afghanistan. Twenty years is not even close to forever, except for people who do not understand historical time and have been damaged by the warp speed of American life. There were sound moral and strategic reasons for our presence in Afghanistan; and this is unwittingly conceded every time the same opinion pages that stridently called for an end to the “forever war” publish poignant pieces about the plight of Afghan women and Afghan schoolchildren in the kingdom of the Taliban. What did they think was going to happen? The whole world was taught that it could wait America out, that we have only a limited competence for commitment. Unlike us, our enemies know how to practice the art of waiting. They are not intimidated, or bored, by the longue durée. In their global rivalry with us, they are preparing for a struggle.

    The psychology of struggle is a brake also against another danger that faces us. Owing to the magnitude and the multiplicity of the crises that confront us, the apocalyptic spirit has been given new life. Hysteria is increasingly accepted as intelligent, as a condign response to a proper analysis of things. In our culture we are riveted by endings, especially by spectacular ones. There is a new fashion in the-end-of-history, which is just as blind as the old one. Unlike the old one, this one is animated not by a sensation of triumph but by a sensation of weariness, by a loss of heart. History may now be numbered among the causes of depression. The prophecies of decline and destruction are overwhelming. In politics, the belief that time is running out, that it is too late to change course, that all that awaits us is cataclysm, has two antithetical consequences: apathy and apocalypse.

    An apocalyptic is someone who decides to treat a struggle as a problem, and to get it over with. He wants a quick eschatological fix; his understanding is distorted by his desperation. Despondency has sapped him of his will and his energy, or rather, it has left of his will and his energy only enough for the less exacting way of radicalism, which (as we know from the radical past) will either blow things up or exhaust itself. Struggle, in other words, even struggle unto the generations, is the quintessential anti-apocalyptic path. It will not be waited out, or permanently hobbled by gloom. In its decision to outwit despair, in its solemn promise that its resolution will be invulnerable to fortune, the spirit of struggle arms us not only against the injustice that we fight but also against our own frailties. We may reflect, and be calm, and hold together, in the storm, because we are wiser than the storm. Like Dürer’s knight we can advance, but unlike Dürer’s knight we are not alone.

    The Court Gone Wrong

    What is happening on the Supreme Court of the United States? 

    The Court has overruled Roe v. Wade. It has rejected the whole idea of a right to privacy. It is sharply restricting the ability of federal agencies to protect safety, health, and the environment. It is limiting voting rights. It is expanding the rights of gun owners, commercial advertisers, and those who wish to spend a lot of money on political campaigns. It is moving very quickly, and almost always in directions favored by the political right.

             None of this comes out of the blue. It is the culmination of four decades of intense work, meant to move constitutional law in exactly these directions — work by activists and scholars, politicians and lawyers-for-hire, corporate lobbyists and the National Rifle Association, religious organizations and the Federalist Society. It was a long process, but it seems fair to announce that they have finally won.

             I received a firsthand sense of what was afoot in 2002, when I found myself in a large audience at the University of Chicago Law School, waiting to hear a speech by Douglas H. Ginsburg, who was then Chief Judge of the influential Court of Appeals in Washington, DC. Tall and thin, with a bemused and scholarly manner, Judge Ginsburg is an able and fair-minded judge. He is a generous and kind person to boot. He is also a graduate of the University of Chicago Law School, which was my home institution at the time. I like and admire him. But on that day I was flabbergasted by what I heard; actually I was appalled. Judge Ginsburg called for something like a constitutional revolution. 

     

    Judge Ginsburg contended that the Supreme Court abandoned the United States Constitution in the 1930s, when it capitulated to Franklin Delano Roosevelt and his New Deal. He sought to return to the Constitution as it was understood before the capitulation.

    Ginsburg began by emphasizing that “ours is a written Constitution.” Making a bow in the direction of populism, he contended that this observation is controversial in only one place: “the most elite law schools.” The fact that the Constitution is written has major implications. If judges are “to be faithful to the written Constitution,” they must try “to illuminate the meaning of the text as the Framers understood it.” 

    In Ginsburg’s account, judges were faithful to the Constitution for most of the nation’s history — from the founding period, in fact, through the first third of the twentieth century. But sometime in the 1930s, “the wheels began to come off.” In that period the nation faced the Great Depression, and President Franklin Delano Roosevelt tried to do something about it, above all with his New Deal, which greatly expanded the power of federal agencies, through, for example, the creation of the National Labor Relations Board and the Securities and Exchange Commission. Responding to “the determination of the Roosevelt Administration,” Ginsburg declared, the Supreme Court abandoned its commitment to the Constitution as written.

    How did this happen? Judge Ginsburg’s first example was Congress’ power, under the Constitution, to “regulate commerce . . . among the several states.” What does this mean? Judge Ginsburg referred, with enthusiastic approval, to the Supreme Court’s view that Congress lacked the constitutional power to ban child labor. But his strongest complaint involved the Supreme Court’s decision, in 1937, to uphold the National Labor Relations Act, which protects the right of workers to organize and to join labor unions. In upholding the Act, the Supreme Court said that when strikes occur, interstate commerce is affected. A strike in Pennsylvania often has a big impact elsewhere. 

    Judge Ginsburg objected that this is “loose reasoning” and “a stark break from the Court’s precedent.” But his complaint went much deeper. The Court’s acceptance of the National Labor Relations Act was not merely “extreme.” It was also “illustrative.” He objected that the Supreme Court has upheld the Clean Air Act, which, in his view, violates the separation of powers by granting excessive discretion, and hence legislative power, to the Environmental Protection Agency. Under the Constitution, legislative power rests in Congress; Judge Ginsburg said that because the Clean Air Act allows the Environmental Protection Agency to make the law, the “structural constraints in the written Constitution have been disregarded.” 

    But even this is just the tip of the iceberg. Since the 1930s, the Court has “blinked away” crucial provisions of the Bill of Rights. Of these, Judge Ginsburg singled out the Constitution’s Takings Clause, which says that government may take private property only for public use and upon the payment of “just compensation.” Judge Ginsburg complained that the Takings Clause has been read to provide “no protection against a regulation that deprives” people of most of the economic value of their property. In other words, the Court allows government to impose regulations, especially in the environmental area, that do not quite “take” private property but that much diminish its value. Judge Ginsburg objected that the Supreme Court has not required government to compensate people for their losses. 

    At the same time that the Court has “blinked away” the individual rights of the American Constitution, judges have manufactured new rights of their own devising. In his view, these rights are fake news. In this way, members of the Supreme Court have acted not as judges, but as a “council of revision with a self-determined mandate.” What does Judge Ginsburg have in mind? His chief objection was to the right of privacy. It seemed clear that he rejected Roe v. Wade.

    But he went much further than that. He singled out the Court’s decision in 1965 in Griswold v. Connecticut, the foundation of modern privacy law. In that case, the Court struck down a law forbidding married people to use contraceptives. Judge Ginsburg objected that a judge “devoted to the Constitution as written might conclude that the document says nothing about the privacy of” married couples. The Griswold decision, he added, is “not an aberration.” It is matched by recent decisions holding that the Constitution imposes limits on capital punishment, such as its decision in 2002 striking down a death sentence imposed on an intellectually disabled defendant.

    Judge Ginsburg’s narrative, then, is simple and straightforward. Until 1933 or so, the Court followed the Constitution. At that point, it adopted a “freewheeling style.” But Judge Ginsburg offered real hope for the future. In recent years, a small but growing group of scholars and judges has been calling for more fidelity to the constitutional text, focusing on the original meaning. “Like archeologists, legal and historical researchers have been rediscovering neglected clauses, dusting them off, and in some instances even imagining how they might be returned to active service.”

    Judge Ginsburg’s leading example? The Second Amendment to the Constitution, which protects the right “to keep and bear arms.” Judge Ginsburg gave a strong signal that judges might well strike down gun control legislation. His exact words? “And now let the litigation begin.”

    Judge Ginsburg was speaking here of what he himself called the Constitution in Exile — the real Constitution, the one that should be restored. What made his argument so remarkable is that Judge Ginsburg was, and is, a responsible person with a first-rate intellect — and, in his judicial capacity, he displays a large measure of restraint. But in his speech twenty years ago, calling for radical changes in constitutional understandings, Judge Ginsburg was hardly speaking in a vacuum. On the contrary, he was summarizing a line of argument that such conservative luminaries as Robert Bork, Edwin Meese, and Antonin Scalia had been developing for decades. That line of argument had been embraced by many members of the Federalist Society and the Republican Party as well. “And now let the litigation begin” — that was their mantra.

    Judge Ginsburg set out a kind of Constitutional Wish List. The goal was to transform constitutional law, and to do so in major ways. For those on the right, the Constitutional Wish List included the following:

      A broad understanding of the individual right to possess guns.

      A rejection of Roe v. Wade.

      A rejection of the right to privacy in general.

      New limits on the power of modern administrative agencies, including the Environmental Protection Agency. 

      Dramatically strengthened property rights.

      Sharp reductions in Congress’ power under the Commerce Clause.

    In 2002, all this seemed unlikely in the extreme. Would the Supreme Court really be prepared to turn so many constitutional understandings upside down? Astonishing but true: we now have to put a checkmark next to each and every item on the list. They have all been achieved.

     Before 2008, the Supreme Court had rejected the idea that the Constitution creates an individual right to possess guns. Now the Court recognizes that right — and is steadily expanding it. Before 2022, Roe v. Wade was the law of the land. Now it is overruled. Before 2022, the right of privacy seemed firmly ingrained. Now it is gone. Until recently the Court had embraced, in ways large and small, the power of modern administrative agencies, including the Environmental Protection Agency. Now it has sharply limited that power. Just as Judge Ginsburg hoped, property rights have indeed been enhanced. The Court did uphold the Affordable Care Act, by a vote of 5-4, but in the process it announced new limits on Congress’ power under the Commerce Clause. And all this might be just the beginning. With respect to voting rights, freedom of speech, the rights of criminal defendants, freedom of religion, and much more, dramatic changes seem to be coming.

     There are two ways to understand the recent developments. The first, in the spirit of Judge Ginsburg’s argument, is jurisprudential. It insists that the Court is now being “faithful to the written Constitution” — that it is (finally!) following the Constitution “as written.” On this understanding, the Supreme Court has become “originalist,” which means that it is adhering to “the original public meaning” of the Constitution. If that is the right understanding, we need to ask a single question: Is originalism right?

    The second understanding is political. It is that the Court’s understanding of the Constitution is uncomfortably close to the political preferences of the current Republican Party. On that view, the Court is lawless. It is acting as a political body, even if it understands itself as being faithful to the written Constitution.

    Let us begin with originalism. What is it, and what does it entail? 

    That is a surprisingly hard question to answer. The term itself was coined in 1980 by the Stanford law professor Paul Brest, in a law review article that sketched what, in his view, were devastating objections to the whole idea. Brest meant to challenge a view about constitutional interpretation associated with Bork and Raoul Berger (a legal historian at Harvard) that was, at the time, a kind of fringe position, with little support even among right-of-center academics. (Conservative scholars then tended to argue more broadly in favor of “judicial restraint,” understood as respect for the decisions of the political process.) As a fringe position, originalism had little influence and political salience.

    What a difference forty years make! Originalism now comes in many shapes and sizes. It is used as a political rallying cry. It has been elaborated in great detail by a host of sophisticated law professors, among them Lawrence Solum and William Baude; law professors who embrace originalism disagree vigorously with one another about what originalism means and requires. Some originalists follow Ginsburg in emphasizing the intentions of the Constitution’s authors; others think that the search for the authors’ intentions is a fool’s errand. Some originalists think that it is important to respect precedents even when those precedents are not originalist; others think that doing so is entirely wrong, and that the original understanding should trump the Supreme Court’s mistakes. Some originalists think there is a difference between “interpretation,” where judges must follow the original meaning, and “construction,” where judges have nothing to follow and must exercise discretion; other originalists reject this distinction and seem to be appalled by it.

    Amid all the debates, one variety of originalism now seems to be in the ascendancy. It is called “public meaning originalism.” Justices Thomas, Alito, Gorsuch, and Barrett seem committed to it, and Justice Kavanaugh seems to like it a lot. On this view, the Constitution must be interpreted in a way that fits with its original public meaning. That means that terms such as “freedom of speech,” “executive power,” “cruel and unusual punishment,” and “due process of law” must be understood not only in accordance with their semantic meaning, but also with the meaning that people would have given to them at the time of ratification (1788 for the original Constitution, 1791 for the Bill of Rights). Interpretation, in this view, depends on an inquiry into history, not on any kind of moral judgment. As Richard Fallon puts it, public meaning originalists contend that the public meaning can be “discovered as a matter of historical and linguistic fact.” In Solum’s words, “the meaning of the constitutional text is a function of the conventional semantic meanings of the words and phrases as they are enriched and disambiguated by the public context of constitutional communication.”

    Originalists are keenly aware that it is often hard to discover the original public meaning of words as they were used in the late eighteenth century. They know that reasonable people, including specialists, disagree on historical questions. They also know that unanticipated social changes can greatly complicate the search for historical answers. What is the original public meaning of “freedom of speech” as applied to radio and television? How should we understand protection against “unreasonable searches and seizures” as applied to the Internet? The most careful originalists do not ignore these questions. Still, they insist that if judges are originalists, many questions are easy. They add that when originalism leaves some questions open, or makes them really hard to answer, at least it provides the right orientation. 

     

    There is no doubt that if judges followed the original public meaning of the Constitution, constitutional law would be radically transformed. The national government would be permitted to discriminate on the basis of both race and sex. If the national government wanted to segregate people by race, it could almost certainly do that. The right to free speech would be greatly truncated. Blasphemy could probably be made a crime. States could probably allow public figures to recover huge sums of money for defamation. 

    The idea of one person, one vote would be out the window. If the federal government wanted to take away people’s Social Security benefits, or welfare benefits of various sorts, it might not have to give them any kind of hearing. Contrary to Judge Ginsburg’s view, protection of property rights would be reduced, not expanded: some of the most careful scholarly work suggests that according to the original public meaning, the Constitution protects only against physical invasions of property, and imposes no barrier to regulation that greatly diminishes the value of property. All this, by the way, is just the beginning of what would be possible.

    Originalists are acutely aware that their preferred method might lead to outcomes that many people would abhor, and they have a variety of responses. Some originalists insist on the importance of democracy and on the need to rely on democratic processes, not on courts. If originalism might allow government to do what some people consider to be terrible things — for example, to ban contraceptives or to sterilize people — originalists respond that in a self-governing society, the appropriate correctives come from We the People, not from unelected judges. Consider the case of abortion: originalists say that if the right to choose is to be protected, it must be because majorities want it to be.

    Other originalists emphasize the rule of stare decisis: judges should ordinarily respect their own precedents, even if they are wrong. True, the Court was willing to overrule Roe v. Wade, but even in doing so the Court proclaimed that other privacy rulings, including those that protect the right to use contraceptives, were not necessarily at risk. Still other originalists contend that the answers to the historical questions might not be so terrible. Many originalists are at pains to say that on their approach, states may not segregate school children by race. Some originalists contend that on originalist grounds a broad right to freedom of speech is secure.

    Most fundamentally, originalists argue that their approach is mandatory rather than optional. If it requires abhorrent conclusions, that is, in a sense, a sign of intellectual integrity, a badge of honor. In their view, originalism is the only legitimate approach to interpretation, and it is justified independently of the outcomes that it produces. 

    Each of these arguments must be addressed on its own terms. Democracy is fundamental, of course, but is it really right to think that the scope of freedom of speech, racial equality, and personal privacy should be defined by political majorities? Would the United States have been better off if the Supreme Court in the twentieth century had limited these and other rights to the understandings of the eighteenth and nineteenth centuries? Consider these words from Justice Felix Frankfurter, from a memorandum that he wrote in 1953 for his files during the Supreme Court’s deliberations over the constitutionality of school segregation:

     

    But the equality of the laws . . . is not a fixed formula defined with finality at a particular time. It does not reflect, as a congealed summary, the social arrangements and beliefs of a particular epoch. It is addressed to the changes wrought by time and not merely the changes that are the consequences of physical development. Law must respond to transformations of views as well as that of outward circumstances. The effects of changes in men’s feelings for what is right and just is equally relevant in determining whether a discrimination denies the equal protection of the laws.

    Some originalists believe in respecting precedents, but many do not. Is it sufficient to say, on behalf of a theory of interpretation, that it would not do nearly as much damage as it might, because some judges are willing to ignore that theory?

    The most important argument about originalism is that it is mandatory. Many originalists seem to think that the very idea of interpretation requires their preferred approach. This is a colossal mistake. The Constitution does not contain instructions for its own interpretation. It does not have an Originalism Clause, directing judges to be originalists. Originalism is a choice. Whether it is the right choice must depend, inevitably, on whether it would make the American constitutional order better rather than worse. That is not a hard question. 

    In these circumstances, it is natural and fitting to wonder: what, exactly, have liberals been doing over the last few decades? For that matter, what have conservatives been doing, if they reject originalism or seek other paths? The short answer is: a lot. Like Paul Brest in 1980, many liberals have been vigorously attacking originalism, sometimes on the grounds that it is much squishier than it purports to be, sometimes on the grounds that it would lead to a host of intolerable results. 

    Liberals have also been developing their own theories of interpretation. For decades, Ronald Dworkin argued for “moral readings” of the Constitution, in which judges would infuse broad phrases with their preferred moral content. Some members of the Supreme Court, including Anthony Kennedy and Sonia Sotomayor, have seemed to agree with Dworkin; consider the Court’s decision to require states to recognize same-sex marriages. In 1980, John Hart Ely published Democracy and Distrust, which argued that judges should protect democracy itself, by safeguarding democratic processes and those who are at a particular disadvantage in them. Some members of the Court, including Ruth Bader Ginsburg and Stephen Breyer, have often seemed to agree with Ely. Left-of-center theorists and practitioners, such as Larry Kramer, the former dean of Stanford Law School, have developed other approaches as well, with occasional (and steadily mounting) enthusiasm for a more modest role for the Court, insisting that the justices are most likely to protect those who already have the most power. But on the current Court, dominated by Republican appointees, it is not easy to find five votes in favor of positions associated with Dworkin, Ely, and Kramer.

    This point puts a bright spotlight on the elephant in the room: the relationship between constitutional law and political convictions. It would be a true miracle if originalism, properly applied, consistently led to outcomes favored by the extreme right wing of the contemporary Republican Party. To update Ginsburg’s Wish List: robust gun rights, a ban on affirmative action, reduced voting rights, restrictions on campaign finance laws, no abortion rights, no privacy rights, strengthened property rights, sharp limits on the power of administrative agencies, greater protection of commercial advertising, no right to same-sex marriage, reduced rights for criminal defendants. What are the odds, really, that a particular method of interpretation, honestly applied, would always result in outcomes pleasing to one political side? On the Supreme Court, however, justices who favor originalism are drawn, time and time again, to rulings that belong on that particular Wish List.

    It is important to say that among law professors who are interested in originalism, we can find humility or uncertainty about what, exactly, the relevant history shows. And among law professors who are interested in originalism, we can sometimes find left-of-center conclusions — as in the view that the Equal Protection Clause requires the authorities to protect people of color every bit as well, and as much, as they protect white people. But there is no mistaking the fact that as it is being practiced by real judges, originalism is consistently producing conclusions that delight the political right.

    In these circumstances, it is fair to wonder whether the Supreme Court is doing law at all.

     

    Digitization, Surveillance, Colonialism

    As I write these words, articles are mushrooming in newspapers and magazines about how privacy is more important than ever after the Supreme Court ruling overturning the constitutional right to abortion in the United States. In anti-abortion states, browsing histories, text messages, location data, payment data, and information from period-tracking apps can all be used to prosecute both women seeking an abortion and anyone aiding them. The National Right to Life Committee recently published policy recommendations for anti-abortion states that include criminal penalties for people who provide information about self-managed abortions, whether over the phone or online. Women considering an abortion are often in distress, and now they cannot even reach out to friends or family without endangering themselves and others.

    So far, Texas, Oklahoma, and Idaho have passed citizen-enforced abortion bans, according to which anyone can file a civil lawsuit to report an abortion and stand to win at least ten thousand dollars. This is an incredible incentive to use personal data in for-profit witch hunts. Anyone can buy personal data from data brokers and fish for suspicious behavior. The surveillance machinery that we have built in the past two decades can now be put to use by authorities and vigilantes to criminalize pregnant women and their doctors, nurses, pharmacists, friends, and family. How productive.

    It is not true, however, that the overturning of Roe v. Wade has made privacy more important than ever. Rather, it has provided yet another illustration of why privacy has always been and always will be important. That it is happening in the United States is helpful, because human beings are prone to thinking that whatever happens “over there” (say, in China now, or in East Germany during the Cold War) to those “other people” doesn’t happen to us — until it does.

    Privacy is important because it protects us from possible abuses of power. As long as human beings are human beings and organizations are organizations, abuses of power will be a constant temptation and threat. That is why it is supremely reckless to build a surveillance architecture. You never know when that data might be used against you — but you can be fairly confident that sooner or later it will be used against you. Collecting personal data might be convenient, but it is also a ticking bomb; it amounts to sensitive material waiting for the chance to turn into an instance of public shaming, extortion, persecution, discrimination, or identity theft. Do you think you have nothing to hide? So did many American women on June 24, only to realize that week that their period was late. You have plenty to hide — you just don’t know what it is yet and whom you should hide it from.

    In the digital age, the challenge of protecting privacy is more formidable than most people imagine — but it is nowhere near impossible, and it is well worth fighting for if you care about democracy or freedom. The challenge is this: the dogma of our time is to turn analog into digital, and as things stand today, digitization is tantamount to surveillance.

    Behind the effort to digitize the world there is a corporate imperative for growth. Big tech companies want to keep growing, because businesses are rarely stable animals — companies that are not on their way up are usually on their way down. But they have been so successful and are so gigantic that it is not easy for big tech to find room to grow. Like Alice in Wonderland, trapped in the rabbit’s house after growing too big, tech companies have their arms and legs sticking out the windows and chimney of the house of democracy. One possibility for further growth is to attract new users. But how to find fresh blood when most adults with internet access worldwide are already your users? One option, which Facebook is unscrupulously pursuing, is to focus on younger and younger children. The new target group for the tech company is children between the ages of six and nine. This option is risky. There are several investigations into Facebook and Instagram for knowingly causing harm to minors. What, then, are the other options for the expanding behemoths? 

    The preferred option these days is to digitize more aspects of the world. Despite the rapid advancement of digital technologies, most of our reality is still analog, even after the onset of covid. Most of our shopping is offline. Most readers prefer paper books. Our homes, our clothes, many of our conversations, our perceptions, our thoughts, and our loved ones are analog. That is, most of our experience has not been translated into ones and zeroes, which are the building blocks of digital technology. Experience, almost by definition, is directly lived, unmediated by a screen.

    Tech giants wish to change all that. They share the desire to digitize the world because it is an easy way to gain more ground, to expand by enlarging the house. In this sense, digitization is the new colonialism. Digitization is the way to grow an empire in the twenty-first century. Everything analog is a potential resource — something that can be digitally conquered and converted into data and then traded, directly or indirectly. That is why Google keeps coming up with new products. Maps? Chrome? Android? Those were not designed for you. They are all different ways of collecting different data from you. That is why Facebook and Ray-Ban have together come out with new glasses that have microphones and a camera: more “data capture,” which in reality means the conquest of life by corporate avarice. That is why Apple is launching an augmented reality product, and why Microsoft is proposing a platform that creates three-dimensional avatars for more interactive meetings. And why Facebook — sorry, Meta — is insisting on its metaverse. 

    The tech titans assure us, of course, that their new inventions will respect our privacy. What they fail to mention is what I call the Iron Law of Digitization: to digitize is to surveil. There is no such thing as digitization without surveillance. The very act of turning what was not data into data is a form of surveillance. Digitizing involves creating a record, making things taggable and searchable. To digitize is to make trackable that which was beyond reach. And what is it to track if not to surveil?

    A good example of the close link between tracking and surveillance is the AirTag. In 2021, Apple launched the AirTag: a small coin-like device with a speaker, a Bluetooth antenna, and a battery, designed to help people keep track of their items. You can attach an AirTag to your keys and link it to your phone, and if you lose your keys, the device will ping Apple products around it and use Bluetooth to triangulate its location, which you can see on a map on your phone. The AirTag can also beep to let you know where it is.

    Keeping track of your keys seems innocent enough, but the AirTag is designed to track more in general. You can track a wallet instead of keys, or a purse — and not necessarily your purse. Privacy and security experts warned Apple that AirTags would be used for stalking. In response, Apple said it had implemented a notification feature that alerts people with iPhones if there is an AirTag following them. But this measure is insufficient in various ways. First, many people don’t have iPhones, and if you have an Android you have to download an app to be notified through your phone; the vast majority of people have not downloaded it and will likely not download it. You might think that the phone notification is not necessary, because AirTags are meant to start beeping at a random time between eight and twenty-four hours after they have been separated from their paired iPhones, but the beeping is so faint that people might not hear it. Moreover, eight hours is plenty of time for a stalker to follow and find his victim. Even if you have an iPhone, my own experience is that there is no guarantee that you will be notified about an AirTag that is tracking you. A few months ago my brother and I rented a car from a peer-to-peer network. After we had been driving the car for a few hours, my brother’s iPhone notified him that there was an AirTag nearby. The owner of the car had placed it in a locked glove compartment. My iPhone, however, never notified me of the AirTag — even after having been near the car for more than twenty-four hours. We never heard any beeping.

    The New Jersey Regional Operations & Intelligence Center issued a warning to police that AirTags posed an “inherent threat to law enforcement,” as criminals could use them to identify officers’ “sensitive locations” and personal routines. One year after their launch, there were at least 150 police reports in the United States mentioning AirTags, and recently, one murder case. That might not seem like much, but cases are likely in the thousands, given how many people might not notice they are being tracked or might not report it to the police. Not that reporting it to the police is of great help. Police often don’t know what to do about it; sometimes they don’t even take a report, which leaves vulnerable people (women, most often) unprotected. 

    Stalking affects an estimated 7.5 million people in the United States every year and, not surprisingly, it is on the rise. Last year a study by the security company Norton found that “the number of devices reporting stalkerware on a daily basis increased markedly by 63% between September 2020 and May 2021.” We are producing more and more technology to track — of course stalking is on the rise! To expect anything different would be to engage in self-delusion. In the pre-internet age, it was expensive, effortful, and risky to spy on someone. Today, you can buy an AirTag for $29.

    What is most striking about the AirTag example is how foreseeable these issues were. It’s not that the AirTag was misused in any surprising or imaginative way. When an AirTag is used for stalking, it is being used exactly according to its design. Some dual uses of technology are surprising. Gunpowder was originally designed for medicinal purposes — who would have thought it might change war forever? But tracking technologies are designed to track — and tracking is surveillance, and surveillance amounts to control. Human beings are social beings, which means that most of the time what we are most interested in is other people. We should hardly be surprised when tracking technology is employed to track people, the most salient element of most people’s lives. AirTags are the tracking device par excellence. They are designed to track and to do nothing else. Yet smartphones, for all their many uses, are also tracking devices. Your phone can make calls and take photographs, but above all it collects information about you and others.

    Too many people enthusiastic about digital technology are under the impression, as convenient as it is misguided, that if people consent to data collection, and if the data processing happens within our own phone or computer, there is no problem with privacy. If only it were so simple. There are at least two reasons why there are still privacy issues when it comes to the collection of personal data in our devices.

    First, there is no informed consent in data collection. The consent we give is neither consent, because it is not truly voluntary, nor informed, because no one has any idea where that data may end up and what inferences may be drawn from it in the future. We are forced into “consenting” because if we do not consent we cannot be full participants in our society. There is no leeway for negotiation in platforms’ “terms and conditions.” It’s their way or the highway, and their way can change at any time and without warning. But we could not give informed consent even if we had the chance, because data is so abstract and unpredictable in the kinds of uses it may have, and the kind of inferences it will be able to produce, that not even data scientists can give informed consent. No one knows what consequences today’s data collection will have.

    Second, data creation is itself morally significant. The term “data collection” is somewhat misleading, in that it seems to suggest that to collect data is to assemble things that are already there. But data are not natural phenomena, like mushrooms that we find in the forest. We do not find data. We create data. Data collection implies data creation. And that act of creation is a morally significant decision, because data can be dangerous. Data can tell on us: whether we are thinking about changing jobs, whether we are planning to have children, whether we might be thinking of divorcing, whether we might be considering having an abortion. Data can harm people. For this reason, data creation carries with it a moral responsibility and a duty of care towards data subjects. 

    “What privacy problem can there be if the data is on the user’s encrypted phone?” a tech executive once asked me, assuming that users are in control of their phones and ignoring the many examples that show otherwise. Our phones have a life of their own. For starters, they send data to third parties without our even realizing it. Every phone connected to the internet is hackable. Domestic abusers can take advantage of technologies to control their partners and their children. If an abuser forces you to share your password, the data that your phone has created without your asking (where you have been, whom you have called, and so on) can work against you. A TSA officer can ask you to unlock your device at the border and can download your data. That can happen even if you are American, and even if it is your work phone containing confidential professional data. The police can ask you to unlock your phone too. And who can guarantee that an insurer will not ask for access to that data in the future? If you do a commercial DNA test, even if it was only for fun, you are obligated to disclose it to your insurer. Can we be sure insurance companies will not ask for access to our smartwatches or smartphones some day? As soon as personal data has been created and stored, there is a privacy risk for the data subject, which then spills over into a risk to society. 

    The risks to society are significant and varied. They go from national security (all that personal data can be used to extort public officials and military personnel, for instance) to threats to democracy, which will be my focus here.

    Just like the old colonialism, digitization carries with it a certain ideology that it seeks to impose. It comes with ideas of what progress looks like. Old colonialism imposed a certain language, etiquette, clothing, social institutions, and ways of life. New colonialism imposes code, exposure as etiquette, a weakening of old social institutions, and ways of life that lead to societies of control.

    Technology is never neutral. Tech companies find it convenient to present their products as neutral tools, but marketing bears little relation to truth. Artifacts inevitably embody values. We make artifacts so that they do something for us, and we wouldn’t bother making them if we did not value whatever it is that they do. Since technology is designed with a purpose in mind, artifacts end up having affordances. An affordance is what the artifact invites you to do. It is an implicit relationship between the designer and the user through the object designed. A chair affords sitting. We design things like buttons and handles to match our bodies, perceptual systems, and desires. A gun affords threatening, hurting, and possibly killing; it does not afford cooking. Pans and skillets afford cooking. Surveillance tools afford control; they afford keeping a close watch on something or someone. A camera allows you to watch anyone who appears within its purview. And a camera is a tool for surveillance irrespective of whether the footage is encrypted and kept on your phone. This is not to imply that encryption is not important. It certainly is, because it adds much-needed security to sensitive data. But no amount of encryption will strip a camera of its affordance to surveil. 

    Contemporary surveillance tools are all too often a double-sided mirror, which not only enables you to watch others but also enables others to watch you. They are often also camouflaged as some other kind of tool, like a phone or a TV. Before the age of the internet, surveillance tools were mostly one-directional. A Stasi agent monitoring a suspect in East Berlin through a wiretap could listen to her target without thereby opening the possibility of being wiretapped herself. But the internet allows for multidirectional flows of information. You might buy an Amazon Ring camera to watch whoever gets near your door, but that device allows Amazon (and your housemates) to learn things about you. It can track when you leave your home, and when you come home and with whom. It can also be used to inform the police (in some cases without your permission and even without a warrant). And anything that can be online is hackable, so you are also taking on the risk that criminals will access your footage and use it, for example, to figure out when you are away so they can rob your home. 

    Your Ring camera is not only surveilling you — it is also watching and listening to your neighbors. Amazon has recently rejected the request made by Senator Ed Markey that the company introduce privacy-preserving changes to its doorbell camera after product testing showed that Ring routinely records audio conversations happening as far away as the opposite sidewalk. Your neighbor could be recording the conversations that you have at your doorstep or driveway and could post them online. If you use a screen door and keep your front door open, a Ring device could be recording the conversations you have in your living room. The potential for blackmail, stalking, and public shaming is immense.

    Other surveillance tools are much less obvious than a camera. Take something like Alexa. It’s a speaker that plays music. It is a timer. It can read you the news. It can allow you to order all kinds of products. It doesn’t look or feel like a surveillance machine, but it is keeping a close watch on you. Amazon wants to turn Alexa into an appliance that can predict what you want. For it to accomplish its task, it has to know you very well. Alexa collects data from what you say and shares it with as many as forty-one advertising partners. If you have not opted out, human beings might be reviewing what you tell Alexa. And, sure, you can have your data periodically deleted and opt out of human review, but your data will still be used to train Alexa, whether you like it or not. 

    In more than one out of ten transcripts analyzed, Alexa “woke up” accidentally and recorded something surreptitiously. The same thing happens with other digital assistants. An Apple whistleblower confessed to having “heard people talking about their cancer, referring to dead relatives, religion, sexuality, pornography, politics, school, relationships, or drugs with no intention to activate Siri whatsoever.” The police might be interested in getting access to that data. Alexa recordings have already been used in all kinds of legal cases, from proving infidelity in a divorce case and identifying drug users in a household to providing evidence in murder cases. If the police can access recordings made by our devices at home, how is that different from having the police living under our roof? We would never be at ease having the police living in our homes, so why do we invite Alexa in? Aren’t we uncomfortably close to building a police state, or at the very least the structure that could support an almost omniscient police state?

    In the 1990s, we owned the objects we bought. Today we still pay for our phones and doorbells, but they work for other people, and often against our own interests. And of course it’s not just AirTags, smart doorbells, smartphones, and Alexas. It’s your smartwatch, and your smart TV, and car, and electricity meter, and kettle, and laundry machine. Everything “smart” is a spy. And while every piece of data may seem uninteresting and innocuous, you would not believe how precise a picture emerges from joining the dots of all those data points. 

    Data creation and data collection will only increase if we continue the trend towards augmented and virtual reality. These technologies will want to collect much more data about everything, from your indoor spaces to the movement of your eyes. Eye-tracking technology will be crucial in creating a rich digital environment. It is likely that virtual reality will mimic human sight, which focuses on something and blurs the background. If everything is equally salient, it is harder to navigate your surroundings and you can easily get motion sickness. To simulate our natural visual experience by offering low-quality images in your peripheral vision and high-quality images on what you focus on, the tech needs to identify what you want to pay attention to. Eye-tracking is the most important source of information for that. Relatedly, eye-tracking can be used to increase the user’s ability to direct and control her experience.

    Unfortunately, your gaze can be incredibly revealing. Your eye movements, iris texture, and pupil size and reactions can inform others about your identity (through iris recognition), state of mind (e.g., if you are distracted), emotions (e.g., if you are afraid), cognitive abilities (based on factors like how long you look at something before acting), your likes and dislikes (including your sexual interests), your level of fatigue (through analyzing your blinking), whether you are intoxicated, and your health status (by looking for patterns of eye movements that might be symptomatic of problems such as Alzheimer’s or schizophrenia). Even if some of these inferences might be scientifically questionable, experience suggests that companies are likely to try their luck with them anyway.

    As we create and collect so much personal data, it is becoming ever more difficult to avoid surveillance. Even if you leave your phone at home (I know, a big if), you might still get caught by surveillance through dozens of cameras as you go about your day. If we plaster our cities with sensors of various kinds, there is no opting out or escaping it. The danger is in the long term. Surveillance is a slow-acting poison. Its consequences are not immediately apparent. All of which leads to the surveillance delusion: the mistaken belief that surveillance has many advantages and no significant costs. For every individual decision, surveillance can seem like an attractive solution in the short term, when we imagine that all goes exactly as planned: it seems to keep us safer, it helps us track what we care about. But the long-term and systemic effects of surveillance are often overlooked. Under the surveillance delusion, only the benefits of surveillance are valued, and surveillance is understood to be a convenient solution to problems that could be solved through less intrusive means. But in the long run surveillance often creates weightier problems for democracy than the ones it can solve.

    Democracy is a complex house with many pillars sustaining it, and it can crumble so slowly that we might not know immediately when we are undermining it. Journalism, for example, “the fourth estate,” has long been considered an important pillar of democracy. Citizens have to be well informed enough about their society to be able to make autonomous democratic decisions, such as whom to vote for. When we reduce privacy, we weaken journalism. In July 2021, a leak revealed that more than fifty thousand human rights activists, academics, lawyers, and journalists around the world had been targeted by authoritarian governments using Pegasus, a hacking software sold by the Israeli surveillance company NSO Group. It is probably not a coincidence that the most represented country among the people who were targeted with spyware, Mexico, is also the deadliest country in the world for journalists, accounting for almost a third of journalists killed worldwide in 2020. When journalists do not have privacy, they cannot keep themselves or their sources safe. As a result, people stop going to journalists to tell their stories, and journalists quit their jobs before they lose their lives, or they focus on safe stories, and investigative journalism slowly dies, thereby gravely hurting democracy. 

    Some people think that if surveillance is done by corporations and not the government, the concern is lessened. Others think the opposite: that if surveillance is done by the government and only by the government, we will be safe. Both views are wrong: corporate surveillance is as dangerous as government surveillance and vice versa, and even peer-to-peer surveillance undermines ways of life that are supportive of freedom and democracy. 

    Giving too much personal data to governments will grant them too much power, which can support authoritarian tendencies. As I have argued, surveillance tools afford control, and when governments hold too much control over the population they become authoritarian. You might happen to trust your current government, but you cannot be sure that you will trust the next government. And you cannot be sure that a foreign power will not hack the data held by your government, or even invade your country. One of the first stops for Nazis in a newly invaded city was the registry, because it held the personal data that would lead them to Jews. The best predictor that something will happen in the future is that it has already happened in the past, and personal data has already been used to perpetrate genocide. A contemporary Nazi regime with access to the kind of fine-grained data we are collecting would be all but impossible to defeat. That alone makes surveillance reckless. China is using its surveillance apparatus against “enemies of the state”: from minorities such as the Uyghurs and the Tibetans to the defenders of democracy in Hong Kong. We must dismantle architectures of surveillance before they get used against us.

    Corporate surveillance is just as much of a problem. First, any data collected by companies can — and often does — end up in the hands of governments, whether through governments purchasing data, legitimately acquiring it (e.g., through a warrant or subpoena), or hacking it. In practical terms, corporate and government surveillance are indistinguishable. Moreover, corporations do not have our best interest at heart, and these days they are certainly not guardians of democracy or the common good. Thanks to corporate surveillance you can be unfairly discriminated against for a job, or insurance, or a loan. And personal data can be used to produce personalized propaganda, pit citizens against one another, and undermine civic friendship and democracy. Companies, after all, think of themselves as answerable only to shareholders. 

    Corporate surveillance is all the more worrying in the case of companies that can become more powerful than entire countries. Once again, this worry gives us reason to learn from old colonialism. At its height, the East India Company was the largest corporation in the world, and it had twice as many soldiers as the British government. Among its many sins were slave trafficking, facilitating the opium trade, exacerbating rural poverty and famine, and looting India. A senior official of the old Mughal regime in Bengal wrote in his diaries: “Indians were tortured to disclose their treasure; cities, towns and villages ransacked; jaghires and provinces purloined.” So it’s not only that powerful corporations can violate human rights. To some extent, they can also act like states when they are the protagonists of colonialism. As William Dalrymple puts it, 


    We still talk about the British conquering India, but that phrase disguises a more sinister reality. It was not the British government that seized India at the end of the 18th century, but a dangerously unregulated private company headquartered in one small office, five windows wide, in London, and managed in India by an unstable sociopath — Clive.

    Just like at the end of the eighteenth century, corporations are leading colonialism in the twenty-first century. This time round it is big tech doing the looting (of our privacy, at the very least). They are the entities setting the agenda and imposing a culture of exposure around the world. Big tech companies benefit from our spending as much time as possible on their devices and platforms, sharing as much personal data as possible — which is why they sell the idea of exposure as a virtue: tell us what you feel, where you go, what you eat, what you think about other people, what worries you, and how we can make money off you. And if you don’t want to tell? Well, that must be because you have something to hide, which in big-tech-speak is not about protecting yourself from wrongdoers but about being a wrongdoer yourself. Big tech colonialism shames us into exposure for profit, and in doing so it poisons the public sphere. 

    Cultures of exposure are another good example of how surveillance leads to control. The pressure to overshare encourages social vices such as stalking and witch-hunting. If everyone is pressured into exposing their opinions and habits, it is only a matter of time before someone finds some of them objectionable and starts hunting people for their views. It is remarkable that something that used to be regarded as inappropriate — exhibitionism — has now morphed into a social imperative — transparency. Some measure of transparency is certainly appropriate when it comes to institutions — but not when applied to individual citizens. Both exhibitionism and social policing foster “either-you’re-with-us-or-against-us” mentalities and thereby jeopardize civic friendship. 

    Liberal democracies aim to allow as much freedom to citizens as possible while ensuring that the rights of all are respected. They enforce only the necessary limits so that citizens can pursue their ideal of the good life without interfering with one another. But for a liberal order to work, it is not only governments and corporations that have to give citizens a space free from unnecessary invasions; citizens have to let one another be as well. Civility requires that citizens exercise restraint in the public sphere, especially regarding what we think of one another. To expect people to be saints is unreasonable. “Everyone is entitled to commit murder in the imagination once in a while,” as Thomas Nagel has remarked. If we push people to share more than they otherwise would, we will end up with a more toxic environment than if we encourage people to edit or curate or limit what they bring into the public sphere. A culture of exposure invites us to share our imaginary acts of murder, needlessly pitting us against each other. Sparing each other from our less palatable facets is not a vice, but a virtue. Protecting privacy — our own and that of others — is a civic duty.

    Totalitarian societies tend to match institutional surveillance with peer-to-peer surveillance to achieve near-total control of the population. During China’s Cultural Revolution, people were encouraged to denounce their neighbors and even their family members. Children sent their parents to their deaths. The same thing happened in Stalin’s Soviet Union. The East German Stasi used an astonishingly high number of informants to infiltrate the general population. When we use social media for trolling, witch-hunting, and publicly shaming others, we behave more like subjects of totalitarian states than like citizens of free societies. 

    We resist the colonialism of digitization partly through culture. We defy digital colonialism when we value the analog, the unrecorded, the untracked. Tibetan Buddhist monks have a tradition of spending days creating beautifully intricate mandalas using colored sand. When they finish their work of art, they sweep it all away in a ceremony. The sand is collected in a jar which is wrapped in silk and taken to a river, where it is scattered. Sand mandalas are a homage to impermanence. Unlike paintings, which strive to resist the passage of time, sand mandalas are there to remind us that there is beauty in the ephemeral. 

    We challenge digital colonialism when we enjoy life without wanting to freeze it into a photograph. We resist totalitarianism when we decline to publicly shame someone for a mistake that anyone could have made. We preserve intimacy when we allow a conversation to go unrecorded. We stand up for democracy when we buy a paper book at a bookshop using cash. 

    Yet culture is not enough. We also need the right technology. Architectures of surveillance afford control over the population. Our current technology — all of it the result of engineering and corporate decisions, and none of it inevitable in its present configurations — is priming society for an authoritarian takeover. Analog technology is more respectful of citizens. We could also make digital technology less intrusive by creating and collecting less personal data, by periodically deleting data, and by improving our cybersecurity standards. In a global context in which a country such as China is exporting surveillance equipment to around one hundred and fifty countries, the job of liberal democracies is to be a counterweight to that authoritarian influence by exporting privacy through culture, technology, and legal standards.

    We need the right regulation to match culture and technology, because collective action problems can only be solved through collective action responses. For starters, we should ban the sale of personal data. As long as personal data can be bought and sold, companies will not resist the double temptation of creating and collecting as much of it as possible, and then selling it to the highest bidder. The trade in personal data is jeopardizing democracy through personalized propaganda. We do not sell votes, and for many of the same reasons we should not sell personal data. 

    We should also limit the purview of the digital. Asking technology companies not to digitize the world is like asking builders to please refrain from paving over natural spaces. Unless society sets legal limits, profit-seeking will reign. Corporations will sell our democracies if it is lucrative enough and we let them. Governments create protected areas to restrain the impulse to build over every square inch. We need similar protected areas from surveillance. It is in the very nature of big tech to turn the analog into digital, but turning everything into a spy is a threat to freedom and democracy. Full digitization equals total surveillance. There is some data that is better not to create. There is some information that is better not to store. There are some experiences that are better left unrecorded. 

    Just over a decade ago, enjoying digital technology was a luxury. Increasingly, luxury is being able to enjoy space and time away from digital technology. Spaces that are free of digital technology stimulate deeper connections between people, more honest conversations, free experimentation, the enjoyment of nature, being grounded in our embodiment, and embracing lived experience. That is why Silicon Valley elites are raising their children without screens. 

    We need urgently to defend the analog world for everyone. If we let virtual reality proliferate without limits, surveillance will be equally limitless. If we do not set some ground rules now on what should not be digitized and augmented, then virtual reality will steamroll privacy, and with it, healthy democracies, freedom, and well-being. It is close to midnight. 

     

    The Autocrat’s War

    The Emperor Nicholas was alone in his accustomed writing-room in the Palace of Czarskoe Selo, when he came to the resolve. He took no counsel. He rang a bell. Presently an officer of his Staff stood before him. To him he gave his orders for the occupation of [the Danubian] Principalities. Afterwards he told Count Orloff what he had done. Count Orloff became grave, and said, “This is war.” 

    Alexander William Kinglake

    The Invasion of the Crimea, 1863 

    Alexander William Kinglake, the nineteenth-century British travel writer and historian who published a history of the Crimean War in eight volumes, could hardly have known how and in what surroundings Nicholas I made the fateful decision that caused the declaration of war by the Ottomans. In the imagination of nineteenth-century historians and writers, wars were the products of high politics, and the Crimean War, one of the most senseless, ridiculous, and tragic defeats in Russian history, was commonly blamed upon the Russian tsar and his abysmal vanity, arrogance, religious fanaticism, and nationalism. Court historiographers spilled a lot of ink trying to exonerate Nicholas I and shift the blame for launching the bloody war onto Russia’s treacherous allies and insidious rivals.

    It is therefore even more surprising that Nikolai Chernyshevsky — Russia’s first revolutionary democrat, who apparently read Kinglake’s volume in his prison cell at the Peter and Paul Fortress in 1863 — also thought that the tsar was not the guilty party: “Who shed these rivers of blood? … Who? Oh, if only conscience and facts had allowed us to think ‘the late sovereign,’ how good this would have been! The late tsar is long dead, and we would not have to worry about Russia’s future…. But, my dear reader, neither the dead tsar nor the government is guilty of the Sevastopol war.” 

    According to Chernyshevsky, the main suspect was the Russian educated “public,” which had laid the blame on the dead tsar and lived on without punishment or remorse: “The public is immortal; it does not resign, and there is no hope that this persona that caused the Crimean war ceases to represent the Russian nation and to have great influence upon its fate.” With no deference to the greatness of Russian poets and writers, including Pushkin, Chernyshevsky blamed them for impressing on the minds of light-minded Russians fantasies of taking Constantinople and beating the Ottomans on their own land.

    To be sure, nobody wanted the war, and only when they kissed their loved ones farewell did the same people who had carelessly joked about the “Russian Bosporus” understand what the war was about. Russia suffered a humiliating defeat, senselessly wasting thousands of lives and millions of rubles. Yet the horrors of the Crimean war, even if only seen through the eyes of Russian soldiers and not their Turkish (or British and French) counterparts, were soon forgotten. 

    Not long after the shameful debacle, the government approved the establishment of a “Slavic committee” in Moscow that aimed to “prevent” and anticipate Western influence upon the Southern Slavs of the Ottoman Empire. Twenty years later, Nicholas’ son Alexander II waged another war against the Turks, claiming to protect the Christian population of the Ottoman Empire. The second Eastern war in 1877–1878 was a military success, but most importantly, it was a propagandistic triumph that took off the table the question of responsibility for another imperialist adventure. Clearly the government had learned the lesson of the Crimean embarrassment: dealing with the questions of causality and responsibility had to be an integral part of the war effort and strategy.

    The catastrophic war against Ukraine that started in 2014 and entered a bloodier phase in February 2022 has already produced heated debates about its causes. The question of whether this is “Putin’s war,” or “Russia’s war,” or “the Russians’ war” echoes Chernyshevsky’s dilemma, but the answers, usually emotional and spontaneous, express the incomprehensibility of violence rather than a serious attempt to understand the roots of the disaster. Writers habitually compare Putin’s Russia to Hitler’s Germany, drawing parallels between the lethargic character of the Germans’ denial of Nazi crimes and the Russian public’s support of war in Ukraine. While this comparison points to a plausible diagnosis — a peculiar intellectual antibiosis of society — the causes of the disease in its respective settings are most likely different. In any case, current debates about whom to blame often simplify the issue, operating with imprecise categories and ignoring the context. Scholarly analysis will have to frame the problem more broadly and more sharply, considering the role and responsibility of the autocrat and the ruling clique not only in waging the war but also in turning the majority of the population into their supporters and accomplices. 

    While a cold and dispassionate analysis of the genesis of the current war may seem improbable at the moment, there is one thing that we can do: look back at past conflicts and analyze how Russia’s wars usually began. This comparison suggests that the formulas discussed above — “one man’s war” or a “nation’s war” — are themselves products of rhetorical attempts either to celebrate or to exonerate rulers and to shift responsibility for waging the conflicts, whether successful or failed, onto society. Wars belong to a particular category of events that are always shrouded in mythology: state propaganda doubles its efforts when it deals with armed conflicts. In the panoply of myths, one persistent trope stands out. It describes the archetypal scenario of a war’s outset; and Russia’s failed wars were not only those that Russia lost militarily, but also those that did not follow the prescribed scenario, the ones that laid bare the ruler’s personal role. To deal with the problem of causality and responsibility, however, it is important to distinguish the rituals of launching wars from the actual political mechanisms of their enactment. 

    As the war in Ukraine grinds on, it is illuminating to consider the precedents of Russia’s imperial wars of the nineteenth and early twentieth centuries, so as to trace how the wars began, how those beginnings were described, and what those beginnings tell us about the range of responsibility for unleashing violence. Despite the time lapse, the comparison between the politics of war in imperial Russia and in contemporary Russia is useful and legitimate: as Putin’s persistent references to the Russian imperial legacy demonstrate, he intentionally and unintentionally emulates the old mechanisms of autocratic governance. Wars, and not domestic reforms, however “great” they may have been, represented the main mechanism of legitimation in autocracies. Almost all the rulers of the Romanov dynasty fought at least one war during their rule. It is reasonable, therefore, to suggest that autocracies do not merely share a general inclination toward violence, but also display similar mechanisms of geopolitical decision-making. At the outset of war, the key moment of every monarch’s rule, an autocrat claims a complete authority that in peaceful times may appear limited and constrained.

             This complete authority, the way in which war is used to strengthen dictatorial power, may not be put fully on display. To justify war, an autocrat may cite an alleged provocation from below or a popular demand to which he responds. He may shift the burden of responsibility for human losses onto his advisers while accruing to himself the political benefits of victories. For this reason, the real mechanisms of war politics should be critically examined. And there is the additional question of the role of society. Does it bear responsibility for the violence, as Chernyshevsky thought? Does society have agency in an autocratic state, and does the autocrat take “public opinion” into account? Additionally, is collective responsibility a useful category, or should only individual perpetrators or groups and organizations take the stand in the makeshift court of history? 

    Let us begin with the role of the autocrat. In Timothy Frye’s wise and counter-intuitive words, “Recognizing Putin as an autocrat … brings into sharp focus the inherent limits of his power that are common to autocratic rule.” Historians of Russian autocracy concur with this observation: at no point in Russian history, they say, was a Muscovite tsar, an empress, or an emperor fully “autocratic” in making their choices and decisions. Boyar clans, unofficial parties at court, groups of ministers, court favorites, and lobbyists all worked toward forming the sovereign’s will and making him or her deliver the right decision at the right moment. Mikhail Dolbilov describes this process as “divining,” that is, “constructing” the ruler’s will and couching it in the language of laws and orders. 

    The interactions between the tsars and the advisers, however, were never one-sided: monarchs manipulated people masterfully, exploiting contradictions and conflicts among their favorites and courtiers, artificially sharpening disagreements, and shifting moral and political responsibility for crucial decisions onto representatives of the elites. In addition to these informal networks of power, autocrats also relied on a variety of political bodies. In both monarchical Russia and Putin’s autocracy, legislative chambers and political offices have existed mainly to legitimize the rulers’ decisions and to bind political elites by shared responsibility. There is also the class of technocrats and bureaucrats who bear the burden of governance and execute the monarch’s orders. We may conclude, therefore, that “autocratic will” is a complex set of mechanisms based on the preponderance of informal practices, customs, and rituals over rules and laws.

    But when it comes to wars, the traditional rituals and practices of decision-making prove moot. The role of government usually recedes into the background, and the autocrat surrounds himself with unofficial advisers, often shifting gears mid-course, dismissing trusted politicians and bringing forward new people and favorites from the inner circle. Such famous bureaucrat-reformers as Mikhail Speransky and Sergei Witte both lost their leading positions on the eve of wars, in 1812 and 1903 respectively. Speransky’s fall was staged as a tragedy: sending his State Secretary off into exile, Alexander I cried and lamented that he was sacrificing his adviser for the safety of the empire in view of Napoleon’s imminent invasion. The replacement of Speransky with nationalist conservative politicians represented a part of the pre-war drama, but in reality it reflected the tsar’s efforts to strengthen his absolute authority.

    Witte’s story is also remarkable: a powerful minister of finance and the de facto head of the imperial government, he lost the political battle against a handful of unofficial advisers to the tsar, who pushed the emperor toward a more aggressive policy in the Far East that ultimately led to war with Japan in 1904–1905. Describing this episode in his memoir, Witte portrayed the poor gullible tsar as a weak-willed child, easily manipulated by a group of unscrupulous and militant politicians. The story, fairly accurate in its details, nevertheless follows the traditional scenario of a war’s beginning, featuring competition between pro-war and anti-war factions fighting for the tsar’s attention at court. In these competitions, the only winner was usually the monarch: launching wars was a way to get rid of importunate reformers, to consolidate supporters, to shake up the political establishment, and to refresh the absoluteness of the tsar’s authority. In the case of the Russo-Japanese War, the trick failed, leading to revolution and the constitutional reform of 1905–1906 that stripped the tsar of some monarchical prerogatives.

    Until the end of the tsarist regime, war and foreign policy remained within the protected sphere of the tsar’s personal rule. The narratives of the wars’ origins, stripped of the long preamble of diplomatic negotiations, were consequently staged as the dramas of the tsar’s choice between different camps, actors, and opinions. Even though unleashing the war was always the tsar’s personal choice and decision, the rhetoric and rituals of war dramas required the presence of others — noble defenders of the empire’s honor, faint-hearted bureaucrats, or evil instigators of violence. The scenarios of wars were designed in such a way that the autocrat was always at the center — and yet never alone. A typical plot of a “good” war as portrayed in the official myths always included 1) attempts at reconciliation and the ruler’s patient search for peace; 2) the people’s demand, and the advisers’ suggestion, to act more decisively; 3) the tsar’s reluctance to shed the blood of his soldiers; and, ultimately, 4) his determination to make the sacrifice for the sake of the empire’s honor and peace.

    It is important to keep in mind, however, that the conventional plot of the war drama differed from the real politics of autocratic decision-making. Consider the example of the Russo-Turkish war of 1877–1878. Although Putin has never referred to it (perhaps because Turkey remains one of Russia’s somewhat infidel allies), the official narrative of that war, as well as the model of interactions between its political actors, eerily resembles the situation on the eve of Russia’s invasion of Ukraine this year. The Russo-Turkish war is usually portrayed as a war for the liberation of the Slavs of the Ottoman Empire, a reaction to Turkish atrocities in Bulgaria and Herzegovina. According to the traditional narrative, Alexander II reluctantly agreed to step in after Russia’s diplomatic efforts to resolve the Eastern crisis had brought no results, while a collective “Europe” demonstrated a cold indifference to the fate of Christians in the Ottoman Empire. The lofty rhetoric of liberation was meant to hide the fact that Russia was ultimately the aggressor; and although it did not plan to incorporate Slavic lands into its territorial domain — it “only” wanted to create dependent satellite states in the Balkans — Russia ended up seizing a portion of Ottoman territory in Eastern Anatolia.

    The official narrative of the outbreak of the Russo-Turkish war resembles the libretto of a nineteenth-century opera with a plot developed on two levels — the crowd scenes (the Russian public cheering on the Slavs) and the main drama at the tsar’s court and within the imperial family. The crowd is stirred by the news about the Turks’ atrocities and it demands justice; and promptly produced paintings of pale-skinned Slavic women tortured by dark-skinned barbarous Turks provided the perfect scenery. Troops of volunteers march to the Balkans, and peasants and poor folk send in their modest donations to help their Slavic brethren. Meanwhile, in the imperial palace, the tsar is tragically torn between his human compassion and his duty as the Russian monarch to put the interests of his people above all. At court, there are two forces pulling in different directions: one, exemplified by the tsar’s top bureaucrats, argues for caution and restraint; another, represented by the Empress Maria Aleksandrovna and the heir to the throne, the future Alexander III, is fully on the side of the bellicose public and the champions of the Slavs. The defense minister Dmitry Miliutin listens to the “outpourings” of the sovereign’s heart and records them in his diary. The tsar is sad and alone. His “hollow cheeks” and his eyes swollen with tears betray his sufferings; his health is deteriorating. He stoically withstands unfair criticism for indecisiveness and passivity, yet he is tormented by doubts. The tsar feels for the poor Slavs, yet he knows that all blame for the losses and the casualties of war always “falls on those who make the first step.” And the poignancy of the tsar’s dilemma contrasts with the coldness of Russia’s European counterparts, especially Austria-Hungary and Britain, who cynically pursue their political interests, faking support of the Balkan Slavs. And after a few months of honest attempts to make the Turks change their policy, Alexander II concludes that Russia cannot avoid the war and resolves to act.

    The Russo-Turkish war became a turning point in Russian politics, marking the end of the era of the Great Reforms and prompting the reorientation of Russian domestic and international policies. Even if Alexander’s trepidations were sincere and he came to believe in Russia’s mission to liberate the Slavs, there is no doubt that he used the split of the elites to his political advantage and manipulated the groups at court as well as his family. Wars are almost never the outcomes of external factors alone: to understand their sources, one must also look inside and analyze the domestic tensions among the ruler, the elites, and interest groups.

    When it comes to the current war in Ukraine, we do not yet have the luxury of first-hand accounts, but political analysts and intelligence reports suggest that Putin, just like Nicholas I, made this decision in solitude. Putin turned his obsession with Ukraine’s resilience in the face of Russia’s pressure into a state matter, a creed that keeps his close friends and allies together. Little is known about Putin’s inner circle; but the public appearances — and disappearances — of certain statesmen and politicians allow us to deduce that since the beginning of the invasion in February the narrow group of trusted friends and advisers has become smaller and tighter, while the role of technocrats has become entirely subsidiary. The government’s influence has been significantly reduced, and the role of the Security Council, chaired by the president but unofficially led by Nikolai Patrushev, Putin’s old friend and a former head of the FSB, has increased. All those who remained in power were compelled to publicly express their support of the “special operation in Ukraine.” 

    In this case, as in multiple episodes from the war history of the Russian Empire, the decision came from the autocrat who, as the ritual prescribes, solicited advice from the people and the elites. The meeting of the Security Council, broadcast on Russian state television, showed a handful of top officials who, visibly shaken and in trembling voices, gave their consent to the invasion. Yet if we look beyond the ritual, wars in autocracies are always the ruler’s wars. When it comes to the decision to fight a war, the “inherently limited” power of autocrats becomes, in fact, unlimited. Wars represent a way to build and maintain autocracies, even if they can also lead to their collapse.

    Let us now return to Chernyshevsky’s question: does a people, or just the educated part of it known as “the public,” bear responsibility for unleashing the war? In the aftermath of the Crimean War, people considered themselves the victims of Nicholas I’s regime. Yet in 1877 the situation looked different. The second Eastern war was portrayed as a war by popular demand that was almost forcefully imposed on the tsar. True, the ideas of cultural and political patronage over Balkan Slavs in the 1870s had gained popularity in Russian society. The so-called Slavic Committees in Moscow, St. Petersburg, and provincial cities initially focused on strengthening cultural unity and on humanitarian help, but after the suppression of the rebellion in Herzegovina in 1875 they switched to more active support of the “insurgents,” sending supplies and recruiting volunteers to fight for the freedom of Slavic “brethren.” The flip side of this activity was, indeed, the rise of anti-Turkish sentiments. The government publicly demonstrated its neutrality and non-involvement; it also quietly tried to lean on the Slavic committees and to channel the outpouring of pro-Slav emotions in the right direction.

    But did these pan-Slavic circles — allegedly grassroots organizations, in fact supported and patronized by the ruling elite — represent “society”? A closer look at Russia’s political landscape of the 1870s shows that it is almost impossible to draw a line separating the “state” from the “public.” Russia did not have legal political parties until 1905, and the public sphere was closely policed by the government. As a result, a handful of conservative journalists and writers — Mikhail Katkov, Vladimir Meshcherskii, Ivan Aksakov — dominated the public mind, controlled the flows of information, and formed the language of public debates. Their influence was not, however, limited to the public. Katkov and Meshcherskii were privy to the ruling circles: along with the tsar and other members of the elite, they were the main stakeholders in the campaign against the Ottomans. In contrast, liberal and democratic proponents of the Slavic cause were repressed, silenced, and exiled. The sad irony of the pro-Slavic campaign of 1876 lay in the fact that in the same year when the tsar resolved to support the autonomy of Slavs in the Ottoman Empire, he signed the ill-famed Ems edict prohibiting both the publication of books and theatrical productions in the Ukrainian language, scornfully called a “dialect” in this law. As the Ukrainian historian and politician Mykhailo Drahomanov long ago pointed out, the “liberation” of Ottoman Slavs by the anti-liberal Russian Empire, where Slavic peoples, including Ukrainians and Poles, were deprived of even basic elements of autonomy, was a misnomer. Moreover, as Drahomanov observed, all talk about public initiative in support of the Slavs made no sense: the “unofficial Russia” that championed the campaign was indistinguishable from the “official” one.

    Drahomanov’s words could easily be used to describe the situation in contemporary Russia. Those who are now allowed to speak on behalf of society are closely linked to the state; those who disagree with the state’s policy have been silenced and jailed, or they have had to emigrate or go underground. Most of the millions of people who support Putin and his plans of imperial revival know little about the world outside Russia, or even outside their town; they have been raised on state propaganda and are unwilling to question the veracity of the myths that it produces. They are excited about military victories because no different ideas have ever been inculcated in their minds, whether by the schools or by the Orthodox Church. Many of them live in misery and abandonment, and they seek emotional comfort not in kindness and compassion but in an illusory victory in a “special operation.”

    Wars have always been portrayed as the moments of unification between the autocrat and the masses — a kind of political communion, a shared national epiphany. The war consensus transcends the bureaucratic buffer that, in peaceful times, stands between the ruler and his subjects. When, in the 1870s, nationalists celebrated these consummations of unity, others saw the attempt to drag simple folk into war politics as cynical and dangerous. As Prince Petr Viazemskii remarked, “The people cannot wish the war but inadvertently push toward it … The government silently lures the people into this political chaos, and they may pay for it dearly.” Putin justified the invasion by citing the sufferings of the Russian-speaking population in eastern Ukraine, which was allegedly striving for autonomy and aspiring to strengthen ties with Russia. The orchestrated pro-Russian demonstrations and marches in Donetsk and Luhansk replicated almost verbatim the process of building up pro-war, pro-Slav, and anti-Turkish sentiments in the 1870s, as did the public euphoria in Russia in response to the annexation of Crimea in 2014. In lieu of the “barbarous” Turks, there are the Ukrainian “Nazis,” who, according to Putin, tormented the population in eastern Ukraine. The people — duped by state propaganda — may express nationalist sentiments, but the autocrat never really takes them into consideration when he gives the order to attack.

    It is important, therefore, to make a distinction between the rhetorical references to public support and the reality of the decision-making process. The historian David McDonald, commenting on certain assumptions about the state’s responsiveness to nationalist sentiments in pre-revolutionary Russia, has rightly observed that these assumptions “neglect the finer mechanisms of causation and overlook the fact that imperial statesmen were highly reluctant to cede any voice at all to society in matters of foreign policy. While public opinion played an episodic role in the discussion of foreign policy, as a state matter, such issues could be considered only by professional officials responsible to His Imperial Majesty.” In autocratic orders, the notion of a war by popular demand is nonsense. 

    The ruler, in other words, does not care what an average Russian, or Russian society as a whole, thinks about Ukraine or the Ottoman Empire. Although a significant part of the Russian population supports the war now, it did not cause the war, and the majority had been opposed to the idea of armed conflict before the invasion. Of course, there were Russian writers, most notably Dostoyevsky, who penned pan-Slavic articles, and journalists who created the racist images of Turks, and there have been Russian politicians and intellectuals who have haughtily refused to recognize the cultural and political sovereignty of Ukraine; and they are all responsible for endorsing violence. Every soldier who has pulled a trigger, launched a missile, or thrown a grenade is complicit; every governor or theater director who has voiced support for the “special operation” bears the guilt for the lost lives of innocent Ukrainians. But all the individual responsibilities for these actions do not add up to the collective responsibility of “the Russians.” The notion of collective responsibility allows war criminals and the outspoken supporters of violence to escape judgment. The “responsibility of nations” often means no one’s guilt. 

    Does this suggest that Chernyshevsky was wrong in blaming “the public” and not the tsar for the horrors of the Crimean War? Not exactly. He was right in predicting that Russian educated society would fail to comprehend the simple thought that any war, victorious or not, is hideous, that war cannot be a source of glory and dignity, for either a man or an empire. This thought remained alien to the Russian nobility, which continued to seek honor on the battlefield, and, with the exception of Tolstoy, the holy fool of Russian literature, the thought did not find expression in literary works. It is therefore very important to understand how and why a hostility to war, or an aversion to it, failed to develop in a country where every single family has lost at least one member in one of the many wars fought in the last hundred years. Chernyshevsky was also right in pointing out the cultural hauteur of the Russian literary elite who inculcated in their “public” a sense of imperial superiority — over Turks, Europe, Ukrainians, and others. This sense of superiority now fuels support for the Ukraine war among contemporary Russians. As many commentators have already pointed out, Russia has as yet failed to go through the process of de-imperialization and reckoning with its imperial (pre-revolutionary and Soviet) past.

    Another theme that emerges with surprising persistence in the “who is to blame?” debates concerns the responsibility of the West for “provoking” Putin. The West must not repent for offending Putin and injuring his self-esteem, because doing so would only play into the autocrat’s hands. Putin’s propaganda openly justifies the aggression by alleging the hostility of Western powers who have turned Ukraine into a playground for their military operations against Russia. The motif of this putative Western threat is another cliché, copy-pasted from a typical scenario of Russian imperial warfare, in which almost every war, wherever it took place, was seen as a war against a collective “West.” Putin’s remarks last June about the world order as divided into two camps, namely sovereign states and their colonies, and his attempts to present this war as Russia’s defense of its sovereignty against the West, repeat almost verbatim the ideas of Russian nineteenth-century nationalists.

    Mikhail Katkov, one of the main proponents of war against the Ottomans, thought that only by isolating itself from the West could Russia avoid the sad fate of falling into economic and political dependence on Europe. Isolationism — the rejection of shared cultural, legal, financial, and political standards and values — appeared to be a way of regaining and strengthening Russian independence. Indeed, the war with Turkey in 1877–1878 eventually turned into a civilizational clash with Europe and ended two decades of Russia’s fine attempts at Westernization and reforms. The most visible manifestation of Russia’s anti-Westernism was the reactionary reign of Alexander III, his official Slavophilism and imperialist policy. For its part, the de-Westernization of Russia in 2022 will be remembered for the disappearance of McDonald’s, empty shopping malls, and shortages of imported consumer goods — but there have been less visible and more profound changes in the systems of education, industry, and finance. Russian universities and academic institutions have been cut off from networks of international cooperation, investors have walked away from the country’s economy, and Russian producers have to learn from scratch how to replace imported parts and machines.

    The Russian invasion of Ukraine seemed improbable until the last moment, because it defied rationality and threatened to ruin the Russian economy and inflict unthinkable losses on the Russian population. Yet its economic irrationality has been twisted by the autocrat to prove the unselfishness of the war’s goals, and to demonstrate Russia’s uniqueness and difference from the obnoxiously pragmatic and materialistic West. Prince Dmitrii Obolenskii expressed this mood on the eve of the Russo-Turkish war: “I know that we have no money. I know that the generals are bad…. But this does not matter, because the main question is, What are we?” As in 1877, Russian authorities in 2022 high-mindedly boast about their altruism, although the main burden of war, as always, falls on the shoulders of the poor. No one can predict the human and material costs — for Russia, Ukraine, and the entire world — of the current war, but we must make sure that this accounting is made and all the losses are tallied, and that the people who inflicted the losses bear the responsibility.

    The invasion of Ukraine has affected not only the physical dimensions of people’s existence but also the way they experience time, place, and history. Historical planes have shifted, dumping Russia into a temporal pit without a future and with a questionable past. Putin, who directs this bloody drama, suspends the historical specificity of these events by constantly referring to his crowned predecessors and following the imperial scenarios of war, as if an atemporal pattern, a Russian destiny, were simply being re-enacted. This suspension of time is not accidental — for Putin’s regime, war has turned into a mode of existence, an endless present, an eschatological battle without a strategy and a timeline. Some of Putin’s critics have inadvertently fallen into his trap, mistaking his rhetoric for reality. Instead of studying Russia’s imperial past to understand the precise mechanisms of autocratic power and thereby untangle the jumble and mess of Putin’s ideas, they look back to the past in order to revert to meta-historical stereotypes and clichés so as to judge and accuse. The discourse about “the Russians’ war” is often built on poorly understood historical parallels and assumptions regarding Russians’ genetic propensity for violence and their inability to develop an inner sense of freedom. This invidious essentializing is the mirror image of the pro-war Slavophile nonsense about the mystical singularity of Holy Russia.

    The analysis of the causes of this terrible war should look beyond the rhetorical fog of Putin’s propaganda and include the serious treatment of the politics of war and the structure — the logic — of autocratic power. At what point, and why, does an autocrat resolve to initiate a war? Which elements and factors in the internal dynamics of an autocratic system trigger aggression? Why do the mechanisms of restraint not work? We must also begin a careful historical inquiry into how (and whether) Russian society has dealt with the problems of violence and responsibility. Ritual repentance on Facebook pages on behalf of the Russian nation will remain useless until we understand the actual causes of war. And when the time comes, the people responsible for the horrors of the current war will (I hope) face judgment, and courts will establish the guilt of individuals complicit in encouraging, supporting, financing, or justifying the war. There is a significant nexus, analytical and moral, between causality and culpability. 

     

    Taste, Bad Taste, and Franz Liszt

    I

    My title may appear provocative, but I doubt whether anyone is likely to disagree that of all the great composers Liszt is the one most frequently accused of bad taste, and also that the accusation has never threatened his status among the great. Indeed, as Charles Rosen once suggested, the accusation in some sense actually identifies Liszt’s particular position in the pantheon.

    Rosen put it in the form of a trumped-up paradox, saying of Liszt that his “early works are vulgar and great; the late works are admirable and minor.” Very cagey, this: Liszt’s most-admired works, say the Faust-Symphonie or the B-minor Sonata, came in between. Take away the invidious comparison, and take away the sophistry, and Rosen’s point still resonates. But take away the vulgarity, and Liszt is no longer Liszt. Reviewing the first volume of Alan Walker’s biography of Liszt in the New York Review of Books, Rosen went even further in his baiting, asserting that “to comprehend Liszt’s greatness one needs a suspension of distaste, a momentary renunciation of musical scruples.” And then, for good measure: “Only a view of Liszt that places the Second Hungarian Rhapsody in the center of his work will do him justice.” 

    That was not an endorsement of the Rhapsody, which Rosen, along with Hanslick and Bartók, thought “trivial and second-rate.” What made the provocation doubly surefire was the racial innuendo that tainted not only Liszt and the Rhapsody, but all who came in contact with them. Did not Pierre Boulez say of Bartók that his “most admired works are often the least good, the ones which come closest to the dubious-taste, Liszt-gypsy tradition”? And does that not go a long way toward accounting for Bartók’s overt hostility toward a tradition, that of the so-called verbunkos, on which he remained covertly dependent? The taint even tainted the tainter — all of which was simply too much for Alfred Brendel, who, exasperated, took Rosen’s bait:

    Though enjoying, once in a while, some of the Hungarian Rhapsodies and operatic paraphrases, I wince at Charles Rosen’s assertion [that] in the matter of taste, no composer could be more vulnerable than Liszt. . . . In contrast to Charles Rosen, I consider it a principal task of the Liszt player to cultivate such scruples [as Rosen bids us renounce], and distil the essence of Liszt’s nobility. This obligation is linked to the privilege of choosing from Liszt’s enormous output works that offer both originality and finish, generosity and control, dignity and fire. 

    I sympathize with Brendel’s aversion to Rosen’s deliberately annoying formulations, but I find Brendel’s fastidiousness insufficiently generous toward Liszt and the impulses that his work embodies, which, though not always noble, are undoubtedly great. Rosen came closer than Brendel did to pinpointing the fascination that Liszt exerted over his times, and continues to exert over us. Especially worthy of pursuit is Rosen’s most irritating pronouncement of all: “Good taste,” he teased, “is a barrier to an understanding and appreciation of the nineteenth century.” 

    If the remark grates, it is because of the aspersion it seems to cast on the century that now looms in retrospect as the greatest century of all for music — or at least as the century in which music was accorded the greatest value. But suppose we read the aspersion the other way — as a critique of good taste? Ever since reading the Rosen-Brendel exchange a quarter of a century ago, I have had an itch to use Liszt and his reception as a tool to situate good taste (along with greatness) in social and intellectual history, and to fathom the profound ambivalence with which virtuosity has always been regarded.

    So let me begin again, with another quotation — something that has been rattling in my head even longer, more than half a century now. When I was an undergraduate, I read Thomas Mann’s last novel, The Confessions of Felix Krull, Confidence Man. At one point the social-climbing title character receives guidance from a nobleman, the Marquis de Venosta, whose world he wants to crash. Among the many insights that the Marquis offers him is this: “You come, as one now sees, of a good family — with us members of the nobility, one simply says ‘of family’; only the bourgeois can come of a good family.” 

    What does this mean? What is the difference between “family” and “good family”? What it seems to come down to is that “family” is an existential category, while “good family” is an aspirational one. The bourgeoisie is the aspiring class. The aristocracy simply is. And so it is with “taste” and “good taste.” “Taste” is something the elect possess and exercise without calculation or necessary self-awareness. “Good taste” is exhibited rather than exercised: it is something attributed to the maker of deliberate and calculated choices in recognition of their correctness, as a mark of social approval. “Taste” is a matter of predilection, “good taste” is a matter of profession. A display of good taste is a mark of aspiration to social approbation, and the standard to which exhibitors of good taste must aspire is never their own. To show good taste is to seek admission to an elite station which the possessor of “taste” occupies as an entitlement. A show of good taste is thus never a mark of election; rather, it marks one as an outsider wanting in. It implies submission as well as aspiration, hence inhibition. Like Felix Krull, people who display their good taste are trying to crash a social world.

    Recall now the famous words that Haydn spoke to Leopold Mozart in February 1785:

    Before God, and as an honest man, I tell you that your son is the greatest composer known to me either in person or by name. He has taste, and, what is more, the most profound knowledge of composition. 

    Imagine for a moment that Haydn had said to Leopold not that Wolfgang “has taste,” but that “he has good taste.” The compliment would have crumbled. “Taste” (Geschmack), in the sense that Haydn used the word, was an existential category. Either you were of the elect or you weren’t; and if you did not have taste as a birthright you could not acquire it, even if you had “the most profound knowledge of composition.”

    But what did it consist of? In this context, clearly enough, “taste” was an unerring and intuitive insider awareness of what was fitting. The closest any musician came to enunciating such a definition may have been Johann Mattheson in 1744, at the outset of a chapter entitled “Vom musikalischen Geschmack” in a book devoted to the aesthetics of opera:

    Taste, figuratively speaking, is the inner awareness, preference, and judgment by which our intellect impinges upon sensory matters. If, as Pliny would have it, the tongue has a mind of its own, so the mind can be said to have its own tongue, with which it tastes and evaluates the objects of its attention. 

    In that figurative sense, “taste” was comparable to the securely inculcated breeding that the Marquis de Venosta had in mind when he distinguished “family” from “good family.” 

    Mattheson’s ingenious, opportunistic inversion of a dimly remembered Pliny provides a link between the gustatory and the derivative or conceptual meanings of the term, while also giving off an echo of its social history; for as soon as the word “taste” was elevated beyond its purely sensory meaning in the seventeenth century, it connoted an attribute of aristocracy. The sociologist Stephen Mennell locates that origin at the French court, where members of the old noblesse d’épée, threatened by the ever-aspiring, ever-rising bourgeoisie, secured positions at court as “specialists in the art of consumption” (at first of food), developing hierarchies of taste and codes of behavior that stressed the restraint of gluttony and refinement of table manners. Taste had become a metaphor for discrimination. 

    The turn from food to art as the arena for the exercise of taste can be traced first in Italy. Giulio Mancini, the personal physician to Pope Urban VIII and a famous collector of fine painting, equated gusto and giudizio (taste and judgment) in his Considerazioni sulla pittura, an essay published in 1623. Half a century later, in 1670, the attempt to acquire taste without breeding was satirized for all time in Molière’s Le bourgeois gentilhomme. The butt of the satire could be described, long avant la lettre, as “good taste,” which was the quality or attainment to which Monsieur Jourdain aspired. Good taste, in effect, was imitation taste, not the real thing.

    The notion of taste as an absolute standard, sanctioned by a consensus of the capable (“men of sentiment”) and associated in the first instance with one of David Hume’s most famous essays, has persisted since the eighteenth century despite the rise of less intransigent definitions. Its staying power is attributable to the conviction, among the politically conservative, that (to quote Wye J. Allanbrook) “the agreement of cultivated people about what is good and beautiful was a force for the political cohesion of the community” and a support, or occasional pinch-hitter, for hereditary aristocracy. As Schiller emphasized in On the Aesthetic Education of Man in 1794, “No privilege, no autocracy of any kind, is tolerated where taste rules”; but that is because taste itself offered an alternative standard of excellence, working through positive rather than negative reinforcement (the promise of esteem replacing the threat of coercion) to internalize the pressure. Where its autonomy and universality are believed in, spontaneous fellow-feeling and disinterested fraternity can seem to rule. But such belief, far from spontaneous, must be cultivated, or rather, instilled. 

    A century and more after Schiller, T. S. Eliot echoed his sentiments when he defined “the function of criticism” as “roughly speaking, . . . the elucidation of works of art and the correction of taste.” This was the formulation of a man who would shortly declare himself to be “classicist in literature, royalist in politics, and Anglo-Catholic in religion.” The word for it, and it has become a fighting word, is elitism.

    Where Eliot went, Stravinsky tagged dependably behind. In the Poétique musicale, his own pinnacle of intransigence delivered at Harvard a decade later, in 1939–1940, Stravinsky devoted the last of his six leçons ostensibly to musical performance, but in fact he made it clear from the outset that the subject matter of the lecture, which outwardly took the form of a diatribe against virtuosos expressly intended as a correction of taste, was in fact d’ordre éthique plutôt que d’ordre esthétique — “of an ethical rather than of an aesthetic order.” At the height of his dudgeon, Stravinsky declared: “Whereas all social activities are regulated by rules of etiquette and good breeding, performers are still in most cases entirely unaware of the elementary precepts of musical civility, that is to say of musical good breeding — a matter of common decency that a child may learn.” And yet, when invoking the grand thème de la soumission, the “great principle of submission,” that runs like a thread through all six lessons, Stravinsky contradicts himself, proclaiming instead that “this submission demands a flexibility that itself requires, along with technical mastery, a sense of tradition and, commanding the whole, an aristocratic culture that is not merely a question of acquired learning.” There is your existential taste: something that one possesses as a birthright, as an aristocrat possesses (and is possessed by) “family.”

    How far this is, we are apt to think, from our colloquial concept of taste as mere personal preference, the thing that is proverbially beyond dispute. That definition, too, has a long history, going back to the anonymous Latin maxim — De gustibus non est disputandum — that everybody knows. That maxim, however, is less ancient than it might appear. It is by no classical author. Its origin, rather, is presumed to be medieval and scholastic by virtue of its concern to distinguish between matters open to reason and persuasion and those which philosophers, or at least scholastics, had better leave alone. As the economists George J. Stigler and Gary S. Becker put it, at the outset of a famous article in which they broke the old taboo and embarked on a path that led, for one of them, to the Nobel Prize:

    The venerable admonition not to quarrel over tastes is commonly interpreted as advice to terminate a dispute when it has been resolved into a difference of tastes, presumably because there is no further room for rational persuasion. Tastes are the unchallengeable axioms of a man’s behavior. 

    Taste as axiomatic (and professed) personal preference seems a bulwark of personal autonomy, a democratic or egalitarian notion. As Liszt himself once said, “It is a matter of taste whether the old or the new is more charming. Taste is quite certainly a personal thing.” But consider this story, which will bring us back to music. It comes from a famous pamphlet, Comparaison de la musique italienne et de la musique française, issued in 1704 by Jean Laurent Lecerf de la Viéville, Lord of Freneuse, in answer to a like-named pamphlet, Paralèle des Italiens et des Français, issued in 1702 by another French aristocrat, Abbé François Raguenet. As Lecerf relates, a courtier fond of the brilliance and grandeur of Italian music brought before King Louis XIV a young violinist who had studied under the finest Italian masters for several years, and bade him play the most dazzling piece he knew. When he was finished, the king sent for one of his own violinists and asked the man for a simple air from Cadmus et Hermione, an opera by his own court composer, Jean-Baptiste Lully. The violinist was mediocre, the air was plain, nor was Cadmus by any means one of Lully’s most impressive works. But when the air was finished, the king turned to the courtier and said, “All I can say, sir, is that that is my taste.” 

    The king did effectively put an end to the argument by invoking his taste, but was that because there can be no disputing tastes or because there can be no talking back to a king? Lecerf’s argument with Raguenet, who had waxed rapturous about the voices of castrati, was really all about authority, not taste. In disputes or assertions regarding tastes, authority has many surrogates. Among professionals, including musical professionals, the chief surrogate is experience. Consider this famous footnote from Johann David Heinichen’s thoroughbass treatise of 1725.

    If experience is needed in any art or science, it is certainly needed in music. . . . But why must we seek experience? I will give you one little word that encompasses the three basic requirements in music (talent, knowledge, and experience), and its heart and its outer limits as well, and all in four letters: Goût. Through application, talent, and experience, a composer needs to acquire above all an exceptional sense of taste in music. The distinguishing feature of a composer with well-developed taste is simply the skill with which he makes music pleasing to and beloved by the general, educated public; in other words, the skills by which he pleases our ear and moves our sensibilities. . . . An exceptional sense of taste is the philosopher’s stone and principal musical mystery by means of which the emotions are unlocked and the senses won over. 

    This is the kind of taste — something acquirable through labor and application (provided one has good instruction), hence available not only to the aristocracy of birth but also to an aristocracy of talent and training — to which Francesco Geminiani referred in the title of A Treatise of Good Taste in the Art of Musick (c. 1749), a title that on the surface might seem to offer a counterexample to the distinction between “taste” and “good taste.” In the body of the treatise, however, Geminiani (who had lived in London since 1714 and was writing in idiomatic English) usually inserts the indefinite article before “good taste.” Thus, at the beginning of the preface: “The Envy that generally attends every new Discovery in the Arts and Sciences, has hitherto deferr’d my publishing these rules of Singing and Playing in a good Taste”; and, at the end: “Thus I have collected and explain’d all the Ingredients of a good Taste.”

    That indefinite article does a lot of work: it is incompatible with both of the categories of taste with which we are concerned, whether with “taste” as the superior existential endowment Haydn attributed to Mozart, or with the “good taste” in which Liszt was held by Rosen and Brendel to be deficient. When you put Geminiani’s odd usage together with the title of his previous treatise, to which A Treatise of Good Taste was a supplement and on which it was dependent — that is, Rules for Playing in a True Taste on the Violin, German Flute, Violoncello, and Harpsichord (London, c. 1745) — it is clear that the two expressions “a good taste” and “a true taste” are interchangeable equivalents of “correct (or elegant) style.” And indeed, it turns out that A Treatise of Good Taste is merely a manual on embellishment, consisting of a table of ornaments followed by models for application, chiefly to familiar Scots airs furnished with a thoroughbass. As Robert Donington comments in his foreword to the facsimile edition:

    “Good taste” was almost a technical term of the period. It was used not merely for a refined and cultured attitude toward music in general; it was used for a refined and cultured ability to invent more or less improvised ornamentation for melodies often notated in plain outline, but requiring such ornamentation in order to be given a complete performance. 

    Corroboration of this usage in eighteenth-century English comes from Dr. Burney, who in his musical travelogue of 1771 defined “taste” as “the adding, diminishing, or changing [of] a melody, or passage, with judgement and propriety, and in such a manner as to improve it.” In short, therefore, and ironically, Geminiani’s brand of “good taste,” insofar as it implies the addition of impromptu passagework to written compositions, virtually coincides with the “bad taste” of which Liszt and his contemporaries would be accused a century after Geminiani’s time, and up to the present day. It did not take long for fashions to start changing. At the very end of his General History of Music, in the twelfth chapter of the fourth volume, published in 1789, devoted to the “General State of Music in England at our National Theatres, Public Gardens, and Concerts, during the Present Century,” the same Dr. Burney wrote off Geminiani’s guides to “a good taste” as having appeared “too soon for the present times. Indeed, a treatise on good taste in dress, during the reign of Queen Elizabeth, would now be as useful to a tailor or milliner, as the rules of taste in Music, forty years ago, to a modern musician.” 

    II

    Yet insofar as Geminiani offered instruction in correct practice, his good taste did imply submission to a standard, a matter of meeting expectations. The taste or ability about which Heinichen and Geminiani wrote was not the personal preference of any particular performer or composer, nor of the authors themselves, nor even the consciously formulated demand of the “general, educated public.” Effort and education can give us all equal access to correct style: the taste of one is (or ought to be) the taste of all. It is on the promise to impart that universal taste, which all successful composers must master, that the authority of Heinichen’s or Geminiani’s manuals depended. It was an authority that, in the guise of classicism, could become authoritarian. 

    Take, for example, Voltaire’s article on Goût in the seventh volume of Diderot and d’Alembert’s Encyclopédie, issued in 1757 — the same year as Hume’s seminal essay, but expressing what seems to be a pre-Humean formulation, in which l’homme du goût, “the man of taste” (compare Hume’s “men of sentiment” or the Marquis de Venosta’s “person of family”) is expressly equated with le connoisseur, the one who knows the rules of style as the gourmet knows the rules of the kitchen and the dining table. “If the gourmet immediately perceives and recognizes a mixture of two liqueurs, so the man of taste, the connoisseur, will see at a glance any mixture of styles” — and, of course, disapprove. The standard is one of purism, and failure to meet it constitutes le goût dépravé, debased taste, otherwise known more simply as bad taste. When Voltaire admits the phrase un bon goût, it is as the back-formed opposite of un mauvais goût. Only the latter can be personal. As an idiosyncrasy it is tantamount to a flaw that one must eliminate so as to restore the universal norm, which is simply le goût, with no qualifier. “They say there is no point disputing tastes,” Voltaire concedes:

    and this is right enough when it is only a matter of sensory taste, . . . because one cannot correct defective organs. It is different with the arts; as their beauties are real, there is a good taste that discerns them and a bad taste that does not; and the mental defect that gives rise to a wayward taste can often be corrected. 

    Here Voltaire anticipates Eliot: taste, for him, is no mere matter of fallible individual preference, but one of conformity to an established criterion, hence subject to correction. From there, Voltaire connects “good taste” to the idea of perfected style, or what literary historians would eventually christen “classicism”:

    The taste of an entire nation can be corrupted. This misfortune usually comes about after periods of perfection. Artists, for fear of being imitators, seek untraveled paths; they flee the natural beauty that their predecessors had embraced; there is some merit in their efforts; this merit covers their faults; the novelty-besotted public runs after them; it soon loses interest, however, and others appear who make new efforts to please; they flee even further from nature; taste disappears amid a welter of novelties that quickly give way one to another; the public no longer knows where it is, and it longs in vain for the age of good taste that will never return. It has become a relic that a few sound minds now safeguard far from the crowd.

    This wholly aristocratic, existential notion of “good taste,” ever resistant to destabilizing innovation, is a decreed taste, sanctioned by tradition. Still a child of the seventeenth century, Voltaire locates its source dogmatically in “nature.” D’Alembert, the editor of the Encyclopédie, in an appendix to Voltaire’s article, somewhat modernizes (that is, relativizes) Voltaire’s position by vesting the power of decree in “philosophy,” which at least implies human agency:

    In matters of taste, a smattering of philosophy can lead us astray, while philosophy better understood can bring us back. It is an insult to literature and philosophy alike to think that they could harm or exclude one another. Everything that pertains, not only to our way of thinking, but also to our way of feeling, is philosophy’s true domain. . . . How could the true spirit of philosophy be opposed to good taste? On the contrary, it is its strongest support, because this spirit consists in returning everything to its true principles, in recognizing that every art has its own particular nature, each condition of the soul its own character, each thing its own particular tint—in one word, that one should never transgress the limits of a given genre. 

    These extracts exhaust references to le bon goût (rather than the more usual goût, unmodified) in the Encyclopédie. The addition of the adjective does not change the meaning; “good taste” here does not differ from “taste” tout court, the sense of suitability that Haydn recognized as Mozart’s mark of election. And that is because the philosophes located the criterion of correct discrimination not in the perceiving subject but in the object perceived, rightly apprehended according to “its own particular nature,” of which philosophy is the arbiter. To acquire taste, on the encyclopedists’ terms, one had to submit to their authority. It became a task for a new cohort of eighteenth-century thinkers to emancipate the notion of taste from that of external authority, while nevertheless remaining faithful to the idea of its universality or its status as what Kant called a sensus communis, a “common sense,” meaning “a sense shared by all.” This required some fancy skating.

    Kant’s solution was to posit that taste was subjective in that it concerned not the properties of objects but the pleasure or displeasure of contemplating subjects. Hence “it is absolutely impossible to give a definite objective principle of taste . . . for then the judgment would not be one of taste at all.” And yet such reactions were ideally universal because they derived from a faculty possessed by humans, only by humans, and by all humans. Within Kant’s careful definitions, all have taste, and all have the same taste. It must, therefore, enjoy “a title to subjective universality,” or what we now somewhat less paradoxically call intersubjectivity.

    Evidence of universality is to be sought in consensus, which must be discernible despite the great variety in subjective preference that strikes the casual observer. For Hume, this made it all the more imperative to seek, or establish, “a Standard of Taste: a rule, by which the various sentiments of men may be reconciled; at least, a decision, afforded, confirming one sentiment, and condemning another.” The problem for Enlightened theories of universal taste was that of outliers, people of ostensibly normal endowment who nevertheless diverged from the intersubjective consensus. Is it possible to speak of “wrong” taste, even if, as Kant maintained (and as everyone beginning with Hume seems to agree), “the judgment of taste is . . . not a judgment of cognition,” and therefore cannot be considered factual? If there can be wrong taste, then there can be bad taste; and if there is bad taste, then there can be normative good taste — something that can be aspired to. We are approaching the crux of our problem.

    The most ingenious attempt to account for wrong taste within a universalist theory of taste is found in the introduction to Edmund Burke’s famous Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, first published in 1757, the same bumper year that saw the publication of both the seventh volume of the Encyclopédie and Hume’s essay on taste. Having defined taste as “that faculty or those faculties of the mind, which are affected with, or which form a judgment of, the works of imagination and the elegant arts,” Burke invoked John Locke’s distinction between wit and judgment. “Mr. Locke,” he writes, “very justly and finely observes of wit, that it is chiefly conversant in tracing resemblances: he remarks, at the same time, that the business of judgment is rather in finding differences.” As we know from experience, wit is much the more pleasurable function, as the perception of resemblances is a matter of immediate sensibility, whereas the discrimination of differences requires expertise and mental effort. Thus, Burke argues, taste being a judgment, its exercise is more or less correct depending not upon what he calls “a superior principle in men,” but rather “upon superior knowledge,” in the sense of wide acquaintance.

    That is the crucial move. Once we postulate that taste is not a simple idea but a compound of sensibility and knowledge, it follows that a deficiency of taste can be the result of a deficiency in either category. “From a defect in [sensibility],” Burke writes, “arises a want of taste,” which is to say an inability or disinclination to render any judgment at all; whereas “a weakness in [knowledge] constitutes a wrong or a bad [taste].” This passage, coeval with Voltaire’s Encyclopédie entry but the work of a newer breed of thinker, constitutes, to my knowledge, the earliest recognition that there can be such a thing as bad taste, as distinct from a want of taste. The former can only be deplored or pitied, as it was by Voltaire and by Mann’s Marquis de Venosta, whereas one can aspire, with Burke or Eliot, to correct the latter.

    The consequences of this distinction are far-reaching, and baleful; and Burke, to his credit, did not flinch from them. If “the cause of a wrong taste” is “a defect of judgment,” he allowed, then the mis-evaluation of works of art

    may arise from a natural weakness of understanding, . . . or, which is much more commonly the case, it may arise from . . . ignorance, inattention, prejudice, rashness, levity, obstinacy, in short, all those passions, and all those vices, which pervert the judgment in other matters, prejudice it no less in this its more refined and elegant province. 

    But if “bad or wrong taste” can be taken as a symptom of vice or perversion, the door has been opened wide to abuse. Burke recognizes this in an especially pregnant passage that enlarges upon an earlier point — that discrimination diminishes rather than enhances pleasure because it lessens the number of objects from which we can naively derive satisfaction.

    The judgment is for the greater part employed in throwing stumbling-blocks in the way of the imagination, in dissipating the scenes of its enchantment, and in tying us down to the disagreeable yoke of our reason: for almost the only pleasure that men have in judging better than others, consists in a sort of conscious pride and superiority, which arises from thinking rightly; but then, this is an indirect pleasure, a pleasure which does not immediately result from the object which is under contemplation.

    What we are witnessing here is the birth, or at least the christening, of aesthetic snobbery, which is always and only social snobbery in disguise. An indirect or even perverse pleasure it may be, but snobbery is a powerful pleasure; and Burke’s explanation of snobbery, as the sole compensation we receive for the loss of immediacy and naive pleasure that our critical judgment exacts from us, is the best account I have ever encountered of its value to snobs (a category that at times — let’s admit it — tempts us all). It amounts also to an account and critique of aspirational “good taste,” which arises alongside and in response to aesthetic snobbery, the most quintessentially bourgeois of all snobberies, and might even be deemed tantamount to it.

    It is not taste (pace Stravinsky) but “good taste” that conflates aesthetic and moral quality, and sits in judgment over them conjointly. Since it is the bastard child of snobbery, “good taste” requires the ever more exacting exercise of negative judgment. Forgetting, or affecting to reject, the Kantian proviso that taste is a property not of contemplated objects but of contemplating subjects, “good taste” constructs spurious existential categories such as “kitsch,” a term that arose in the course of the emergence we are now tracing (and Google can tell you how often it is attached to Liszt). As snobbery’s surrogate, aspirational “good taste” easily turns competitive. Critics who earn followings do so (as Louis Menand smirked about Pauline Kael) because they have recognized, and pander to, “the truth” that “people, at least educated people, like not to like movies, especially movies other people like, even more than they like to like them.” 

    The conjoint promise of safety and self-congratulation gives one an incentive to expand the range of objects one can consign to the outer darkness, so as to maximize one’s “conscious pride and superiority,” to recall Burke’s more elegant expression. Hence such impressive works of pseudo-scholarship as Gillo Dorfles’ extravagant compendium Il Kitsch: Antologia del cattivo gusto, published in Milan in 1968 and translated into English as Kitsch: The World of Bad Taste, which contains, alongside what anyone might expect (Nazi and Soviet poster art, eroticized religious images, the Mona Lisa imprinted on bath towels and eyeglass cases), several items that can only have been calculated to shock the reader by their inclusion, such as New York’s Cloisters, the museum of medieval art endowed by John D. Rockefeller in 1938. A caption explains that “The structure is entirely modern but incorporates authentic architectural features from the cloisters of medieval monasteries. Authentic objects and works of art are displayed in the halls, which are always full of tourists.” We are left in little doubt as to what — or rather, who — the aspersion is meant to degrade. 

    The inevitable race to the limit in the fastidious exercise of captious “good taste” was well captured by Joseph Wood Krutch in 1956 when reviewing a book by Mary McCarthy, an especially exigent arbiter. “Her method is one of the safest,” he remarked.

    If you deny permanent significance to every new book or play time will prove you right in much more than nine cases out of ten. If you damn what others praise there is always the possibility that your intelligence and taste are superior. But if you permit yourself to praise something then some other superior person can always take you down by saying “So that is the sort of thing you like.” 

    That fear afflicts performers as well as critics. There is a coruscating passage on taste in the treatise Du chant, from 1920, by Reynaldo Hahn, the singer, composer, and voice teacher who perhaps better than any other musician — and not only because he was Marcel Proust’s lover — embodies the spirit of the belle époque, a time synonymous with elegance, as elegance may be thought synonymous with taste. But the writing drips with sarcasm:

    When singing is not directed by the heart (and you know that one cannot lightly command the service of the heart), when singing is not guided by feeling, by understanding, by the direct outpourings of the heart, it is taste that assumes control, directing and presiding over everything. Then it must be everywhere at once, acting in a hundred different ways. Think of it! Every detail of the vocal offering must be submitted to the dictates of taste.

    Let me be precise. By taste, I do not mean that superior and transcendent ability to comprehend what is beautiful which leads to good esthetic judgment. In fact, we cannot ask all singers to be people of superior taste, since such a requirement would reduce still further the very limited number of possible singers. By taste, I mean a wide-ranging instinct, a sure and rapid perception of even the smallest matters, a particular sensitivity of the spirit which prompts us to reject spontaneously whatever would appear as a blemish in a given context, would alter or weaken a feeling, distort a meaning, accentuate an error, run counter to the purposes of art.

    I repeat: A particular sensitivity of the spirit is necessary in this sort of taste, as well as emotion and a certain fear of ridicule. It is no doubt for this reason that women display a better sense of taste in singing than men. 

    A certain fear of ridicule. It is obvious that Hahn is speaking not of existential but aspirational taste; taste that hedges against the depredations of snobs, who censor idiosyncrasy along with sincerity and force artists (and especially, in Hahn’s bigoted view, those of the weaker sex) to retreat into what Russell Lynes, the social historian of art, in a famous article in 1949 that proclaimed a new social order based not on “wealth or family” but on “high thinking,” derided as the “entirely inoffensive and essentially characterless” precincts of “good taste.”

    Of course, Lynes was writing in the age of Rosen and Brendel, and describes a late stage in the socio-aesthetic process whose beginnings Edmund Burke had charted long before the stultifying category of “good taste” had gained momentum, although he may be said to have predicted it. At the end of his discussion of (universal) taste, Burke notes optimistically that “the taste . . . is improved exactly as we improve our judgment, by extending our knowledge, by a steady attention to our object, and by frequent exercise.” To boil it down to a formula, he proposes that taste = judgment = knowledge, and he who knows most judges best. Appeals to the ignorant, therefore, are subversive of taste, because they thwart the advancement of knowledge. Those who seek, or gain, the applause of the ignorant are threats to the maturation of taste.

    III

    The stage has been set for our hero. But before he enters, there remains one last matter to broach, namely the ambiguous character of virtuosity and the ambivalent attitude toward it in Liszt’s day on the part, not of audiences, surely, but of the newly professionalized class of tastemakers — what Liszt, in exasperation, called “the aristocracy of mediocrity.” 

    Gillen D’Arcy Wood, a social historian of literature and music and their interrelations under romanticism, identifies Liszt’s wry phrase with “an increasingly influential middle-class cultural regime that wished to be purified of virtuosic display,” an aspiration he calls, straightforwardly enough, virtuosophobia. Virtuosophobia is obviously akin to what the literary historian Jonas Barish called “the antitheatrical prejudice,” in a book that traced — from ancient Greece to the middle of the twentieth century — the curious inconsistency whereby “most epithets derived from the arts” — words such as poetic or epic or lyric or musical or graphic or sculptural — “are laudatory when applied to the other arts, or to life,” with the conspicuous exception of terms derived from the theater, such as theatrical or operatic or melodramatic or stagey, which, by contrast, “tend to be hostile or belittling.” One reason for the antitheatrical prejudice is that theatrical acting, being by definition an act of dissembling, transgresses against ideals of sincerity; and virtuosos are often similarly accused, the terrific effect of their performances being unrelated, or not necessarily related, to genuine feeling.

    This was an observation constantly made about Liszt during his lifetime, and not always invidiously. His American pupil Amy Fay, who attended his master classes in Weimar in 1873, wrote in her memoir, Music Study in Germany, that

    when Liszt plays anything pathetic, it sounds as if he had been through everything, and opens all one’s wounds afresh. . . . [He] knows the influence he has on people, for he always fixes his eyes on some one of us when he plays, and I believe he tries to wring our hearts. . . . But I doubt if he feels any particular emotion himself when he is piercing you through with his rendering. He is simply hearing every tone, knowing exactly what effect he wishes to produce and how to do it. 

    To Liszt’s manner, Fay contrasted that of Joseph Joachim (once Liszt’s protégé, later his most zealous detractor) who exemplified the submissive and antitheatrical attitude later associated with Werktreue. Where the one was “a complete actor who intends to carry away the public,” the other was (that is, acted) “totally oblivious of it.” Where the one “subdues the people to him by the very way he walks on the stage,” the other is “‘the quiet gentleman artist’ who advances in the most unpretentious way, but as he adjusts his violin he looks his audience over with the calm air of a musical monarch, as much as to say, ‘I repose wholly on my art, and I’ve no need of any “ways or manners.”’” 

    Which of course is also a means of taking possession of one’s public. What Fay described were two species of charismatic — that is, histrionic — “ways or manners,” as she surely knew. (And Liszt was well aware of the alternative species. Describing the charismatic playing of John Field, he showed the same subtle irony as Fay describing Joachim: “It would be impossible to imagine a more unabashed indifference to the public. . . . He enchanted the public without knowing it or wishing it. . . . His calm was all but sleepy, and could be neither disturbed nor affected by thoughts of the impression his playing made on his hearers [since] art was for him in itself sufficient reward.”) The affectation of quiet absorption was the truly romantic (“disinterested”) attitude, as was the antitheatrical prejudice itself and the virtuosophobia that was its musical outlet; for it was romanticism that made a fetish of sincerity. As early as 1855, in a famous letter to Clara Schumann explaining his defection from Liszt’s orchestra in Weimar, Joachim broadened the antitheatrical, virtuosophobic rhetoric to encompass Liszt’s compositions as well, focusing on the sacred works as especially flagrant breaches of propriety. By the end of the passage, it is impossible to separate the bad taste of Liszt the composer from that of Liszt the performer as the butt of Joachim’s righteous indignation.

    For a long time now I have not seen such bitter deception as in Liszt’s compositions; I must admit that the vulgar misuse of sacred forms, that a disgusting coquetterie with the loftiest feelings in the service of effect was never intended — the mood of despair, the emotion of sorrow, with which the truly devout man is raised up to God, Liszt mixes with saccharine sentimentality and the look of a martyr at the conductor’s podium, so that one hears the falseness of every note and sees the falseness of every action. 

    Most explicit of all was Nietzsche. In Der Fall Wagner he asked, rhetorically, where Wagner belonged, and his answer went beyond Wagner to indict Wagner’s father-in-law as well. Wagner belongs “not in the history of music. What does he signify nevertheless in that history? The emergence of the actor in music: a capital event that invites thought, perhaps also fear. In a formula: ‘Wagner and Liszt.’” But at least Wagner did his acting in the theater. About Liszt, who turned instrumental performance into a branch of theater, one can only think worse. Nietzsche’s peroration, in three italicized “demands,” points the final finger at the musician, not the actor, for music is brought down as the theatrical is elevated. “What are the three demands for which my wrath, my concern, my love of art has this time opened my mouth?” thunders Nietzsche. They are these:

    That the theater should not lord it over the arts.

    That the actor should not seduce those who are authentic.

    That music should not become an art of lying. 

    Nor can virtuosos ever be “disinterested,” to invoke Kant’s principal aesthetic yardstick. Like other theatrical performers, they are never without a Zweck, an ulterior purpose, namely to impress us into thunderous vanity-stroking applause and exorbitant pocket-lining expenditures; and our interest in their overcoming obstacles is a human, rather than an aesthetic, interest — the sort of interest that attends to the performances of athletes and prestidigitators as well as musicians. D’Arcy Wood gave this a social twist when writing of the “antagonism,” so evident in Georgian England, and especially when Liszt tried to storm its aesthetic barricades with so much less success than he had enjoyed on the continent, “between literary (and academic) culture and the sociable practices of music, between Romantic middle-class ‘virtues’ and aristocratic virtuosity.” 

    We are back again to the Marquis de Venosta, and the distinction between “family” and “good family.” The former is an unearned status; the latter, a reputation earned through the exercise of virtue — which demanded vigilance against virtue’s false cognate. Though etymologically descended from virtue, virtuosity, in the middle-class view, was sheer vice, inextricably associated with all the other vices, and that remains our incorrigibly Romantic, middle-class view today. The author of a serious scholarly book on Paganini, published in 2012, wanted to know, for example, whether “the greed, lust, pride, and vainglory that [were] manifested in multiple aspects of the virtuoso’s life [can] be viewed any longer as separate from the aesthetic of virtuoso performance.” 

    Hence one of the paradoxes of nineteenth-century musical reception that continues to haunt us in the twenty-first century is the simultaneous denigration of virtuosity and fetishizing of difficulty. To unpack it we might begin by returning to Edmund Burke and his famous treatise. The section on the sublime contains a short paragraph, seemingly an afterthought, on difficulty as a “source of greatness”:

    When any work seems to have required immense force and labor to effect it, the idea is grand. Stonehenge, neither for disposition nor ornament, has anything admirable; but those huge rude masses of stone, set on end, and piled each on other, turn the mind on the immense force necessary for such a work. Nay, the rudeness of the work increases this cause of grandeur, as it excludes the idea of art and contrivance; for dexterity produces another sort of effect, which is different enough from this. 

    Thus, difficulty overcome too dexterously is not sublime; or rather, the dexterous overcoming of difficulty destroys the sublime effect and vitiates the awe that it inspires. Substitute “virtuosity” for Burke’s “dexterity” and the reason will become apparent why the English critics who wrote about Liszt in the 1840s so belittled or even deplored his “transcendent” virtuosity, associating it with triviality rather than with grandeur. The very act of transcendence was virtuosity’s transgression — a transgression against the virtue of difficulty.

    The works of Beethoven were, in Burke’s sense, the Stonehenge of music. Even before his sketchbooks exposed “the immense force necessary for such a work” to the inquisitive eye, his labor was a proverbial struggle per aspera ad astra. And performing his music was likewise a proverbial struggle that it became a sacrilege to appear to transcend. The approved attitude toward Beethoven — the tasteful attitude — was Stravinsky’s grand thème de la soumission, epitomized in Artur Schnabel’s famous remark that “I am attracted only to music which I consider to be better than it can be performed. Therefore I feel (rightly or wrongly) that unless a piece of music presents a problem to me, a never-ending problem, it doesn’t interest me too much.” And if Schnabel’s piety represents the epitome, way beyond epitome was the British conductor Colin Davis, who said of Beethoven’s Missa solemnis, “It’s such a great work, it should never be performed.” 

    Beethoven’s unique social situation was bound up equally with the new attitude toward works and difficulty — or rather, the new valuation placed on old attitudes toward them — and with his removal from society as a result of deafness. It put Beethoven at the opposite social extreme from the virtuoso, who (like Beethoven himself in the earlier stages of his career) was sociability personified. Beethoven’s vaunted difficulty was abetted by his aristocratic patrons, while the virtuoso was seen as playing to the common crowd. The newly reified concept of artwork that Beethoven’s talent and fate so abetted is our concept still. It is what made possible the notion of “classical music,” which is to say, music conceptualized as a permanent and immutable object, at the same level of reification as the products of other artistic media like painting or sculpture: a concrete entity deserving the designation “work.” From something that elapses in time, music was thus reconceptualized as something that exists ontologically in an “imaginary museum,” as Lydia Goehr put it in the title of her celebrated book — a kind of notional space. 

    So let us imagine a reified musical work that way — as an article somehow located in a curated space. The humility so demonstratively voiced by Schnabel or Davis (whether or not we accept it at face value) is located below it. It looks up, like anything aspirational. But the attitude of the virtuoso — who transcends all difficulties, makes light of them, and makes everything seem easy (as the commonplace accolade would have it) — is located, like anything transcendent, above the work. It looks down. And therefore it is an arrogant crossing of an ethical line, a hubristic affront to aspiration; a fortiori, it is an affront to “good taste.” A London critic’s review of Liszt’s rendition of the Emperor Concerto, which casts him in the role of a bad curator, is a perfect summation of these strictures: “The many liberties he took with the text were evidence of no reverential feeling for the composer. The entire concerto seemed rather a glorification of self, than the heart-feeling of the loving disciple.” 

    And yet — as always — one man’s transgression is another man’s transcendence. There is always a more “spiritual” way of viewing virtuosity: as a literal triumph over the physical. Heine wrote that where others “shine by the dexterity with which they manipulate the stringed wood, . . . with Liszt one no longer thinks of difficulty overcome — the instrument disappears and the music reveals itself.” But then he immediately turns around and contradicts himself in his fascination, all but universally shared by those who experienced Liszt in the flesh, with the pianist’s physical presence, obsessing over his way of “brush[ing] his hair back over his brow several times,” turning his listeners into viewers, or rather voyeurs, who feel “at once anxious and blessed, but still more anxious.” The phobia, repressed, returns.

    The strongest avowal of virtuosophobia, the censorious distinction between virtuosity and difficulty, comes from Liszt himself, in the second of his so-called Baccalaureate Letters, published in the Parisian Gazette musicale on February 12, 1837, with a dedication to George Sand. The relevant passage runs as follows:

    In concert halls as well as in private drawing rooms . . . , I often played works of Beethoven, Weber, and Hummel, and I am ashamed to say that for the sake of winning the applause of a public which was slow in appreciating the sublime and beautiful, I did not scruple to change the pace and the ideas of the compositions; nay, I went so far in my frivolity as to interpolate runs and cadenzas which, to be sure, brought me the applause of the musically uneducated, but led me into paths which I fortunately soon abandoned. I cannot tell you how deeply I regret having thus made concessions to bad taste, which violated the spirit as well as the letter of the music. Since that time absolute reverence for the masterworks of our great men of genius has completely replaced that craving for originality and personal success which I had in the days too near my childhood. 

    Thus, with a presumed literary assist from Marie d’Agoult, Liszt accuses himself of mauvais goût, a locution that was still a novel one at the time of writing. But confessions can also be a form of boasting, and self-abasement a form of self-promotion. I think it pretty clear that Liszt, at that moment engaged in a very public rivalry with Sigismund Thalberg, was using the rhetoric of penitence and contrition in this way, as part of a campaign to show that he, and not his challengers, had become (to quote a famous passage from a letter he had written several years before) “an artist such as is required today.” That is to say, an artist who was abreast of the latest intellectual fashions, who was prepared to use the press to establish good public relations, and who was therefore able to maintain preeminence in the new era of publicity. Unlike his rivals, he was displaying himself as an artist who possessed both taste and “good taste,” who cultivated the aspirational posture, who looked up, not down, at “the masterworks of our great men of genius.”

    There is no reason to doubt the sincerity of Liszt’s aspirations. But as Kenneth Hamilton has observed, “numerous reviews of his concert tours of the 1840s indicate that [as of 1837], he cultivated an attitude akin to St. Augustine’s famous exhortation, ‘Oh Lord, grant me chastity — but not yet!’” He was still ready and able, in the words of Carl Reinecke, to “dazzle the ignorant throng.” Still, the social animus in that charge should caution us against too readily slapping a “populist” label on Liszt. Dana Gooley reminds us that some of Liszt’s concert practices suggest the opposite. He imposed higher ticket prices than did any of his contemporaries, which Gooley interprets as an attempt “to siphon out the middle bourgeoisie” and ensure that his recitals remained high-prestige events, not popular entertainments. The Baccalaureate Letters themselves show him striving to found his reputation on “his nearness to the intellectual and political elites of Paris,” the “cultural trendsetters.” 

    One of the most revealing portraits of Liszt the composer-performer in all the glorious inconsistency of his behavior, accurately reflecting the ambivalences of mores in transition, is the recollection of Vladimir Vasilievich Stasov, first published in 1889, of the great pianist’s debut in St. Petersburg forty-seven years earlier, in 1842:

    Everything about this concert was unusual. First of all, Liszt appeared alone on the stage throughout the entire concert: there were no other performers — no orchestra, singers or any other instrumental soloists whatsoever. This was something unheard of, utterly novel, even somewhat brazen. What conceit! What vanity! As if to say, “all you need is me. Listen only to me — you don’t need anyone else.” Then, this idea of having a small stage erected in the very center of the hall like an islet in the middle of an ocean, a throne high above the heads of the crowd, from which to pour forth his mighty torrents of sound. And then, what music he chose for his programs: not just piano pieces, his own, his true métier — no, this could not satisfy his boundless conceit — he had to be both an orchestra and human voices. He took Beethoven’s “Adelaïde,” Schubert’s songs — and dared to replace male and female voices, to play them on the piano alone! He took large orchestral works, overtures, symphonies — and played them too, all alone, in place of a whole orchestra, without any assistance, without the sound of a single violin, French horn, kettledrum! And in such an immense hall! What a strange fellow! 

    In a somewhat earlier memoir, “The Imperial School of Jurisprudence Some Forty Years Ago,” Stasov recalled that, after the first item on the program, Rossini’s William Tell overture, Liszt “moved swiftly to a second piano facing in the opposite direction. Throughout the concert he used these pianos alternately for each piece, facing first one, then the other half of the hall.” Stasov was seated near the composer Glinka, and overheard his conversation before the concert. When one noble lady, Mme. Palibina, asked Glinka whether he had already heard Liszt, Glinka replied that he had heard him the previous evening, at an aristocratic salon.

    “Well, then, what did you think of him?” inquired Glinka’s importunate friend. To my astonishment and indignation, Glinka replied, without the slightest hesitation, that sometimes Liszt played magnificently, like no one else in the world, but other times intolerably, in a highly affected manner, dragging tempi and adding to the works of others, even to those of Chopin, Beethoven, Weber, and Bach, a lot of embellishments of his own that were often tasteless, worthless, and meaningless. I was absolutely scandalized! What! How dare some “mediocre” Russian musician, who had not yet done anything in particular himself [by that time, Glinka had written both his operas!], talk like this about Liszt, the great genius over whom all Europe had gone mad! I was incensed. It seemed that Mme. Palibina did not fully share Glinka’s opinion either, for she remarked, laughingly, “Allons donc, allons donc, tout cela ce n’est que rivalité de métier!” [Come now, come now, all this is nothing but professional rivalry!] Glinka chuckled and, shrugging his shoulders, replied, “Perhaps so.” 

    So if Liszt knew enough to pay tribute, or at least lip service, to the new Romantic ideals, his public acclaim and his consummate, irrepressible virtuosity continued to threaten them. Even after his true capitulation to good taste, when he withdrew from the concert stage to devote himself to what was considered at the time a particularly high-minded species of modern composition, he was regarded as threatening by musicians with a different notion of high-mindedness. Liszt came to symbolize the danger of the mass audience and those who catered to it — a danger that his composing may have posed even more drastically, in the eyes of some, than his piano playing.

    In the later nineteenth century the chief threat to musical idealists was no longer posed by virtuosos, but by composers who subordinated musical values to mixed media: opera composers, to be sure, who as always commanded the largest and least discriminating audiences, but also — and worse — those who tried to turn their instrumental music into wordless operas, as Liszt did in his symphonic poems and programmatic symphonies. Whether embodied in the corruption of texts or in the corruption of media, the corruption that the fastidious really feared was the corruption of taste and mores, which looked to guardians of good taste like corruption of the flesh. In the early correspondence of Brahms and Joseph Joachim, the adjective Lisztisch was already a code word. In one letter, Joachim writes to Brahms of a certain passage that Brahms had written: “Es bleibt mir häßlich — ja verzeih’s — sogar Lisztisch!” (“I think it’s awful, even — forgive me — Lisztish!”). Or consider Brahms writing to Clara Schumann in 1869:

    Yesterday Otten [G. D. Otten, conductor of the Hamburg Philharmonic] was the first to introduce works by Liszt into a decent concert: “Loreley,” a song, and “Leonore” by Bürger, with melodramatic accompaniment. I was perfectly furious. I expect that he will bring out yet another symphonic poem before the winter is over. The disease spreads more and more and in any event extends the ass’ ears of the public and young composers alike. 

    This diagnosis of social pathology became quite explicit among the Brahmins, among whom Theodor Billroth, the famous surgeon, was the exemplary figure. Writing to the composer after a performance of Brahms’ First Symphony, Billroth gave voice to a new aristocracy of Bildung, of education, taste, and culture — or was it just Liszt’s old aristocracy of mediocrity?

    I wished I could hear it all by myself, in the dark, and began to understand [the Bavarian] King Ludwig’s private concerts. All the silly, everyday people who surround you in the concert hall and of whom in the best case maybe fifty have enough intellect and artistic feeling to grasp the essence of such a work at the first hearing — not to speak of understanding; all that upsets me in advance. 

    Billroth stands in a resistant line that gathered strength as it moved into the twentieth century: the modernist line that helped create the storied Great Divide between art and mass culture. It passes through Schoenberg — for whom “if it is art, it is not for all, and if it is for all, it is not art” — on its way to the likes of Adorno, Dwight Macdonald, and others who insisted that art identify itself in the twentieth century by creating elite occasions, which is to say occasions for exclusion. Liszt, with his generous and inclusive impulse, created many problems for that project.

    As the line of social resistance passed through the twentieth century it got ever shriller, culminating in the pronouncements we have sampled by Rosen and Brendel, allies in snobbery despite their disagreement over Liszt. Charles Rosen never claimed to be a historian (as anyone knows who has read the introduction to The Classical Style), but it takes a singular disregard of history to assert, as he did, that “‘good taste’ is a barrier to an understanding and appreciation of the nineteenth century,” when in fact good taste was the invention of the nineteenth century. It was the invention of the nineteenth-century bourgeois who aspired to the condition of royalty — Billroths who wanted to be Ludwigs, surgeons who wanted to be kings.

     

    In its present state of devolution, the line of good taste has descended to the likes of Jack Sullivan, whose Wikipedia entry identifies him as “an American literary scholar, professor, essayist, author, editor, musicologist, concert annotator, and short story writer,” and who was quoted in the New York Times, on the very day I was drafting the paragraph you are now reading, as complaining in the Carnegie Hall program book about the standard performing version of Chaikovsky’s Variations on a Rococo Theme for cello and orchestra, as revised after its premiere in 1877 by the original performer and dedicatee, Wilhelm Fitzenhagen, at the composer’s request. Under the impression that the original version was to be performed by Yo-Yo Ma, with Valeriy Gergiyev and the Mariyinsky Orchestra, and paraphrasing a letter to Chaikovsky from his publisher, Sullivan grumbled that Fitzenhagen had taken Chaikovsky’s “cannily constructed Neo-Classical piece and ‘celloed it up’ for his own grandstanding purposes.” Thrice-familiar strictures, these; as is the tone of social derision that the phrase “celloed up” (compare “gussied up” or “lawyered up”) is calculated to convey.

    In fact, like every self-respecting virtuoso, Yo-Yo Ma had played the Fitzenhagen version, which includes all the passages (like the famous octaves at the end) that have made the Variations a concert perennial instead of the rarity it remained during Chaikovsky’s lifetime. “Well, who better than Mr. Ma to play something celloed up,” wrote the sharp reviewer for the Times, James Oestreich, exposing the obtuseness of the class warriors with a well-aimed shaft of contrarian bad taste. As I chuckled, I thought of Baudelaire and his immortal sally, “Ce qu’il y a d’enivrant dans le mauvais goût, c’est le plaisir aristocratique de déplaire,” “the heady thing about bad taste is the aristocratic pleasure of giving offense.” And I recalled the bravura defiance of William Gass, novelist and critic and curmudgeon supreme, in his immortal essay “What Freedom of Expression Means, Especially in Times Like These”:

    It is a tough life, living free, but it is a life that lets life be. It is choice and the cost of choosing: to live where I am able, to dress as I please, to pick my spouse and collect my own companions, to take pride and pleasure in my opinions and pursuits, to wear my rue with a difference, to enjoy my own bad taste and the smoke of my cooking fires, to tell you where to go while inspecting the ticket you have, in turn, sent me. 

    What makes this story and its attendant ruminations more than a digression is the letter in which Fitzenhagen reported to Chaikovsky about his first performance, in Wiesbaden in 1879, of the celloed-up version. “I produced a furor,” he assured the composer. “I was recalled three times.” And then he describes the reaction of one particular member of the audience: “Liszt said to me: ‘You carried me away! You played splendidly.’ And regarding your piece he observed: ‘Now there, at last, is real music!’” Mark that it was the sixty-eight-year-old Liszt who was encouraging Fitzenhagen to cello up, thirty years after his retirement from the concert stage and almost forty years since the baccalaureate letter in which he recanted “runs and cadenzas which [bring] the applause of the musically uneducated, but violated the spirit as well as the letter of the music.” Now at peace, the venerable abbé was declaring his solidarity with the applauders.

    In closing, a few words about the Second Hungarian Rhapsody. Yes, of course it is a central work for Liszt; without it, he would not be what he is in our imaginations. But what do those who object to it find objectionable? Why does Brendel exclude it from the category of “works that offer both originality and finish, generosity and control, dignity and fire”? When I hear it well played, I am amazed at the originality with which Liszt imitated the cimbalom, and I marvel at the beautifully realized (and “finished”) form and pacing of the piece, and I fail to see where it is deficient either in control or in dignity. The derision with which it is treated, even by those (like Brendel) who have put in the time and effort to master it, seems a particularly crisp instance of the antitheatrical prejudice as applied to a composition that has become the test par excellence of a pianist’s ability to enact the role of virtuoso, an enactment that achieves its zenith with those special performers, such as Rachmaninoff or Horowitz or Marc-André Hamelin, who can top the piece off with their own nonchalant cadenzas, the nonchalance signifying the truly Lisztisch transgressive transcendence that drives aspirational musicians mad.

    And there is more: like a gas (and of course it is a gas), the Second Hungarian Rhapsody has escaped its container and leached out into the popular culture — which is only meet, after all, since that is where its inspiration had come from. (That, of course, is what objectors object to.) Many other works by nineteenth-century masters had a similar source in restaurant and recruitment music; one need think only of all those Brahms finales — to concertos for piano, for violin, and for cello plus violin, or to his piano quartets. Like Liszt’s Rhapsody, they adapted the sounds of environmental music to the special precinct of the concert hall. But unlike Liszt’s Rhapsody, they were never reabsorbed into the environment. Liszt’s Rhapsody inhabits animated cartoons: Mickey Mouse, Bugs Bunny, and Tom and Jerry have all played it, not to mention (arranged by King Ross) a whole animal orchestra, courtesy of Max Fleischer. It was heard, and used, in dance halls; it was in the repertoire of every swing band. It even haunts sports arenas: I am informed by Wikipedia that in the 1970s the St. Louis Cardinals’ organist Ernie Hays played Hungarian Rhapsody No. 2 to signal that pitcher Al Hrabosky (nicknamed “The Mad Hungarian”) was warming up before appearing as a relief pitcher. It is everywhere. There is even an LP recording of the Rhapsody by a Communist-era Hungarian fakelore ensemble, purporting to return it to an “authentic” environment from which it had never come. 

    Is this something to condemn, something to resist? Or is this interpenetration of the artistic and the vulgar worlds an ineluctable mark, perhaps the defining mark, of Liszt’s greatness? To attempt, like Brendel, to purge Liszt of these impolite associations is indeed to misunderstand his place in our world; but Rosen, too, beholds the vulgar Liszt with distaste. Far better, in the words of Kenneth Hamilton, is to “embrace our own inner Second Hungarian Rhapsody.” We’ve all got one, and Liszt knew it. To accept his invitation to flout snobbish “good taste” might help us reassert, or recover, taste — which is to say, Mozart’s taste as defined by Haydn: namely, a reliable sense of what is fitting, and when.

    The Earth, stuffed to the gills with burning coals

    *   *   *

    The Earth, stuffed to the gills with burning coals

    and consuming itself from its birth

    bristling with folds that sharpen into peaks, sometimes of short hairs

    sometimes forming dark dense beards

    and hollowed out with giant cavities filled with restless water

    from which emerged the grand debris of its genesis

    and full of shallow holes where other waters drowse

     

    Despite the gravity, everything pushes upward

    springs toward its Creator

     

    The sun, a rival, pulls everything toward itself, pulls dangerously

    To avoid catching fire

    the grass clings to the soil, the tree spreads its foliage

    In its shadow the man stretches out, then one day

    lies down forever some few feet underground

    Over our heads masses are moving, whitish

    *   *   *

    Over our heads masses are moving, whitish

    cottony, ghosts on the weather maps

    Windings, swirls, languid scrolls

    under the sting of the wind, wandering herds

     

    Floating bodies. Appearing. Disappearing. In our own image.

     

    We, more unstable than plants fixed to the ground

    or the fish sheltered in water

    We, unable to go back to the ancestral birds

    and flee into the stratosphere

     

    Dominant, dominator winds, chubby sons of Aeolus

    yesterday pushing or smashing Ulysses’ fleet as they wish

    We, more destitute as we progress, our soul

    eaten away by matter, at the mercy of an incendiary night

     

    The Unjust Fate of Man

    On the sandy path that goes by my door

    and leads to the station of dreams,

    where I had just walked, a muffled cry 

    reached my ear.

    I stopped walking and saw a clump 

    of dry, drowsy grass.

    The cry came from the ground.  A root deplored

    being without news from the stem up there with its boughs,

    its flowers and its fruit, maybe even 

    its trunk in full maturity.

    “Why was I, newly born, 

    the ancestor, thrown into a dungeon,

    my maternal task complete,

    like a convict, or worse, a useless being,

    without my having seen, known or recognized the world—

    and what mouth pronounced the injustice of my fate.”

    Before Nightfall 

    Leaning in summer tuxes across the balcony

     

    or reclining like nudes with their hair thrown back,

     

    some trees, after high conversation, complained

     

    about having to go back to the deaf earth again.

     

    The leaves pulled on their arms to keep them

     

    from going and to get even closer to whom?

     

    To what? Which essential truth?

     

    As if the human shadows inside the rooms

     

    would give them some clarification,

     

    some formula against the faceless barking . . .

     

    And what did they sense in me but a trembling?

     

    Night stood in the background.

     

    A flame flew into the grass’ eyes.

     

    I did not move, no longer knowing who I was

     

    or if dawn would also come for me.

    Translated from the French by Henri Cole

     

    Mother death

    *   *   *

    Mother death

    you came to him

    so mildly

    so cruelly

    alternating authority with seduction

     

    He out-of-breath following you or fleeing you

     

    In the end

    you wore the features of Morphine

    and clasped her to you

    cruelly

    mildly

     

    I gave his body to flames

    married his ashes to the sea

    and I alone burn

    in the fire of absence   

     

    The Cult of Carl Schmitt

             I

             As a political thinker, the German philosopher Carl Schmitt was enamored of symbols and myths. His biographer has shown that during the 1930s Schmitt was convinced that providing National Socialism with a rational justification was self-contradictory and self-defeating. The alternative that was conceived by Schmitt, a conservative who was an eminent member of the Nazi Party, was to establish the Third Reich’s legitimacy by means of symbolism and imagery culled from the realms of religion and myth. Schmitt’s attraction to symbols and myths stemmed from his skepticism about the value of “concepts,” which he viewed only instrumentally, as Kampfbegriffe or weapons of struggle. As Schmitt explained, about reading Hobbes’ Leviathan, “we learn how concepts can become weapons.” “Every political concept,” he claimed, “is a polemical concept,” a statement that reflects the essential bellicosity of his thought.

    When it came to fathoming the mysteries of human existence, Schmitt insisted that the cognitive value of symbols and myths was far superior to the meager results of conceptual knowledge. This deep mistrust of reason was related to his veneration of “political theology,” which Schmitt introduced into the mainstream of modern political thought. Schmitt’s devaluation of secular knowledge was exemplified by his well-known dictum that “all modern political concepts are secularized theological concepts,” an assertion that reflected his disdain for the legacy of Enlightenment rationalism. That disdain is what has given Schmitt’s thought new life in our own bleak and inflamed times.

             Schmitt was born in 1888 and died in 1985. He was a constitutional theorist who wrote brilliant polemics against parliamentary democracy and on behalf of dictatorial rule. He played a prominent role in providing a pseudo-legal justification for the Nazi seizure of power and was a virulent anti-Semite. During the early 1920s, the myth that captured Schmitt’s imagination was the “myth of the nation,” a trope that Mussolini had refashioned into a core precept of Italian fascism. Schmitt explored this theme in 1923 in the concluding pages of The Crisis of Parliamentary Democracy. His unabashedly enthusiastic treatment of unreason offers an important clue with respect to his future political allegiances.

    Echoing the phraseology of the proto-fascist and anti-Dreyfusard Maurice Barrès, who died in that year, Schmitt extolled Mussolini’s March on Rome as a triumph of “national energy.” He thereby acknowledged fascism’s capacity to infuse modern politics with a vitality that was absent from the stolid proceduralism of political liberalism. Schmitt was present at the University of Munich in 1921 when Max Weber delivered his celebrated lecture on “Science as a Vocation.” Schmitt agreed wholeheartedly with Weber’s characterization of modernity as an “iron cage”: a world in which the corrosive powers of “rationalization” and “disenchantment” had precipitated a crisis of “meaninglessness.”

    Schmitt’s antidote to this malaise, and to the intellectual maturity of liberalism, was his “decisionism” — his reconceptualization of sovereignty as the right to decide on the “state of exception.” The ruler was the one, the only one, who had the power to decree a state of exception, and to enforce it. Schmitt attributed a cultural and even epistemological superiority to the “exception” as opposed to the “norm” and the “rule.” It was the antithesis of Weberian disenchantment. As Schmitt exulted, “In the exception, the power of real life breaks through the crust of a mechanism that has become torpid by repetition.” In keeping with the discourse of political theology, Schmitt stressed the parallels between the “state of exception” in jurisprudence and the “miracle” in theology.

    Schmitt exalted the fascist coup as a historical and philosophical turning point in the struggle to surmount the straitjacket of rule-guided bourgeois “normativism,” a legacy of the Enlightenment that Kulturkritiker such as Nietzsche and Spengler deemed responsible for modernity’s precipitous descent into “nihilism.” For Schmitt, the March on Rome was the state of exception come to life. The scholarly nature of his treatise notwithstanding, Schmitt was unable to conceal his prodigious pro-fascist fervor. “Until now,” he wrote, “the democracy of mankind and parliamentarism has only once been contemptuously pushed aside through the conscious appeal to myth, and that was an example of the irrational power of the national myth. In his famous speech of October 1922 in Naples before the March on Rome, Mussolini said, ‘We have created a myth, this myth is a belief, a noble enthusiasm, it does not need to be reality, it is a striving and a hope, belief and courage. Our myth is the nation, the great nation which we want to make into a concrete reality for ourselves’.”

    Schmitt celebrated Mussolini’s mobilization of the “national myth” as “the most powerful symptom of the decline of the… rationalism of parliamentary thought … [and the] ideology of Anglo-Saxon liberalism.” (We would now call Schmitt’s position “post-liberalism.”) Italian fascism was the harbinger of a brave new world of conservative revolutionary political ascendancy: a glorious form of Herrschaft predicated on the values of “order, discipline, and hierarchy.” Mussolini’s putsch represented much more than a simple “regime change.” It signified a qualitative setback for the “ideas of 1789” and a resounding triumph of the counterrevolutionary ethos, as represented by the “Catholic philosophers of state” — Joseph de Maistre (1753-1821), Louis de Bonald (1754-1840), and the relatively unknown Spaniard Juan Donoso Cortés (1809-1853) — whom Schmitt revered.

    Following the Great War, the political challenge that Schmitt confronted was how to “actualize” the tenets of counterrevolutionary thought in a godless secular age whose assault on the twin pillars of traditional political authority, throne and altar, had eliminated absolute monarchy as a viable political option. Schmitt’s French doppelgänger, Charles Maurras, the leader of the Action Française, grappled with this dilemma as well. Schmitt was an avid reader of Action Française, which Maurras edited, and he regarded Maurras as France’s most interesting thinker. Maurras, despite his counterrevolutionary revulsion against the legacy of 1789, remained anachronistically wedded to monarchism. Schmitt, by contrast, opted for a Flucht nach vorne, what we would call a forward defense, which is to say, he went on the attack. At the dawn of what he believed would be a new post-liberal era, Schmitt made a definitive break with all forms of traditionalism with his particular doctrine of dictatorship.

    Schmitt found ample ideological support for his authoritarian credo in the counterrevolutionary doctrines of Maistre and Donoso Cortés, both of whom occupied a privileged position in Schmitt’s pantheon of esteemed intellectual precursors. In 1821, in Les Soirées de St. Petersbourg, Maistre — like Schmitt a proponent of “political theology” — exalted the figure of the Executioner as God’s emissary on earth and an agent of Divine justice. Maistre apotheosized the Executioner as a puissance créatrice, a creative force, and an être extraordinaire, an extraordinary being. Maistre maintained that, in view of humanity’s innate propensity for evil, the Executioner was the ultimate guarantor of secular order. As such, he alone separated human society from a headlong descent into anarchy and chaos.

    Yet it was Donoso Cortés’ unmatched political clairvoyance that expanded Schmitt’s horizons, thereby allowing Schmitt to transcend the constraints of traditional conservatism, whose gaze — as the case of Charles Maurras demonstrated — was obsessively and counter-productively fixated on the past. In Political Theology, Schmitt praised Donoso Cortés as a paragon of “decisionistic thinking and a Catholic philosopher of state who was intensely conscious of the metaphysical kernel of all politics.” According to Schmitt, Donoso Cortés was the only counterrevolutionary thinker who drew the proper conclusion from the “Scythian fury” of the revolutions of 1848, which “godless anarchists” such as Bakunin and Proudhon had directed against the forces of the ancien régime: that absolute monarchy had indisputably become a thing of the past. It was over. As Schmitt put it in Political Theology, insofar as “there were no more kings, the epoch of royalism had reached its end.” The conclusion that Donoso Cortés drew was that because “[monarchical] legitimacy no longer existed in the traditional sense… there was only one solution: dictatorship.”

    Donoso Cortés exalted dictatorship as an inviolable and sacrosanct decision — a decision that, as Schmitt explained, is “independent of argumentative substantiation” and that “terminates any further discussion about whether there may still be some doubt.” Schmitt’s encounter with Donoso Cortés’ “decisionism” was the “primal scene” of his political philosophy. It determined what Schmitt described in Roman Catholicism as Political Form in 1925 as the “complexio oppositorum” between political theology and secular political rule. Schmitt embraced Donoso Cortés’ Christological understanding of anarchists and socialists as agents of the Antichrist, as political actors whose goal it was to “disseminate Satan.” For Schmitt, Donoso Cortés had correctly understood that the momentous battle between “absolute monarchy” and “godless anarchism” was not merely another profane political conflict. Instead it was a struggle that anticipated the Last Judgment.

    Donoso Cortés’ apocalyptic view of politics as a final struggle between Good and Evil became the cornerstone of Schmitt’s “decisionism.” Schmitt praised decision as a force that “frees itself from all normative ties and thereby becomes absolute.” Hence, according to Schmitt, it was the “royal road” to dictatorship. As he explained in Political Theology:

    The true significance of counterrevolutionary philosophers of state [such as Maistre, de Bonald, and Donoso Cortés] lies in the consistency with which they decide. They heightened the moment of the decision to such an extent that the notion of legitimacy . . . was finally dissolved. As soon as Donoso Cortés realized that the period of monarchy had come to an end because there no longer were kings . . . he brought his decisionism to its logical conclusion: he demanded a political dictatorship. . . Donoso Cortés was convinced that the final battle had arrived. In the face of radical evil, the only solution is dictatorship.

    Donoso Cortés’ epiphany concerning the political-theological significance of dictatorship anticipated the Grand Inquisitor episode of The Brothers Karamazov. The important parallels between Schmitt’s views on dictatorship and Dostoevsky’s allegorical treatment of it were not lost on the renegade Jewish theologian Jacob Taubes. Following World War II, Taubes wrestled profoundly with Schmitt, whom he called the “apocalyptic prophet of the counterrevolution,” and with whom he had an interesting correspondence. Taubes’ reflections concerning the totemic significance that Schmitt attributed to Dostoevsky’s parable about the political consequences of human sinfulness are worth citing:

    I had quickly come to see Carl Schmitt as an incarnation of Dostoevsky’s Grand Inquisitor. During a stormy conversation at Plettenberg in 1980, Schmitt told me that anyone who failed to see that the Grand Inquisitor was right about the sentimentality of Jesuitical piety had grasped neither what a Church was for, nor what Dostoevsky—contrary to his own conviction—had “really conveyed, compelled by the sheer force of the way in which he posed the problem.” I always read Carl Schmitt with interest, often captivated by his intellectual brilliance and pithy style. But in every word I sensed something alien to me, the kind of fear and anxiety one has before a storm, an anxiety that lies concealed in the secularized messianic art of Marxism. Carl Schmitt seemed to me to be the Grand Inquisitor of all heretics.

    Support for Taubes’ intuition about Schmitt and the Grand Inquisitor as fraternal spirits is provided by a friend from Schmitt’s Munich days. In a letter in February 1922, Hermann Merk suggested to Schmitt, half-seriously, that, “if someone were to establish a Lehrstuhl at the University of Munich for the justification of the Spanish Inquisition, you would be the ideal person to occupy it, and I would be your most devoted student!”

    Schmitt’s glorification of dictatorship as a sovereign decision that “terminates any further discussion” resurfaced in his landmark debate in 1931 with Hans Kelsen — the eminent jurist and legal philosopher who was forced to leave Germany two years later because he was a Jew — over who is the “Guardian of the Constitution.” Schmitt’s numerous champions have portrayed his defense of executive sovereignty as a last-ditch attempt to safeguard the Weimar Republic against the encroachments of political extremism, both left and right. They have neglected to consider the political-theological underpinnings of Schmitt’s worldview. In the colloquy with Schmitt, Kelsen, a vociferous champion of Rechtsstaatlichkeit (rule of law), advocated strengthening the federal constitutional court as the court of last resort. Schmitt, conversely, basing himself on Article 48, the Weimar Constitution’s notorious emergency powers proviso, argued in favor of a “sovereign” presidential dictatorship. In light of Schmitt’s strong commitment to the paradigm of political theology, it is difficult to avoid the conclusion that, in the debate with Kelsen, he favored “saving” democracy by destroying its institutional and normative guarantees. As Jürgen Habermas has aptly commented, “Anyone who would want to replace a constitutional court by appointing the head of the executive branch as the ‘Guardian of the Constitution’ — as Carl Schmitt wanted to do in his day with the German president — twists the meaning of the separation of powers in the constitutional state into its very opposite.”

    A year later, in July 1932, Schmitt played a key role in the infamous Preussenschlag controversy, Chancellor Franz von Papen’s constitutional coup against Prussia’s Social Democratic government. Schmitt vigorously argued the case on behalf of the Reich before the federal court in Leipzig. As chancellor, von Papen had contributed significantly to Prussia’s civic disarray by lifting the federal ban against the SA, in an ill-conceived attempt to curry favor with the Nazis. In January 1933, von Papen was named as Hitler’s vice-chancellor. In April, he appointed Schmitt to draft legislation that merged the Länder with the federal government in Berlin. By eliminating the last vestiges of provincial legal autonomy, the Gleichschaltung (synchronization) measures promulgated by Schmitt effectively sounded the death knell of the Weimar Republic. They marked a point of no return on the way to the Nazis’ consolidation of totalitarian rule.

    In light of Schmitt’s concerted efforts to undermine the Weimar Republic’s constitutional stability, as well as the significant role that he played in providing the nascent Hitler-Staat with a veneer of juridical legitimacy, it is not surprising that after the war he was known as the “gravedigger of the Weimar Republic.” The pivotal role that Schmitt played in contributing to the Weimar Republic’s demise cannot be understood apart from his underlying commitment to the counterrevolutionary political theology of Maistre, de Bonald, and Donoso Cortés. In keeping with their visceral aversion to the heritage of “1789,” Schmitt viewed political liberalism’s fitful ascent during the nineteenth century with similar contempt. Following the revolutions of 1848, Donoso Cortés — an intrepid defender of monarchism and an ideological precursor of the “clerico-fascism” of Franco and Salazar — condemned the heretical strivings of a new generation of political radicals as “Satanism” pure and simple.

    True to his counterrevolutionary lineage, Schmitt’s lifelong ideological animus against parliamentarism and the rule of law was motivated by a similar set of political-theological concerns. Schmitt, too, displayed a visceral aversion to the precepts of modern secularism and its political corollaries: humanism, liberalism, constitutionalism, and social democracy. Consequently, following the Nazi seizure of power, Schmitt had no compunction about glorifying the Hitler-Diktatur as a “Katechon,” a “restrainer” or “bulwark” who staves off the advent of the Antichrist, whose contemporary “agents” were the godless and heretical representatives of the political left: liberals, socialists, Bolshevists, communists, anarchists, and of course Jews.

     

              Schmitt’s glorification of the “national myth” comes in a chapter of The Crisis of Parliamentary Democracy devoted to “Irrationalist Theories of the Direct Use of Force.” His treatment of this theme was nothing if not timely. In October 1917, the Bolsheviks, led by Lenin, overthrew Alexander Kerensky’s Provisional Government and set the stage for seventy-four years of murderous dictatorial rule. The events in Russia had a significant ripple effect. In 1919, Bolshevik-inspired “council republics” were proclaimed in Bavaria and Hungary. Within months, however, both regimes were ruthlessly suppressed by counterrevolutionary militias that reveled in profligate acts of “White Terror.”

    Among paramilitary veterans groups, such as the German Freikorps and the Italian squadre d’azione, violence was elevated to the level of a secular religion. In Central and Eastern Europe, right-wing forces frequently targeted Jews, whom they associated with the “Bolshevik menace,” notwithstanding the fact that the vast majority of Jews were steadfastly opposed to communism. In Germany, antisemitism was the ideological catalyst behind the assassination of prominent Jewish politicians such as the Bavarian Prime Minister Kurt Eisner in 1919 and Foreign Minister Walther Rathenau in 1922. In Russia and Eastern Europe, heightened antisemitism incited indiscriminate and bloody pogroms. During the Russian Civil War, from 1918 to 1921, counterrevolutionary armies in Ukraine murdered an estimated thirty thousand Jews.

    During the war Schmitt was stationed in Munich, where he worked in the intelligence services of the German General Staff. His primary assignment was to monitor contacts between left-wing politicians and pacifists in neighboring Switzerland. The White Terror in Munich — much of which Schmitt witnessed first-hand — was especially bloody. Over six hundred people lost their lives, and numerous sympathizers of the “council-republics” were summarily executed after the hostilities had ceased. Similarly, in Hungary, when Béla Kun’s Soviet Republic imploded in August 1919, 1,500 persons were killed, over three times the number of those who perished at the hands of the “Reds.”

    The political tumult that rocked Germany following the left-wing revolution in November 1918, when workers’ and soldiers’ councils proliferated in the wake of the Kaiserreich’s collapse, might accurately be described as a permanent “state of emergency.” Both the war years — when civilian rule was de facto suspended in favor of the Ludendorff-Hindenburg dictatorship — and the prolongation of martial law during the postwar period conditioned Schmitt to accept the Ausnahmezustand, or state of emergency, as the new normal. It became one of his contributions to the vocabulary of modern political philosophy. It reinforced his commitment to authoritarian rule as well as his innate mistrust of civilian interference in politics. Schmitt’s inaugural lecture at the University of Strasbourg, in 1916, had examined the constitutional (staatsrechtlich) parameters of “Dictatorship and State of Siege.” The superiority of dictatorship over “constitutionalism” and “legalism” — both of which hampered the political sovereign’s ability to act forcefully and decisively in a state of emergency — became the defining theme of Schmitt’s work. It was not by chance, therefore, that in 1921 Schmitt selected Dictatorship as the theme, and the stark title, of one of his first major scholarly works.

    The apotheosis of political violence that accompanied the Great War and the spate of civil wars that followed conditioned Schmitt’s famous reconceptualization of politics in The Concept of the Political, in 1927, as the capacity to distinguish “friends” from “enemies.” That was it: the essence, indeed the entirety, of politics. By seeking to ground sovereignty through war as the ultima ratio of politics, Schmitt sought to oppose the growing consensus in favor of international cooperation that followed the League of Nations’ founding in 1919. Following the precedent set in that year by Spengler’s Prussianism and Socialism, Schmitt furnished an urgent brief in support of the values of Prussian militarism. “The concepts of friend, enemy, and struggle [Kampf],” Schmitt insisted, “receive their real meaning insofar as they relate to and preserve the real possibility of physical annihilation. War follows from enmity, [from] the existential negation of another being.” “The political enemy,” he continued, “is the other, the alien, and it suffices that in his essence he is something existentially other and alien in an especially intensive sense . . . War, the readiness for death of fighting men, the physical annihilation of other men who stand on the side of the enemy, all that has no normative, only an existential meaning.” Those must be some of the most chilling words written in modernity. Schmitt’s account of politics wished to replace a rational world of norms and rules with a pre-rational order of visceral ruthlessness in which tolerance was inimical to survival and war was eternal.

    Another one of Schmitt’s main goals in The Concept of the Political was to perpetuate the bellicist ethos of the Frontgeneration. It was an objective that was shared by other conservative revolutionary intellectuals: for example, Ernst Jünger, a conservative and a remarkable writer, whose fifty-year correspondence with Schmitt began in 1930 and ended in 1983. As Habermas has noted, “Schmitt was fascinated by the First World War’s Storms of Steel, to use the title of Ernst Jünger’s war diary… A people welded together in a battle for life and death asserts its uniqueness against both external enemies and traitors within its own ranks.” At one point Schmitt, invoking a metaphor taken from marksmanship, proclaimed that “the zenith of Great Politics is the moment when the enemy comes clearly into view as the enemy.” In The Concept of the Political, Schmitt also sought to combat the spirit of anti-militarism and international comity that, in response to the unprecedented carnage of World War I, had encouraged the expansion of international law in order to ensure a peaceful resolution of regional disputes — a movement that culminated in 1928 in the Kellogg-Briand Pact, which quixotically sought to outlaw war as an instrument of national policy.

    The Social Darwinist undercurrent of The Concept of the Political — Schmitt’s insinuation that preparation for war is the raison d’être of “the political” — anticipated his controversial Grossraum doctrine of the early 1940s, which brazenly redefined “natural right” as the “right of the strongest.” Although Schmitt’s champions have sought to portray him as nothing more than a political realist in the tradition of Machiavelli and Hobbes, Schmitt’s “existential” glorification of “war” as the “readiness for death of fighting men, the physical annihilation of men who stand on the side of the enemy … [hence] the existential negation of another being” is significantly at odds with that tradition. After all, the point of Hobbes’ Leviathan was to transcend the war of all against all by means of a civil compact, not to celebrate and expand it.

    The ideological and political turmoil that convulsed Europe following World War I left Schmitt with a permanent fear of political instability. It also inculcated in Schmitt a hypertrophic and abiding fear of “Jewish Bolshevism.” As Paul Hanebrink observes in The Myth of Judeo-Bolshevism, “From the Vatican to Paris salons to paramilitary barracks in the south of Hungary, the history of the Munich Republic of Councils seemed proof of a Jewish plot to overthrow civilization and impose foreign rule on the nations of Europe.” Schmitt’s diaries from the 1910s are suffused with antisemitic invective. They betray a preoccupation with Jews that borders on the clinical. His Judeophobia was especially acute in the case of the assimilated Jews whom he encountered regularly during his student years and his career as a university professor. According to Schmitt, the major problem with assimilated Jews was that they made it nearly impossible to establish a clear divide between “friends” and “enemies.”

    In a diary entry on October 13, 1914, Schmitt spoke about his “Jewish complex,” the confusing amalgam of fascination and revulsion that he felt toward Jews. Although German Jews superficially resembled “normal Germans,” Schmitt held that, on a more profound level, the differences that separated these two peoples were vast. Ultimately, Schmitt’s Judeophobia — which intensified during the “Judeo-Bolshevist” hysteria that coincided with the suppression of the Bavarian Räterepublik in April 1919 — metamorphosed into one of the defining features of his work. Schmitt’s lifelong animus against political liberalism, which culminated in his confrontation with Kelsen’s “normativism,” was inseparable from his fears concerning the “disintegrative” and “corrosive” character of Jewish influence. His conservative revolutionary allies excoriated the Weimar Republic as a Judenrepublik; it was, they claimed, undeutsch. In The Crisis of Parliamentary Democracy, Schmitt asserted that Artgleichheit, or “racial sameness,” was one of the indispensable hallmarks of the “leader-democracy” (Führerdemokratie) that he envisioned as parliamentary democracy’s successor.

    As Schmitt’s diaries amply attest, he viewed the Jews as the Drahtzieher, or “string pullers,” who were secretly orchestrating these fateful developments from behind the scenes. Already in the 1920s, Schmitt’s sweeping critique of “political liberalism” and “total mechanization” flirted with the idea of a “Jewish world conspiracy” — a notion that was, among conservative revolutionary intellectuals, a truism. Schmitt’s indictment of modernity as an “age of neutralizations and depoliticizations” overlapped with the ascendancy of what the historian Shulamit Volkov has called “antisemitism as a cultural code.” In the discourse of Central European Zivilisationskritik, the agenda of antisemitism was often advanced under the semantic camouflage of a critique of “modernity,” “capitalism,” “technology,” and “liberalism.” Antisemites alleged that in all of these domains Jews played a deleterious and outsized role. A watershed in this line of attack was Werner Sombart’s well-known treatise The Jews and Modern Capitalism, which appeared in 1911, in which he highlighted the affinities between the Jews as a “nomadic desert people” and the “extraterritoriality” of contemporary international finance, and attributed the Jews’ economic success to their “rootlessness,” which, he claimed, engendered a mentality that was averse to firm conviction and conducive to abstract calculation.

    After 1933, when the political situation became more propitious, Schmitt was free to propound his antisemitic views unabashedly and without fear of reprisal. He wasted no time. The semantic violence that was implicit in Schmitt’s disdain for Kelsen’s “legal positivism” now left nothing to the imagination. In 1934, in an essay called “National Socialist Legal Thinking,” Schmitt explicitly celebrated the Nazi legal revolution as a victory of the German Volk over the tyranny of Jewish “legalism.” According to Schmitt, the Volk’s triumph was abetted by its return to “the natural forms of order that emerge from Blut und Boden [blood and soil].” Schmitt added that “normativism’s” predominance under Weimar was due to the “influx of the alien Jewish Volk.” A corrosive infatuation with “legalism,” claimed Schmitt, was “one of the peculiarities of the Jewish people, who for thousands of years have lived not as a state on a piece of land, but solely in the law and norm, which in the true sense of the word are ‘existentially normativistic.’” With Hitler in power, the antisemitic animus that was implicit in Schmitt’s critique of parliamentarism in the 1920s emerged in all its hatefulness.

    II

    Not only was Schmitt enamored of political myths. He was also an adept self-mythologizer. After the war, this talent proved invaluable in the course of his struggle for rehabilitation.

    During the initial years of Nazi rule, Schmitt’s influence was omnipresent. In the words of his former student Waldemar Gurian, who fled Germany and became an important scholar of totalitarianism and a Catholic political theorist in the United States, Schmitt was the de facto “Crown Jurist of the Third Reich.” Following the Nazi seizure of power, Schmitt accumulated, with astonishing speed, an impressive array of offices and titles. In July 1933, Hermann Göring appointed Schmitt to the Prussian State Council. Schmitt was also named to the presidium of Hans Frank’s Academy of German Law. In 1934, Schmitt accepted a prestigious appointment to the faculty of law at the University of Berlin. He served on the executive committee of the Association of National Socialist German Jurists and was editor-in-chief of the Association’s journal, the Deutsche Juristen-Zeitung.

    In July 1934, Schmitt furnished a legal brief justifying Hitler’s bloody purge of the SA on June 30, 1934 — the Night of the Long Knives. It was called “The Führer Protects the Law,” which became a famous slogan. Schmitt’s opinion was a resounding endorsement of the Führerprinzip as the wellspring of legitimacy. It is difficult to construe Schmitt’s article other than as a writ for unrestrained autocratic lawlessness.

    Already in Political Theology, twelve years earlier, Schmitt had asserted that the sovereign must operate from a position outside of the constitution, at a permanent remove from the constraints of “legality.” One of the reasons that, after 1945, Schmitt found it difficult to shake the “gravedigger of the Weimar Republic” epithet was the widely shared view that, in his capacity as “Crown Jurist of the Third Reich,” Schmitt had merely transposed his earlier glorification of the “state of emergency” to the post-1933 circumstances.

    Schmitt’s talent for self-mythologization became evident with the publication of Ex Captivitate Salus, a memoir, or more precisely an apologia pro vita sua, in 1950. Invoking Herman Melville’s novella Benito Cereno — the tale of a ship captain who, in the aftermath of a mutiny, must do the rebellious crew’s dastardly bidding — Schmitt confabulated the legend that his cooperation with the Nazis merely reflected a desperate struggle for survival. He insisted that his support for the regime had been, from start to finish, involuntary: the actions of someone who, to all intents and purposes, had a gun pointed at his head. Schmitt’s self-exculpatory claims are factually unsustainable. But the facts have not dissuaded a devout coterie of loyalists from accepting the Benito Cereno conceit. In the English-speaking world, the cult of Carl Schmitt was first orchestrated by a clique of postmodern Salon-Bolshevists, and more recently by a small but loud movement of “post-liberals.”

    The legend of Schmitt’s innocence derives from two articles that were published in the SS weekly Das Schwarze Korps in December 1936, which questioned Schmitt’s National Socialist bona fides. The articles portrayed Schmitt as an opportunist who had belatedly joined the party in order to advance his career and to camouflage his pro-Catholic loyalties. Schmitt’s detractors — one of whom, Reinhard Höhn, was a colleague of Schmitt’s at the University of Berlin — were political rivals who resented his meteoric rise to prominence in Nazi legal circles. Moreover, since Schmitt was widely regarded as a protégé of Hans Frank — the politician who eventually headed the Nazi occupation of Poland, oversaw four extermination camps, and was convicted for crimes against humanity at Nuremberg and executed — his adversaries hoped that, by attacking Schmitt, they could also interfere with Frank’s political ambitions. Following the attacks, Schmitt was stripped of his party offices. Thanks to Göring’s patronage, he was permitted to keep his University of Berlin professorship and his position as Prussian state counselor. When viewed through the lens of the unending intraparty squabbles that were endemic to Nazi rule, however, the temporary setback that Schmitt experienced was hardly proof of heterodoxy. Moreover, following his rehabilitation by the SS, Schmitt was permitted to travel and lecture freely. Later Schmitt’s opponents were themselves ignominiously sacked.

    Ever resourceful, with one eye trained on the impending outbreak of war, Schmitt reinvented himself as a specialist in geopolitics. Schmitt’s doctrine of Grossraum relied on Social Darwinist arguments concerning the “natural right” of so-called “large space nations” (Grossraum Völker) to subsume “small space nations” (Kleinraum Völker), thereby making a mockery of existing international law. In essence, Schmitt’s geopolitical thought underwrote the Third Reich’s draconian plans for Eastern European hegemony, the Drang nach Osten. Schmitt outlined his geopolitical theories in “Raum and Grossraum in International Law,” a lecture that he presented in Kiel on April 1, 1939, a fortnight after the German invasion of Czechoslovakia. In his lecture, Schmitt invoked the precedent of the Monroe Doctrine to justify the supremacy of the Grossdeutsches Reich or “Greater Germany” in Central Europe. (Hitler was so enamored of Schmitt’s Monroe Doctrine analogy that he immediately included it in a speech in the Reichstag, warning President Roosevelt to refrain from intervention in the event of a future European war, which was in fact only four months away.) Schmitt’s arguments summarily disqualified existing international law and traditional claims to state sovereignty on the part of so-called “small space nations.” As the refugee scholar Franz Neumann observed in Behemoth, one of the first great studies of National Socialism, Schmitt’s Grossraum doctrine underwrote Hitler’s “Grossdeutsches Reich [as] the creator of its own international law for its own Raum or space.” Neumann aptly denounced Schmitt’s concept as little more than pseudo-scientific cover for the Third Reich’s geopolitical ambitions: “It offers a fine illustration of the perversion of genuine scientific considerations in the interest of National Socialist imperialism.”

    The fact that Schmitt’s doctrine of Grossraum seemed to lack the customary obeisances to Nazi race doctrine — one of the main arguments brandished by Schmitt’s defenders to downplay his contribution to Nazi foreign policy doctrine — is immaterial, since this omission also had a tactical side: it imparted a measure of credibility to Schmitt’s theories in international law circles that they would have otherwise lacked.

    Nor was the idiolect of Nazi race thinking entirely absent from Schmitt’s arguments. In “Grossraum and International Law,” Schmitt’s disparagement of Jews as an “artfremde Volksgruppe” — a “racially alien people” — was tantamount to a death warrant, since, according to the tenets of Grossraum, “racially alien” groups were devoid of legal standing. Nazi Grossraum doctrine — Schmitt’s included — was predicated on the twofold imperatives of Raum and Boden, “space” and “soil.” Since Jews were deemed a “rootless” or bodenloses people, they were denied the legal protections that accrued to “rooted” or bodenständige Völker.

    With the publication of Schmitt’s Grossraum essays, and the adoption of his Monroe Doctrine analogy by the Führer, Schmitt’s “comeback” was virtually assured. As a reporter for The Times of London remarked about Schmitt’s address in Kiel in April 1939: “Hitherto, no German statesman has given a precise definition of Hitler’s aims in Eastern Europe. But perhaps a recent statement by Prof. Carl Schmitt, a Nazi expert on constitutional law, may be taken as a trustworthy guide.” Schmitt’s Grossraum concept was rapidly embraced by a cadre of high-ranking SS officers attached to the Reich Security Main Office (RSHA) in Berlin. Infusing Schmitt’s approach with a more explicit völkisch-ideological orientation, they proceeded to invoke Grossraum as a pseudo-legal justification for a Nazi-dominated Europe, for German continental hegemony — a strategy that was predicated on the idea of German racial supremacy, in keeping with Nazism’s understanding of Deutschtum, or Germanness, as Herrenrasse, or the master race.

    Schmitt’s postwar apologetics suffered a posthumous blow in 2011, when his diaries from the early 1930s were published. They meticulously document Schmitt’s reactions to National Socialism’s political ascent. In an entry in February 1932, for example, Schmitt avowed that, in the upcoming presidential elections, he planned on voting for Hitler. On January 30, 1933, the day of the Nazi seizure of power, Schmitt remarked: “At the Café Kutschera [in Berlin], where I learned that Hitler had become chancellor and Papen vice-chancellor. Excited, happy, satisfied.” The reasons for Schmitt’s “excitement” at the café are not hard to fathom. He realized that Hitler’s rise to power guaranteed the demise of the Weimar “system,” an entity that Schmitt viewed with contempt and whose downfall he had sought to hasten. Whatever reservations Schmitt may have harbored concerning the advent of Nazi rule prior to January 30, 1933 dissipated rather quickly.

    This conclusion is supported by Schmitt’s reaction to the Reichstag’s approval of the Enabling Act of March 23, which allowed Hitler to legislate by decree. In his comments, which were published in the Deutsche Juristen-Zeitung, not only did Schmitt hail the Act’s passage, he went so far as to attribute constitutional status to the emergency decrees that had been promulgated by the nascent Hitler-state. Thereby, he added, these decrees superseded the legal provisions of the Weimar Republic, whose constitution technically remained in effect. In a follow-up article that was published on May 12 in the Westdeutscher Beobachter, called “The Good Law of the German Revolution,” Schmitt reaffirmed, unequivocally and emphatically, that “the good law of the German Revolution is not dependent on respecting the legality of the Weimar ‘System’ and its constitution.” Gone was the distinction that he had established in his book Dictatorship between “commissarial” (temporary) and “sovereign” (permanent) dictatorship. If ever there was a “sovereign” dictatorship, it was Hitler’s.

    The numerous political and legal commentaries that Schmitt penned in support of the Nazi dictatorship — many of which appeared in official Nazi publications such as the Völkischer Beobachter and the Westdeutscher Beobachter — are extremely revealing with respect to Schmitt’s attitudes at the time. They demonstrate that Schmitt’s accommodation to Nazi rule was speedy, seamless, and unstinting. It was as though, with Hitler’s Machtantritt, a dam had burst, and the new political circumstances allowed Schmitt to freely express political views that during Weimar he had been forced to suppress. The republication last year of Schmitt’s Nazi writings helped to resolve a major controversy that beset Schmitt scholarship for decades: whether January 30, 1933 marked a break with or a continuation of Schmitt’s previous political self-understanding.

    One important measure of the continuities in Schmitt’s worldview is the persistence of race thinking. Prior to the publication of Schmitt’s diaries (the most recent installment, Tagebücher 1924–1929, appeared in 2018), Schmitt’s champions often appealed for a “pluralistic” and “differentiated” understanding of his legacy, an interpretive tack that dissuaded scholars from focusing too much on Schmitt’s anti-Semitism. Yet as evidence of Schmitt’s Judaeophobia began to mount, such appeals rapidly devolved into repression and denial. The publication of Schmitt’s diaries has demonstrated that the “Jewish complex” to which Schmitt alluded in 1914 was merely the tip of the iceberg, the harbinger of a fevered anti-Judaism that crested during the Nazi period.

    In his diary in November 1931, Schmitt excoriated the left-wing Romanian poet and historian Valeriu Marcu as a “horrible Jew, of the dumb and superficial variety.” A month later, on Christmas Eve, Schmitt recounted having sung Christmas songs in his Berlin apartment and being overwhelmed by the “shame and scandal of living in a Judenstadt [Jew-city], insulted and shamed by Jews.” Schmitt’s wrath was often directed against assimilated Jews. In his eyes, by trying to pass themselves off as authentically German, they were doubly guilty. As he wrote on March 19, 1933: “Hopeful because of the Nazis, rage at the Jew [Erich] Kaufmann and the imposture of these assimilated Jews.”

    Kaufmann was one of Schmitt’s colleagues on the law faculty of the University of Berlin. His name surfaced in a letter of denunciation that Schmitt wrote to the Minister of Education on December 14, 1934. In his missive, Schmitt claimed that Kaufmann’s presence was a “slap in the face [to the] National Socialist students.” It was not Kaufmann’s pedagogical abilities, continued Schmitt, that were in question. Instead, it was Kaufmann’s status as an assimilated Jew that mattered; or, as Schmitt put it, Kaufmann’s deleterious “influence on German spiritual life and German youth.” As Schmitt urged in conclusion: “Especially today, when the German Volk and German students are being educated through a process of National Socialist schooling, this type of Jewish infiltration and influence must be rigorously avoided.” Kaufmann was promptly dismissed.

    After the war, Schmitt remained unrepentant and defiant. His journals from the years 1947-1951 are suffused with crude antisemitism. Schmitt derogated Jews as “Isra-Elites,” arguing that they were the only “elites” to have survived the war. And in a classic case of “Holocaust inversion” — transforming victims into perpetrators and perpetrators into victims — he claimed that the Jews had been World War II’s real victors. In September 1945, Schmitt was arrested by the Allies in a “general sweep” and interned as a possible “security threat.” Following his release, Schmitt was re-arrested in March 1947. He was transferred to Nuremberg, where he was interrogated by the American prosecutor Robert Kempner as a “potential defendant.” The indictment centered on Schmitt’s Grossraum articles, which Allied prosecutors regarded as a blueprint for Nazi Germany’s “war of annihilation” in the East. Schmitt avoided prosecution — a direct link between his theories and Nazi policy was not legally demonstrable — and was released two months later.

    The experience left Schmitt embittered. He regarded himself and his fellow Germans as the victims of the Allies’ “discriminatory concept of war” and their indefensible “moralization of punishment.” Schmitt’s objections were consistent with his earlier fulminations against “just war” doctrine and the Versailles Treaty’s “war guilt” clause. In 1958, in his foreword to the Spanish edition of his memoir, Schmitt lamented that the Allied legal proceedings had resulted in the unjustifiable “criminalization of an entire people.” He continued: “As Germany lay on the ground, defeated… the Russians and the Americans undertook mass internments and defamed entire categories of the German population. The Americans termed their method ‘automatic arrest.’ This means that thousands and hundreds of thousands of members of certain demographic groups — for example all high-level civil servants — were summarily stripped of their rights and taken to a camp.” The un-self-awareness — or sheer mendacity — of such passages is breathtaking.

    Schmitt’s exclusive focus on German suffering was characteristic of the mood of “repression” and “silence” that prevailed in postwar Germany. Schmitt excoriated the Nuremberg Tribunal as a violation of the time-honored legal maxim nulla poena sine lege — one cannot be punished for doing something that is not forbidden by law. By the same token, he gave little thought to the question of what form of punishment would be appropriate for the unprecedented criminality and mass atrocities that had been perpetrated by the Third Reich and its functionaries. Nor did Schmitt display a modicum of sympathy for the victims of Nazi Bevölkerungspolitik: the six million Jews who perished in Nazi death camps; the three million Soviet POWs who died in German captivity; the twelve million slave laborers who were dragooned to toil in German armaments factories; and so forth. Instead, he callously rationalized these misdeeds as unavoidable “casualties of war.” In Schmitt’s account, they were victims without perpetrators. Schmitt also liked to attribute the war’s tragic outcome to the “all-conquering progress of modern technology,” whose “dislocations” he proceeded to enumerate, mocking the liberal idea of “progress” along the way: “’Progress’ in the appropriation of the human individual, ‘progress’ in mass criminalization and mass automation. A giant apparatus indiscriminately swallows up hundreds of thousands of people. The old Leviathan appears almost cozy by comparison.”

    Following the war — and before Schmitt’s own apologetics had time to take root — some observers recognized the significant contribution that Schmitt had made to consolidating Nazi rule. In Deutsche Daseinsverfehlung in 1946, Ernst Niekisch accused Schmitt’s “friend-enemy” distinction of having furnished the “algorithm of bestiality” that was ruthlessly put into practice by the SA and SS. Similarly, Rudolf Smend, a former colleague at the law faculty of the University of Berlin, denounced Schmitt as a legal “pioneer of the National Socialist system of violence.” But Schmitt himself systematically eschewed questions of responsibility, personal as well as collective. Like the majority of his countrymen, he demonstrated little enthusiasm for probing the historical origins of the “German catastrophe.”

    From a legal and constitutional standpoint, the Federal Republic of Germany — whose Grundgesetz, or Basic Law, was codified in 1949 — was Schmitt’s worst nightmare. In stark contrast to Weimar, the Bonn Republic was intentionally conceived as a parliamentary system. Its architects expressly sought to forestall the temptations of executive overreach that, under von Hindenburg’s presidency between 1925 and 1934, had plagued German democracy, thereby paving the way for Hitler. The entire project was anathema to Schmitt. He disparaged defenders of the Grundgesetz as “Grundgesetzler” (human rights-lings) and mocked Grundrechte or “basic freedoms” as the “inalienable rights of donkeys.” For Schmitt, the Federal Republic represented a double abnegation of politics, insofar as it elevated the “anti-political” institutions of “parliament” and “judicial review” above the prerogatives of state sovereignty. Schmitt’s own bête noire was the federal constitutional court, whose seat was in Karlsruhe. Schmitt composed a sophomoric satirical poem, which he circulated among friends, comparing the justices to lemurs. (“In Karlsruhe there grows a rubber tree/Lemurs scurry around/They append the ‘value’ of ‘freedom’ to the rubber tree.”) Schmitt excoriated the Bonn Republic as a “Justizstaat” — implying that it was not a “real state” — which elevated abstract “values” such as “human dignity” over “authority.” One of Schmitt’s final works was called The Tyranny of Values.

    III

    Following Schmitt’s death in 1985, the German right leaped into action to popularize Schmitt’s critique of German democracy. As the Junge Freiheit, the flagship publication of the Neue Rechte (New Right), put it: “Whoever sleeps with the Grundgesetz under his pillow has no need of Carl Schmitt. Conversely, whoever recognizes that the Grundgesetz is a prison in which the German res publica has been interned reaches for his work.” During the European refugee crisis of 2015-2016, the right-wing ideologue Götz Kubitschek, co-founder of the Institut für Staatspolitik — a conservative revolutionary think tank allied with the far-right political party Alternative for Germany (AfD) — cited Schmitt’s “state of exception” as an argument for implementing emergency measures to rebuff the influx of Syrian immigrants. Alluding also to Schmitt’s “friend-enemy” dichotomy, Kubitschek declared:

    I am convinced that in a “state of exception” … as the threats to one’s own group along ethnic, cultural, and civic lines become clear, so does the question of who ‘We are’ and who ‘We are not’… In other words, when people in this land have had enough, the question of [political] loyalty is bound to arise, as it does already when it is a question of customs, values, and the legal statutes that Islam places on the conduct of everyday life.

    The New Right regarded the refugee crisis, which rocked Chancellor Angela Merkel’s governing coalition to its foundations, as a classical Schmittian “state of emergency” — as a situation that, like the Algeria crisis in France in 1958 that paved the way for General Charles de Gaulle’s coup, portended the Bundesrepublik’s abolition and its replacement by an ethno-populist dictatorship. That Kubitschek’s advocacy of an executive decree banning asylum-seekers violated the Grundgesetz, as well as the tenets of European Union immigration law, seemed a matter of little concern.

    Schmitt’s posthumous influence on German political culture has been enormous. During the 1950s, Schmitt’s site of exile in Plettenberg became a favored pilgrimage destination among radical conservative jurists who were disaffected with the Federal Republic’s Verwestlichung (turn to the West) during Konrad Adenauer’s chancellorship (1949-1962). Among Schmitt’s numerous acolytes was the jurist and future member of the Karlsruhe Constitutional Court, Ernst-Wolfgang Böckenförde, who applied Schmittian maxims in rulings that involved purported “social welfare” encroachments on state autonomy. And following German reunification in 1990, Schmitt’s intellectual currency skyrocketed. German conservatives asserted that the time had come to replace the “de-politicizations” and “neutralizations” of the liberal Bonn Republic with the prerogatives of a “self-confident nation” (selbstbewusste Nation), in keeping with the Bismarck-era traditions of étatisme and Machtpolitik. Who better to guide the Berlin Republic’s transformation in accordance with these precepts than Carl Schmitt?

    During the 1990s, a contingent of radical conservative intellectuals undertook a public campaign to rehabilitate Schmitt, along with the reputations of like-minded conservative revolutionary thinkers such as Ernst Jünger and Martin Heidegger. Prior to reunification, it had been difficult for Schmitt to escape the taint of his earlier career as the Third Reich’s “Crown Jurist.” Following the collapse of the Berlin Wall, however, a chorus of national conservatives argued that, after forty years of democratic stability, the time had come to lift the taboo. Schmitt was resurrected as a deutscher Klassiker, a “German classic.” Notwithstanding the objections that were raised by a handful of intellectuals, his rehabilitation seemed complete.

    Schmitt’s rehabilitation in Germany was merely the prelude to a multifaceted international revival of his work. Already during the 1990s, those who were disillusioned with neoliberal triumphalism and the “end of history” ransacked Schmitt’s corpus in search of political alternatives. And the ranks of the disillusioned were not confined to the right. Left-wing critics of TINA — the acronym for “there is no alternative,” derived from Herbert Spencer and popularized by Margaret Thatcher, to indicate an acceptance of the liberal order — thought that they had found the support they needed in Schmitt’s claim in The Crisis of Parliamentary Democracy that liberalism and democracy were mutually exclusive political forms. As Alan Wolfe noted in The Future of Liberalism, “To the extent that there is a revival of Schmitt’s ideas taking place in Europe and the United States, it is not because of what is happening on the right. It is because Schmitt has become something of a hero to the postmodern left.”

    Schmitt’s arguments about the endemic corruptions of Western liberalism became increasingly popular among former Marxists who, following the collapse of communism and the discrediting of Marx’s “metaphysics of class struggle,” sought out alternative paradigms of contestation among non-Marxist sources. In light of the fact that the proletariat, the putative “gravedigger of capitalism,” was now comfortably ensconced amid the mind-numbing blandishments of bourgeois consumerism, the prospects of realizing the utopia of a “classless society” seemed more distant than ever.

    Nominally, these self-styled “left Schmittians” embraced Schmitt’s no-holds-barred critique of liberalism in the name of “radical democracy.” Ultimately, however, their animus against the normative safeguards of liberalism proved so powerful and all-consuming that, much like Schmitt, they ended up countenancing brazenly authoritarian political solutions. In their haste to transcend the liberal democratic status quo, the left Schmittians were not averse to flirting with the temptations that Jacob Talmon long ago described as “totalitarian democracy.” They reprised an authoritarian political lineage that stretched from the Jacobin dictatorship of 1793-1794 to Lenin’s What is to Be Done? (1902) to the Chavismo that, since the late 1990s, has made state socialist autocracy a permanent feature of the Latin American political landscape.

    To restate Schmitt’s critique of liberal democracy in Rousseauian terms: whereas democracy strove to realize the “general will” or “universality,” liberalism, which was predicated on “interests,” was incapable of rising above “particularism,” or the mere “will of all,” which never rose to a higher unanimity. Schmitt claimed that “parliamentarism,” as a sphere of “representation” in which “interests” reigned supreme, inherently subverted the universalist strivings of democracy qua popular sovereignty. Hence Schmitt’s conclusion that liberalism and democracy necessarily operated at cross purposes. (This is one of Viktor Orbán’s favorite refrains.) On the basis of these criticisms, Schmitt cynically dismissed parliament as little more than a Schwatzbude or “gossip chamber.” Following the lead of Donoso Cortés, he disparaged the bourgeoisie as spineless and effete, a class that was prone to endless discussion but incapable of a sovereign decision. During the 1920s Schmitt’s political hopes centered on the prospects of a Führerdemokratie, or leader-democracy, a term that for Schmitt and Schmittians is not at all oxymoronic: a form of political authoritarianism that was shorn of pluralism and constitutional interferences, a political system that replaced the liberal idea of “representation” with the “identity” between Führer and Volk, “leader” and “people.” Recall that Schmitt held that political obligation was grounded in “faith” and “myth” as opposed to rational consent. The identity between “leader” and “people” would be reinforced by an emotional bond.

    For ex-Marxists, Schmitt’s critique of political liberalism possessed numerous advantages. Unlike Marxism, it was not tied to an outmoded Hegelian philosophy of history that naively culminated in the grand soir of socialism. Nor was it wedded to an equally anachronistic understanding of the proletariat as the “universal class”: a class that, as Marx had claimed, epitomized all of the injustices of bourgeois society, while being systematically deprived of its benefits. From an empirical standpoint, the “laboring society” of nineteenth-century industrialism on which Marx had predicated his “critique of political economy” had, to all intents and purposes, disappeared. The demise of the factory system meant that the ideas of “class” and “class struggle” had likewise forfeited their centrality. Instead, as sociologists never tired of pointing out, “social stratification” and “status differentiation” had replaced “class” as the interpretive keys to understanding modern society. It was not hard to see that, shorn of “class struggle,” Marx’s theory of revolution had become obsolete.

    The 1960s confirmed that the locus and the nature of political struggle had fundamentally shifted. Conflict was no longer confined to the shop floor or the workplace. Instead, the “new social movements” demonstrated that political contestation had been pluralized. Feminism, gay liberation, the civil rights movement, and environmentalism had exposed the analytical inadequacies of “class analysis.” The new sites of struggle centered on “post-material values” and cultural themes that transcended the economistic focus of traditional Marxism. Among post-Marxists, Gramsci’s notion of “hegemony” played a crucial role, insofar as it directly addressed the cultural dimension that Marx’s critique of political economy had neglected.

    For left Schmittians searching for new forms of contestation in order to combat the “Washington consensus,” Schmitt’s rejection of political liberalism seemed to offer possibilities of radical struggle that the parliamentary left had long abandoned. Hence, Schmitt’s left-wing disciples enthusiastically embraced his “friend-enemy” opposition for infusing radical politics with an ethos of permanent conflict. As Chantal Mouffe argued in The Challenge of Carl Schmitt in 1999, Schmitt’s “concept of the political” anticipated a new era of “political agonism,” in which the consensual politics of liberal-democratic parliamentarism was swept away by a rising tide of dissent and conflict. Mouffe explained:

    In spite of [Schmitt’s] moral flaws… ignoring his views would deprive us of many insights that can be used to rethink liberal democracy… Schmitt’s thought serves as a warning against the dangers of complacency that a triumphant liberalism entails. His conception of the political brings the crucial deficiencies of the dominant liberal approach to the fore. It should shatter the illusions of all those who believe that the blurring of frontiers between Left and Right, and the steady moralization of political discourse, constitute progress in the enlightened march of humanity toward a New World Order and a cosmopolitan democracy.

    By trivializing Schmitt’s rather spectacular failings as “moral flaws,” Mouffe conveniently sidestepped the interpretive question that has preoccupied Schmitt scholarship for decades: the extent to which Schmitt’s celebration of dictatorship and Führerdemokratie during the 1920s presaged his conversion to Hitlerism in 1933. Moreover, was it not Mouffe herself who “blurred the frontiers between Left and Right” by enlisting the support of a conservative revolutionary thinker like Schmitt for the ends of radical democracy? Finally, in an era marked by unprecedented political polarization — “blue states” versus “red states” and so on — and ever-expanding ideological divisions, should not the construction of a common political discourse take priority over a “political agonistics” that would merely widen existing antagonisms?

    Although Schmitt’s “concept of the political” may have liberated neo-Marxists from the straitjacket of historical materialism, it left them fully exposed to the dangers of Schmitt’s own extremely dubious political choices. In fact, by embracing Schmitt’s “decisionism,” as well as his inflexible “anti-normativism” — “the exception is more interesting than the norm,” Schmitt proclaimed; “the norm is destroyed in the exception” — Schmitt’s left-wing partisans opened themselves up to the excesses of “left fascism”: a quasi-aesthetic celebration of “struggle for struggle’s sake” and “conflict for conflict’s sake”; a glorification of endless war that blithely scorned institutional constraints and guardrails. Finally, in keeping with Schmitt’s decisionism, his left-wing disciples turned a blind eye to the content and the ends of struggle.

    A decisionistic refusal to specify the ends of struggle has also been one of the hallmarks of the Schmittianism of the Argentinian political theorist Ernesto Laclau (who was Mouffe’s partner). In On Populist Reason, which appeared in 2005, Laclau described the content of political struggle as an “empty signifier.” According to Laclau, the meaning of struggle would be provided by the populist “leader,” who is tasked with aggregating the conflicting demands of the vox populi in order to achieve a new “hegemonic unity.” Whereas Mouffe’s neo-Marxism still felt obligated to pay lip service to the formal trappings of liberal democracy, Laclau held that the rituals of “parliamentarism” must be simply abolished for the sake of realizing the ever-elusive “general will.” Laclau uncritically adopted Schmitt’s argument in The Crisis of Parliamentary Democracy that “representative democracy” must be replaced by a plebiscitarian “leader-democracy”; and the Schmittian derivation of Laclau’s position may help to explain his neo-Leninism, his view that only the “leader” or “party” can provide the “people” with a unified, revolutionary consciousness, thereby raising it from its “fallen” condition as an inchoate, disaggregated mass. By constructing the “real people” against the “enemies” who have betrayed it — enemies that Laclau defined as the “oligarchy” or “elites” — the leader, so to speak, “extracts” the (real) people from its oppressors.

    In The Crisis of Parliamentary Democracy, Schmitt claimed that genuine democracy was predicated on a series of “homologies” or “identities”: “the identity of governed and governing, sovereign and subject, the identity of the subject and object of state authority.” If, in Schmitt’s teaching, “democracy is not antithetical to dictatorship,” it was in part because dictatorship preserved the identity between the ruled and the ruler. According to Schmitt, dictatorship realized this identity through a process of mass “acclamation” as opposed to parliamentary “representation.” In sum: “Dictatorial and Caesaristic methods [embody] the direct expression of democratic substance and power.” Laclau endorsed Schmitt’s condemnation of representative democracy as a liberal subterfuge that sublimated and distorted popular will, thereby obstructing the identity between “leaders” and “people.” He also substituted Schmitt’s proto-fascist notion of “symbolic representation” — a leftover from Schmitt’s earlier political Catholicism — for the liberal democratic conception of “mandate representation.” Thus, in keeping with Schmitt’s framework, Laclau exalted the leader as the “symbolic representative” of the “general will.” The problem with the fascist glorification of leadership, for Laclau, was only that it was too extreme, though it is hard to know what extreme means for such a proponent of unity in tyranny.

    Laclau’s repudiation of liberal democratic “proceduralism” — whose foremost twentieth-century representatives have been Ronald Dworkin, John Rawls, and Jürgen Habermas — was an important component of the left Schmittians’ struggle against Enlightenment rationalism. This resolutely anti-Enlightenment disposition helps to explain the theoretical alliance between post-Marxists such as Mouffe and Laclau, on the one hand, and poststructuralists such as Derrida and Foucault, on the other. It also provides support for Habermas’ wise suspicion, voiced during the 1980s, that “postmodernity definitely presents itself as antimodernity.” “This statement,” Habermas continued, “describes an emotional current of our times that has penetrated all spheres of intellectual life.”

    The left-wing cult of Schmitt often displayed a self-marginalizing, sectarian quality, which explains its difficulties in gaining acceptance outside of the insular confines of academe. The same cannot be said about the reception of his work among neoconservative policy circles following the 9/11 terrorist attacks, when Schmitt’s pronouncements about the imperatives of emergency governance and the fecklessness of liberal democratic “legalism” assumed canonical status.

    Even civil libertarians, while disagreeing sharply with Schmitt’s conclusions, begrudgingly acknowledged his diagnostic prescience as well as the timeliness of his legal-juridical insights. “Mr. Schmitt Goes to Washington” was the shrewd title of Alan Wolfe’s discussion of the hypertrophy of executive authority under George W. Bush’s presidency. As the political theorist William E. Scheuerman conceded in The End of Law: Carl Schmitt in the Twenty-First Century, “Like no other political or legal thinker in the last century, [Schmitt] placed the problem of emergency government on the intellectual front burner, and he consistently did so as to unsettle those of us committed to liberal and democratic legal ideals. At the very least, his ideas about emergency rule call out for a response from those hoping to preserve the rule of law.” And in 2006, in an article on “Preserving Constitutional Norms in Times of Permanent Emergencies,” the legal theorist Sanford Levinson acknowledged that, in light of the Bush administration’s sovereign disregard for juridical accountability, America was experiencing a “Schmittian moment.” As Levinson put it, “The single legal philosopher who provides the best understanding of the legal theory of the Bush administration is Carl Schmitt, a brilliant German theorist of Weimar, who became, not all together coincidently, the leading apologist for Hitler’s takeover of what Schmitt viewed . . . as a hopelessly dysfunctional German polity.” Elsewhere he noted Schmitt’s status “as the jurisprudential guru of the post-9/11 world, a world in which the state of exception itself had become the new norm; in other words, as many analysts and observers assumed at the time, a permanent state of exception.”

    Schmitt’s philosophy was indeed a great gift to the advocates of what John Yoo, a legal scholar and deputy assistant attorney general in the Office of Legal Counsel at the Department of Justice, called “the Unitary Executive.” It was an idea that enjoyed a good deal of political support in the Bush years. “All law,” Schmitt wrote, “is ‘situation law.’ The sovereign creates and guarantees the situation as a whole in its totality. He has the monopoly on this ultimate decision.” The “sovereign” has carte blanche to respond to the changing situation as he sees fit, unimpeded by prior constitutional norms. Of course the West Wing Schmittians either misunderstood Schmitt or were using him dishonestly. Schmitt was not defending a constitutional status quo ante — as his champions continued to claim, despite mounting evidence to the contrary — but facilitating the transition to a “sovereign dictatorship,” in keeping with his glorification of the “Age of Absolutism” as the archetype of political excellence. What on earth was his ghost doing in the White House?

    Constitutionalists such as Levinson and Scheuerman reacted to the “Schmittian moment” in American governance with dismay and alarm, but not all observers were equally troubled. In 2006, in Terror in the Balance: Security, Liberty, and the Courts, Eric Posner and Adrian Vermeule claimed that Schmitt’s arguments in favor of emergency powers unconstrained by congressional oversight and judicial review were exactly what the war on terror demanded. Posner and Vermeule derided opponents of torture for their “self-absorbed moral preciosity.” They described their goal in Terror in the Balance as “extracting the marrow from Schmitt and then throwing away the bones” — a rather infelicitous choice of metaphor. They sought also to combat the moral outrage of conscience-stricken civil libertarians: for example, the seven hundred law professors who, in December 2001, published a petition criticizing the Bush administration’s plan to employ military tribunals to try the Guantanamo Bay detainees. By arbitrarily reclassifying the Guantanamo captives as “unlawful enemy combatants,” the Department of Justice sought to strip them of the legal protections they would be entitled to under the Geneva Conventions as prisoners of war.

    The political challenges that the United States faced following the September 11 attacks were undeniably exceptional. For a democratic polity committed to the rule of law, however, the key to addressing the exception lies in ensuring that the response is, from a constitutional standpoint, proportionate to the emergency in question. In other words, the response must be calibrated with a view toward returning to the legal-constitutional status quo ante. In 1861, between the attack on Fort Sumter in April and the return of Congress in July, Lincoln acted — to employ Schmitt’s terminology — as a “commissarial” (or limited) dictator. But Lincoln relied on emergency powers in order to safeguard the Republic; his actions always presupposed a return to constitutional normalcy; and so his “exceptional” conduct exemplified the responsible use of executive authority. Moreover, Schmitt’s attempt to define “the political” in terms of the “friend-foe” distinction, his notion of politics as war, is fundamentally at odds with central aspects of the Western political tradition, for which “justice” and “virtue,” rather than “enmity,” are the raison d’être of politics. In American terms, certainly, Schmitt’s “concept of the political” is really a concept of the anti-political, of the breakdown of politics, as in 1861. Our system of government was designed for conflict, which Madison regarded as a permanent feature of human affairs, but there is nothing Darwinian — or Schmittian — about it.

    Meanwhile the left Schmittians viewed the Bush administration’s proto-Schmittian apotheosis of executive authority as a welcome confirmation of their own longstanding antiliberal prejudices. Already during the 1990s, they regarded Schmitt’s arguments about the bankruptcy of political liberalism as received wisdom. Under the influence of French theory, the left Schmittians enthusiastically accepted Schmitt’s claim that liberal “norms” were little more than a swindle. They held that, insofar as norms were prescriptive — hence “normalizing” — they predetermined the parameters of socially permissible behavior. By pathologizing deviance and non-conformity, norms were an essential component of disciplinary society’s implacable social control. The illusion of “autonomy” was merely one of the ruses that “power-knowledge” employed to deceive us into thinking that we were free. This is what happened when Foucault was added to Schmitt. As Foucault wrote, “[the] will to knowledge reveals that all knowledge rests upon injustice, that there is no right … to truth or foundation for truth … The instinct for knowledge is malicious, something murderous, opposed to the happiness of mankind.” In the eyes of Schmitt and Foucault, the dialectic of enlightenment culminated not in emancipation, but in catastrophe.

    Giorgio Agamben’s State of Exception, which appeared in 2004, was conceived as a response to Abu Ghraib and Guantanamo, and represented the consummate synthesis of Schmitt and poststructuralism. By fusing these currents, it raised anti-Enlightenment cynicism to new heights. Agamben maintained that there was nothing “exceptional” about the “state of exception” that, following the September 11 attacks, was declared by the Bush administration. It merely exposed the hidden capacity for violence that lurked beneath the peaceable façade of liberal democratic “legalism.” For Agamben, the state of exception already was “the dominant paradigm of government in contemporary politics.” He praised Schmitt effusively as the theorist who, more than any other, had exposed the hidden link between the state of exception and bourgeois legal convention: “The specific contribution of Schmitt’s theory is precisely to have made such an articulation between state of exception and juridical order possible. It is a paradoxical articulation, for what must be inscribed within the law is something that is essentially exterior to it: nothing less than the suspension of the juridical order itself.” According to Agamben, however, the corrective to the state of exception is not to return to the rule of law, or to make law genuinely effective, since Agamben held, following Schmitt, that the state of exception is — in ways that are never clearly specified — inscribed in the rule of law itself. The vaporousness of all this was a formula for political impotence. The left Schmittians have little to offer apart from an abstract populism, an ill-defined theory of “direct action” and “agonistic struggle.” But will the people follow Agamben into the streets? The notion is a little comic — though the comedy disappears in the face of Agamben’s claim that there are no essential historical differences between Guantanamo Bay and Auschwitz. They both express the hidden telos of “modernity.”

    And now the cult of Carl Schmitt has been again updated. Recently a new authoritarian ethos has emerged on the right, a political current proclaiming that liberal democracy has entered into a state of terminal crisis. The proponents of this ethos claim that the signs of liberalism’s morbidity are omnipresent and undeniable; only fools mired in anachronistic ways of thinking would deny their obviousness. The slogan that has been widely brandished as a cure-all for liberalism-in-crisis — and one that has been enthusiastically embraced by autocrats and their apologists — is “illiberal democracy.” Unsurprisingly, the political philosopher whose doctrines have been repeatedly invoked both to explicate the reasons for liberalism’s irreversible demise and to justify the transition to a new form of authoritarian rule that will save us from its insolvency is Carl Schmitt.

    Schmitt has been canonized by a new generation of “post-liberal” conservatives who have been heartened by the political “successes” of Donald Trump and the Hungarian strongman Viktor Orbán. Following Schmitt, they have concluded that constitutional guarantees of civic freedom and legal equality must be jettisoned, since, as enablers of an anarchic centrifugal individualism, they bear primary responsibility for Western civilization’s precipitous unraveling. Last summer Orbán received an enthusiastic reception at the CPAC convention in Texas, prompting a reporter at the Washington Post to describe the “Orbánization” of American conservatism: “to right-wingers steeped in anti-liberal grievance, Hungary offers a glimpse of culture war victory and a template for action.”

    The uncritical reliance on Schmitt’s positions attests to a revolting lapse of historical memory. Equally troubling, when pressed to identify a political successor to liberalism, the post-liberal responses track Schmitt’s ill-advised endorsement of “leader-democracy.” Schmitt’s argument that liberalism is incoherent and self-negating — since, as a form of political rule, it is inherently averse to the “authority” and “order” without which “political rule” itself becomes meaningless — suffuses Patrick Deneen’s influential anti-liberal broadside, Why Liberalism Failed. As Deneen observes, parroting Schmitt, “democracy, in fact, cannot survive under liberalism.” The shameful lineage is clear. As Robert Kuttner noted appositely in the New York Review of Books, “Deneen’s thinking echoes an older line of reactionary argument on the folly and perversity of liberal democracy that extends back from twentieth-century anti-liberal intellectuals, like Leo Strauss and fascist theorists Carl Schmitt and Giovanni Gentile, to monarchic critics of liberalism, like Joseph de Maistre.” The fundamental flaw of Why Liberalism Failed is that its understanding of liberalism’s inadequacies derives from the worldview of a thinker who was rabidly opposed to liberalism and everything it stood for. Faulty premises yield faulty results. Schmitt’s representation of liberalism’s deficiencies contains a kernel of truth, though he was hardly the first or the last to worry about the labyrinthine tendencies of liberal governance; but his portrait of liberalism was a grotesque caricature — partial, self-serving, hysterical, and exaggerated, just like Deneen’s.

    Deneen is not alone in his Schmittian amen corner. In A World After Liberalism: Philosophers of the Radical Right, Matthew Rose, a contributor to First Things, sounds the death-knell not only of liberalism but also of traditional conservatism. According to Rose, in its place there will now arise “a new conservatism, unlike any in recent memory… Ideas once thought taboo are being reconsidered; authors once banished are being rehabilitated.” The inspiring doctrines of the “Radical Right,” whose ideas Rose seeks to retrofit for contemporary political use, emerged in Germany after World War I. “Known as the ‘Conservative Revolution’… its chief figures included Carl Schmitt, Ernst Jünger, Arthur Moeller van den Bruck, and Oswald Spengler.” Rose exalts these thinkers for their willingness to explore themes of “cultural difference, human inequality, religious authority, and racial biopolitics,” notwithstanding the fact that their approaches “were widely viewed as invitations to xenophobia and even violence.” That view was not only widely held, it was also true.

    In a similar vein, Adrian Vermeule, the Harvard legal scholar and Catholic “integralist,” has invoked Schmitt’s adage that “all political concepts are secularized theological concepts” to indict the theological ferocity with which liberalism unrelentingly (in his account) has advanced its agenda at the expense of traditional communities, belief-systems, and lifeworlds. As with Deneen, what Vermeule’s arguments lack in precision and subtlety, they compensate for in sheer force of rhetorical will. Like Schmitt, Vermeule prioritizes voluntas over ratio. Schmitt’s adage was part of his more general assault on the mentality of modernity. Never mind that, in contrast to theology, modernity relies for its intellectual methods on evidentiary criteria that are “public” and “generalizable.” Its proposals and claims are open to discussion and criticism. Unlike the precepts of “political Catholicism” with which Vermeule strongly identifies, they are, as a matter of principle, fallible and non-dogmatic. Does Vermeule leave his faith at the seminar door or his reason at the church door?

    There is no surer sign of intellectual and moral bankruptcy than an association with the thought of Carl Schmitt. The persistence of his cult into the present day is yet another of our time’s many unhappy omens. But as long as the hard work of a free and fair society feels too onerous for some of its intellectuals, the repulsive Schmitt will live; live again and be repudiated again. 


    Surrealism’s Children

         Back when I was an idealistic young soul, I enrolled in a PhD program in French and Comparative Literature, intent on making a career in academia. Those were the days when New Criticism and Semiotics held sway, and texts were to be read without interference from outside influences. The approach we were taught, boiled down, was that all a reader needed to know about a poem or a work of prose could be found on the page, without reference to historical context, authorial biography, or any other distractions. In class after class, we dissected poems by Ronsard and Rimbaud, the Symbolists and the Surrealists, peeling back layer upon layer of manifest and latent meaning. It was intoxicating stuff, but I couldn’t escape a nagging question: What was the point of it? Wasn’t it all a bit too removed from life? Wasn’t literature supposed to tell us about more than just its own internal machinery? Unable to resolve these questions, I handed in my Master’s thesis and said goodbye to all that.

         One effect of having left academia prematurely is that I spent the following decades still grappling with the appropriate balance between art and life, and the role that literature, literary studies, and the humanities in general have to play in our dealings with this fraught and confusing world — a world that, increasingly, seems resistant to the kinds of challenges and provocations that art and literature are best suited to pose. Is literature meant to reinforce our convictions, or to destabilize them? Should art be a safe space or a dangerous space, and what does that mean? What is the role of the off-putting, the upsetting, the offensive, and the shocking in our study and consumption of the humanities? Can art still be shocking in this day and age? And who, exactly, is being shocked?

         The answers used to be fairly straightforward, or so it seemed. The progressive avant-garde duly épaté’d the bourgeois, who duly responded with howls of outrage as their cherished shibboleths — God, king, country, the army, the Establishment, what have you — were dragged through the slime — often in language and aesthetic forms that were themselves a provocation. Provocation was even a kind of social role, an expected feature of the societal landscape. But things are no longer so simple.

         These days we find ourselves in a situation in which supposedly contradictory viewpoints circle each other, ouroboros-like — and become virtually impossible to distinguish. Conservatives vent their offense by banning an increasing number of books in schools and libraries, while college professors are actively discouraged from teaching material that might ruffle student sensibilities and provosts’ offices disinvite speakers deemed too hot to handle. Yet what better time than in college to have sensibilities ruffled? When will students ever have a more free and insulated space in which to rub shoulders with controversial ideas, and to develop the skills needed to confront those ideas in the world — that is, to view them with greater insight and deeper understanding, if only to then refute them? College is, or should be, an instruction in controversy and its skills. For this reason, the curricular exclusions on current-day campuses not only curtail what the educational experience has to offer but also, particularly in the study of the humanities, undermine what is most valuable about the discipline: its challenge to comfort and certainty, its impetus to make us think harder and more independently.

         We are all familiar with the Golden Age of Bourgeois Indignation, in incidents ranging from theatergoers howling at the premiere of Victor Hugo’s Hernani in 1830, to attendees at the Paris Salon in 1863 trying to slash Manet’s Déjeuner sur l’herbe, to audiences throwing tomatoes and raw steaks at Dada performers in the 1920s. Flash forward a century and the dynamic has reversed: as Laura Kipnis has observed in these pages, it’s not the rubes and the philistines who get rattled now, but rather the progressives and the illuminati who find it hard to stomach the provocations. “At some point,” she writes, “offendability moved its offices to the hip side of town.” Nor, even, is outrage the exclusive privilege of the avant-garde: in the current climate, mainstream art and literature can just as likely get dinged as the cutting-edge stuff, in a free-for-all of offense.

         The question is, what do we sacrifice by avoiding such offense? It’s not always pleasant to be rattled out of one’s complacencies — the entire history of the avant-garde banked on it — but in losing the displeasure of injury, are we also losing the pleasures of discovery, and of self-discovery, that can accompany it? The price of comfort is often stagnation.

         And there’s a more immediate concern as well: in this time of anonymous reputation-bashing and swift retaliation against unwelcome opinion — the so-called “cancel culture” — the danger is not so much that people’s ratings will suffer and their speaking engagements will be revoked, but that they will stop saying anything at all for fear of being boycotted or “shamed.” We have too many crises to confront, none of which can be meaningfully addressed in 280-character soundbites, for those who can see beyond partisanship to refrain from making valid contributions. Trying to avoid offense in every instance is a fool’s errand — you can’t please all of the people all of the time — and holding back consequential and constructive insights, even if unpopular, impedes the free exchange of ideas and accomplishes nothing.

         From its tumultuous start, the Surrealist movement was out to shock. The flurry of activity that accompanied its debut in late 1924 and early 1925, including the broadside A Corpse (which spat on the much beloved and recently deceased novelist Anatole France, an act of cultural blasphemy), the aggressive prose and propositions of André Breton’s Manifesto of Surrealism (“Beloved imagination, what I most like in you is your unsparing quality”), and the common cause that the group tried to make with the reviled Communist Party, were not only steps toward defining a philosophical program, but also ways of slapping bourgeois proprieties repeatedly across the face.

         This was true not only of their actions but also of their proclaimed choice of role models, many of whom would not have passed a modern-day ethics test. The most glaring case in point is the Marquis de Sade, poster boy for aberrant sexuality and one of Surrealism’s lauded heroes. Many more examples can be found in Breton’s Anthology of Black Humor, a veritable rogue’s gallery of dubious precursors. Sade’s novels are jam-packed with physical and mental abuse, coprophagia, cannibalism, torture, rape, and murder perpetrated indiscriminately against women and men of all ages: he did not lend his name to a major psychopathology lightly. Nor were these acts merely theoretical, for the man practiced what he preached — not to the extent portrayed in his books, not by a long shot, but enough to keep him behind bars for nearly half his seventy-four years, first under the monarchy, then under the French Revolution, then again under Napoleon. And we must be clear: Sade was no innocent victim. He used his wealth and his privilege to indulge in prodigious sexual predation, like an eighteenth-century Jeffrey Epstein. As such, his life and his books have been a thorn in the side of progressive-minded thinkers for the past two centuries. How can you promote freedom of expression and still defend that?

         And yet his work has been defended, and persuasively so, by such formidable intellectuals as Angela Carter, Roland Barthes, Maurice Blanchot (whose monograph Lautréamont and Sade owes much to Surrealist thinking), Susan Sontag, and Michel Foucault, to name just a few. Why would they do this? One answer comes from Simone de Beauvoir, in her landmark analysis of Sade’s writings from 1953, titled, appropriately, “Must We Burn Sade?” Sade, she declared, “drained to the dregs the moment of selfishness, injustice, and misery, and he insisted upon its truth. The supreme value of his testimony is the fact that it disturbs us. It forces us to reexamine thoroughly the basic problem which haunts our age in different forms: the true relation between man and man.” In a rare moment of convergence between de Beauvoir and the Surrealists, the poet Paul Éluard anticipated this view in 1937 in his book L’Évidence poétique, writing: “Sade wanted to restore to civilized man the power of his primitive instincts . . . He believed that out of this, and this alone, true equality would come. Since virtue is its own reward, he labored, in the name of everything that suffers, to drag it down and humiliate it… with no illusions and no lies, so that those it normally condemns might build here on earth a world on the immense scale of mankind.”

         The arguments for and against Sade are many and complex, and they are also largely familiar. On one side, de Beauvoir defends him as a cold-eyed realist. On the other, writers such as Andrea Dworkin warn that his books, like any form of pornography, could incite acts of violence, especially against women. Both arguments have validity, and it is not a cop-out to admit that there is no single answer. If anything, the true energy of art and literature might reside in the questions they pose rather than the certainties they offer. What I will say is that, more than the events portrayed in his books, which are so over-the-top that they often become plainly absurd, perhaps the most shocking thing about reading Sade is how un-titillating so much of it is. As you plow through description upon minute description of various improbable scenarios, their inflexible regulation ultimately undermines any eroticism they might have been meant to contain. Ironically, the parts of Sade’s work for which he is the most infamous are actually the most tedious.

         The intriguing part, to pick up from De Beauvoir, is Sade’s lucidity. Interspersed with the horrific actions are many pages of philosophy — I would go so far as to call it moral philosophy — that take an unsparing and admirably honest view of human interactions, and strip away the pieties with which we have comforted ourselves for two millennia. Here, for instance, is one of Sade’s fictional stand-ins instructing an eager young pupil about vice and virtue:

    Nature has endowed each of us with a capacity for kindly feelings: let us not squander them on others . . . Let us feel when it is to [our] advantage; and when it is not, let us be absolutely unbending. From this exact economy of feeling, from this judicious use of sensibility, there results a kind of cruelty which is sometimes not without its delights. One cannot always do evil; deprived of the pleasure it affords, we can at least find the sensation’s equivalent in the minor but piquant wickedness of never doing good.

    Nietzsche is not too far away.

         It is worth recalling that what landed Sade in the Bastille was not so much his acts of cruelty as the fact that he had sex with both men and women, and perhaps even more so that he was accused (though never convicted) of blasphemy against the highly influential and politically connected Church, which did not countenance challenges to its celestial worldview, or its sovereignty. It is also worth remembering that his books (such as the above-quoted Philosophy in the Bedroom) were written during the long years he spent in various prisons and asylums, and that in many ways their vindictive savagery — their sadism, if you will — can be read as a howl of rage against captivity, no matter which political regime was in charge. It is again De Beauvoir who notes that “Sade does not give us the work of a free man. He makes us participate in his efforts at liberation. But it is precisely for this reason that he holds our attention.” The value of a figure such as Sade, in other words, lies not in the acts that he describes but in the ethical challenges that he poses. It is one thing to create from a position of moral good, as many great writers and artists have done. But a steady diet of such work gives you only half the story, and arguably not the more necessary half.

         So the question remains: Is Sade worth reading on these grounds, or should he indeed be burned? Is the offense that his writings constitute a reason for locking his work away, as he himself was locked away — and as his manuscripts were locked away for decades in the so-called “Hell” section of the French National Library? Or do his “efforts at liberation,” or his decidedly unsentimental views on society, offer reasons to look beyond the parts we find distasteful? Is the threat posed by Sade that his books beget actual horrors, as Dworkin argued? Or that, long before Freud came along, they upended our complacent belief in the basic altruism of people and forced us to observe human nature at its ugliest and most unnerving?

         The questions are all the more relevant in that, unlike so many of his contemporaries, Sade’s impact did not fade with time. In 1959, the Canadian conceptual artist Jean Benoît, under the auspices of the Surrealist group, performed a piece called The Execution of the Testament of the Marquis de Sade. Benoît was well aware that his invitation-only audience would not be easy to impress: gathered that evening in the commodious Paris apartment of the poet Joyce Mansour were some one hundred “writers, poets, painters, filmmakers, critics . . . women in evening gowns . . . their nails painted blue or green… As well as a woman in black velvet whose nipple fit through a small hole in her dress.” At a prearranged signal, the attendees stood in a semicircle facing a stage area, their ears assaulted by the prerecorded sounds of an erupting volcano and readings from Sade’s works. Benoît then appeared, dressed in an ornate black costume with sharp protrusions over his chest and legs, a grotesquely extended erection, and a cape from which blood seemed to be dripping. Piece by piece his costume was slowly removed by his wife, the artist Mimi Parent, revealing his nude body to be painted all in black, his heart covered by a red star (Sade’s emblem). With a shrill cry, Benoît grabbed a red-hot iron placed nearby and branded the word “Sade” into his flesh, squarely over his heart. He then held out the still-smoking iron to his audience and demanded, “Who’s next?” The Chilean painter Roberto Matta was so carried away by the performance that he spontaneously rushed up, tore open his shirt, and seared his own left breast.

         Benoît’s performance marks a relatively anomalous point on the timeline of outrage that I alluded to above: the members of the audience were clearly rattled — in Matta’s case, to the point of voluntarily broiling his own flesh — but they were not offended. They had come in search of provocation and they were not disappointed. How that performance might fare sixty years on, in our own over-cautious day, is another story altogether.

         Sade, as I have noted, was one of Surrealism’s foundational pillars. Another was his spiritual great-grandson, the Montevideo-born poet the Comte de Lautréamont. Lautréamont, who died in 1870 at the age of twenty-four under mysterious circumstances, is mainly remembered as the author of the prose poem The Cantos of Maldoror. The narrative, such as it is, follows the picaresque and often hallucinatory adventures of its eponymous anti-hero, who styles himself the personification of evil and who is engaged throughout most of the book in a battle to the death against God, or as he calls him, the Creator (when he isn’t calling him worse). Maldoror is full of wild invention and grimly exhilarating humor over an underlayer of deep torment; the list of writers, artists, musicians, and filmmakers it influenced stretches from Dalí and Godard to Jim Morrison, John Ashbery, and the Beats. It is also full of what we would now consider child abuse, misogyny, sadism (that word again), animal cruelty, and various other atrocities. No doubt it would have been banned when it was first printed in 1869, had anyone actually noticed its existence at the time.

         But then, shortly before his death, the author pulled an about-face. Immediately after celebrating evil in Maldoror, Lautréamont — this time under his birth name, Isidore Ducasse — published a slim pamphlet called Poésies, consisting of brief aphorisms in prose. On closer examination, it turned out that many of these aphorisms were actually canonical maxims by moralists such as Pascal and La Rochefoucauld, familiar to any French schoolchild, but turned on their heads to celebrate positivity and humanism. So, for instance, where Pascal had written, “Man is only a reed, the weakest in nature . . . a vapor, a drop of water is enough to kill him,” Ducasse countered with, “Man is an oak. Nature contains nothing sturdier,” and so on. As he announced in a programmatic headnote, “I replace melancholy with courage, doubt with certainty, despair with hope, wickedness with good . . . skepticism with faith.”

         Many, Ducasse included, have described Poésies as a “correction” of Maldoror, but we might also see it as a counterpoint. Ducasse began with passion and rage, the sparks needed to light the fire and set revolutions in motion, then continued with the more sober and optimistic reflection needed to bring them to fruition. Faced with the cynicism and the defeatism of his own frenzied fantasies, as well as with the suppression of human grandeur in moralists such as Pascal, Ducasse replaced “despair with hope,” offering a lesson of agency and uplift from within the very corpus of repression. Like Rimbaud’s renunciation of poetry and disappearance into the African desert, which sealed his literary reputation, Ducasse’s repudiation of his own jet-black apoplexies highlighted both the torment and its negation in an endless dialectical spin-cycle, an enigma forever to be read and pondered, never to be solved.

         In embracing transgressive figures such as Sade and Lautréamont, the Surrealists were not merely straining for provocative effect. More substantially, the movement promoted itself as one of the great currents of liberation in the twentieth century, the resolute enemy of stifling social and moral conventions, and this extended far beyond literature and art. Its calls, in the 1920s and 1930s, for rethinking the status of women and people of different races, for freedom of imagination and sexuality, and for political revolution directly influenced the protests of May ’68 and are not unlike calls that have sounded more and more urgently in recent years. It is also at the origin of many things we now take for granted, from the imagery that we respond to, to the humor that we appreciate, to the sense of strangeness that we unthinkingly call “surreal”; and it laid the groundwork for a larger degree of candor and personal engagement in artistic expression, to which today’s productions owe a great deal.

         In 1922, André Breton, Surrealism’s founder and primary theorist, declared: “Poetry, which is all I have ever appreciated in literature, emanates more from the lives of human beings — whether writers or not — than from what they have written or from what we might imagine they could write.” In other words, poetry was less about words on a page than about a living attitude, or, as Breton said, a “specific solution to the problem of our lives.” It was an early version of what in the 1960s would be expressed as “the personal is political,” and it implicated the poet and the artist in the work they created. It was not enough for your art to say the right things; if you were going to contribute a “specific solution,” you also had to walk the walk.

         What makes Surrealism such a good case study for our present context is precisely the disparity between the ideals that it promoted and the results that it often delivered, or the behavior that its members displayed. On the one hand, unlike their more sulfurous role models, the Surrealists might vehemently advocate for a positive and synthetic vision: the “point of the mind,” as Breton famously put it, “at which life and death, the real and the imagined, past and future, the communicable and the incommunicable, high and low, cease to be perceived as contradictions.” But they could also prove just as corrosive as Sade or Lautréamont, and were never shy about expressing their dislikes.

         Their early version of “cancel culture” could take various forms, from the twin lists headlined “Read / Don’t Read” — the “Don’t Reads” being the longer of the two and ending in “etc., etc., etc.” — to the so-called trial of Maurice Barrès. A renowned novelist, Barrès had been admired by Breton, Louis Aragon, and other future Surrealists in their formative years as a paragon of personal freedom, the “prince of youth,” but with World War I he had hardened into a super-patriot and arch-conservative. Feeling betrayed, Breton and his friends staged a mock trial of Barrès in May 1921 before a packed house at the Hôtel des Sociétés Savantes, an ornate late-nineteenth-century edifice on the aptly named Rue Danton in Paris’ sixth arrondissement. The charge: “conspiracy against the security of the Mind.” When the “court” ultimately handed down a sentence of twenty years’ hard labor for Barrès, Breton, who had pushed for the death penalty, was disappointed. Barrès himself, of course, was nowhere near the proceedings and probably couldn’t have cared less — this was, after all, before social media existed — but Breton had prosecuted his case with such ferocity that some wondered what might have happened had Barrès been present. (Fifteen years later, they would get their answer from Moscow.)

         Much less symbolic were the Surrealists’ true cancelations: the periodic exclusions of writers from within their own ranks. In 1926, not long after the movement’s founding, several of the original personnel were drummed out of the group for not embracing its turn toward Communist politics. This was followed by other purges over the succeeding years, resulting in the loss of many prominent members, some of whom were pushed to the brink of suicide, or beyond it, for failings that ranged from practicing hack journalism to political waffling to not condemning vehemently enough those whom the group had condemned. While Surrealism promised many freedoms, the one freedom it apparently could not abide, and toward which Breton could react with remarkable savagery, was the freedom to dissent. In the name of emancipation it practiced excommunication.

         These excommunications were in part a referendum on loyalty — are you with the program or not? — and sometimes simply an outlet for personal spite, but more fundamentally they were an interrogation of identity: you are defined by the company you keep, by both your actions and theirs, and tolerating intolerable elements reflects back on you. Returning to Breton’s statement about “poetry emanating from life,” what happens when the life no longer lives up to the demands of the poetry? And more to the point, how successfully did Surrealism as a collective live up to its own demands? Let us consider three aspects of Surrealism’s engagements: with race, with sexuality, and with gender.

         In matters of race, the Surrealists publicly aligned themselves with people of color, denouncing French colonialism and the inequities it fostered at home and abroad. In 1941, under the collaborationist Vichy government of Marshal Pétain, Breton invited the Afro-Cuban artist Wifredo Lam to illustrate one of his books, telling a journalist from the conservative Le Figaro that the choice of Lam was meant “to make clear just how sympathetic I am to Marshal Pétain’s racist concepts.” (To Breton’s disgust, the newspaper omitted that particular quote from the published interview.) Not long after, in Martinique, he met and collaborated with the poets Aimé and Suzanne Césaire, founders of the Négritude movement. And in 1946, on a visit to Haiti, Breton told a student audience: “Surrealism is allied with people of color . . . because it has always been on their side against every form of white imperialism and banditry.” By many accounts, the popular uprising that deposed the repressive Haitian president Élie Lescot several weeks later derived some of its inspiration from Breton’s statements.

         All well and good. But figures such as Lam and the Césaires — as well as Hector Hyppolite, Hervé Télémaque, René Ménil, Léopold Senghor, Jules Monnerot, Pierre Yoyotte, and Ted Joans — remain an oft-neglected minority among the many who passed through Surrealism, and only recently have such artists and writers begun receiving serious attention from historians of the movement. The most meaningful blend of Surrealism with a specifically black vision, Afro-surrealism, did not originate within the European groups, but had to develop on its own, independent of them, and with distinct differences.

         And this does not take into account other non-European Surrealists, such as César Moro, Fernando Lemos, María Izquierdo, and Frances del Valle; or Mahmoud Sa’id, Fouad Kamel, and Georges Henein; or Kansuke Yamamoto, Toshiko Okanoue, and Shūzō Takiguchi. Most of these figures, while retaining a greater or lesser identification with the movement, ultimately had to craft a version of Surrealism that spoke to their own cultural realities; whereas the Paris, Brussels, and London groups, even while promoting a broad internationalism, still kept their focus on a predominantly white European set of references. There is more than a trace of exasperation in this remark by the Japanese Surrealist Takenaka Kyūshichi, from 1930, only six years after the movement was launched: “True Surrealists take a step beyond Breton. They are not confined by the Surrealism of Breton’s ‘Manifesto.’”

         When it comes to sex, while Surrealism enjoys a libertine reputation in the popular imagination, all licentiousness all the time, the reality was that these bourgeois young men were rather prudish. Yes, they believed in “free union” and “mad love” and abhorred marriage as an institution (even though many of them were married); and yes, they promoted the idea that a grand passion could excuse anything — though, not surprisingly, that particular freedom generally ran in only one direction. The inquiries on sex that the Surrealists conducted in the late 1920s and early 1930s were remarkably frank for the time, but also rife with the prejudices of the era, and quite a few strictures were voiced — particularly by Breton, who pontifically denounced sex workers, multiple partners, promiscuity, women’s orgasms, and male homosexuality. Needless to say, most of these sessions were among men only; in one of the few that a woman did attend, she listened for a while, then remarked, “You boys need to learn a few things.”

         Which brings us to the most blatant and the most complex of the Surrealist double-standards, the status of women in this allegedly emancipatory and revolutionary movement. Where to begin? We could start with René Magritte’s well-known canvas from 1929, I Do Not See the [Woman] Hidden in the Forest, which depicts a fairly classic painted nude framed by photographic portraits of sixteen Surrealist men: the fact that the men have their eyes closed does not make their gaze any less male. Or we could start with the many Surrealist visual works depicting the female form dismembered, disfigured, or otherwise deformed. Or with the many written works in which women appear as muses, lovers, inspirations, torturers, enigmas, enlighteners, or sorceresses, but almost never as autonomous individuals.

         Conversely, we could cite this passage by Breton, written as World War II was drawing to a close, in which he calls for a matriarchal social order: “May we be ruled by the idea of the salvation of the earth by woman, of the transcendent vocation of woman . . . The time has come to value the ideas of women at the expense of those of men, whose bankruptcy has become tumultuously evident today.” But since, good intentions aside, this still sounds a lot like mansplaining, let us bring in some other voices — for example, the British Surrealist painter Ithell Colquhoun, who reflected that despite pronouncements such as the one by Breton quoted above, “most of [his] followers were no less chauvinist for all that. Among them, women as human beings tended to be ‘permitted not required.’” The painter Leonora Carrington, when I once asked her for her opinion of Surrealism, called it simply “another bullshit role for women.”

         We can easily understand the resentment and the rage. On the one hand, Surrealism claimed to be one of the most woman-focused and erotically free movements in the history of literature and art. Alongside the paintings and photographs that mangled women were an equal number that exalted them, or at least the idea of them, as superior beings attuned to natural and supernatural forces beyond the reach of men. Within the movement, women were honored as muses and creative inspirations, and they were promised an alternative to the stifling roles that mainstream society expected them to fill.

         But all too often these grand promises simply fell flat, and the women who came to Surrealism as artists and writers with their own talents and ambitions — Leonora Carrington, Dorothea Tanning, Remedios Varo, Joyce Mansour, Lee Miller, Leonor Fini, Meret Oppenheim, Eileen Agar, Jacqueline Lamba, Gisèle Prassinos, Kay Sage, Nelly Kaplan, and others — or who claimed for themselves the same freedoms in lifestyle and beliefs that the men did were disappointed to find themselves facing obstacles from their own peers that hardly differed from those of society at large. Colquhoun, for instance, was expelled from the British Surrealist group because her interest in witchcraft was deemed inappropriate by her male colleagues, who otherwise championed all kinds of transgressive anti-rationalisms in their philosophy and their work.

         Given all this, it would be too easy to brush away Surrealism as just another narcissistic patriarchal exercise that failed to live up to its big claims. The more seductive the promise, the more painful the letdown. But there might be a better lesson to be drawn from the experience of women in the movement. Constituting a “minority within a minority,” as Eileen Agar put it, they forged their own freedom and tilled their own ground, mapping out — as the critic Sacha Llewellyn writes — “their own autonomous identities . . . transforming and appropriating the iconography of women that male Surrealists ascribed to them . . . to produce radical works centered on the female condition.” The point here is not to discard Surrealism, but instead to identify the moments it offers in which prohibition becomes opportunity. Rather than repudiating Surrealism for its failures, the women and the artists of color who gravitated toward it took its promise of liberation and made it their own. The best criticism is neither rejection nor apology, but constant reevaluation and regeneration.

         What does it mean for the humanities now if art, writings, and philosophies deemed unworthy simply get thrown onto the trash heap of history because they fail to conform to prevailing notions of truth, goodness, or beauty? Where will fruitful challenges originate, if not in studying and debating works that offend and shock, or that fail to keep their promises? Must we “burn” everyone who, in their poetry or their person, does not live up to the ideals that we wish them to embody?

         Over the years, as I continued to ponder my philosophical quandary from graduate school, I came to realize that the point of reading literature so closely was to learn how to read the world, that is, the fine print of the world, the signs and underlying messages encoded in people, things, events, exchanges, and surroundings. Paradoxically, given the emphasis on hermetic concentration, what this training really provided was a wider and deeper sensitivity to the context surrounding these people and events. This involved opening my eyes and my mind to ways of thinking and expressing that I had not yet encountered — some of which troubled or upset me, but all of which were crucial steps in my learning how to understand my environments with discernment and with broadmindedness.

         Writers such as Sade, Lautréamont, and the Surrealists are problematic in that they challenge the values we like to think we celebrate, or confront us with frustrations and disappointments; but as such they also open doors toward a meaningful response. They were desperate people living and creating in desperate and pivotal times, whether the French Revolution, the Franco-Prussian War, or the aftermath of World War I. As we live through our own desperate and pivotal era, we might do well to ask what work such as theirs, however removed or outdated it might seem at first, can tell us about our own volatile experience.

         One of the most pernicious aspects of our fractious political and cultural landscape — alongside the decimation of our personal liberties, the erosion of civic discourse, governmental paralysis in the face of rising gun violence, and so much else — is the intolerance that it has fostered: not only the caricaturish intolerance for the values of diversity and inclusion that the liberal arts are meant to promote, but a resistance, even a fear — all along the political spectrum — toward engaging with viewpoints that we find alien and distressing, precisely because they are alien and distressing. As if we had somehow lost our ability to speak to things that we abhor in other than extreme ways. We demand, we shout, we insist, which is sometimes the necessary response. But what we must not lose is the ability to talk, and more than that, to listen, to weigh, to ponder, to empathize.

         If we are to be full-fledged human beings, we must learn to enter into other points of view, including those that antagonize our deepest beliefs. Even as we disagree with or fight them, we must recognize them as human expressions. Saving ourselves from a reality soiled by ever more entrenched parochialism and flattened by defeatism and despair will depend on our capacity to evaluate lucidly, to look beyond buzzwords and bubbles, to see past labels that are often just a surrogate for thinking. There are no viable surrogates for thinking. It will depend on our ability and our willingness to contemplate the lessons offered by the humanities with minds wide open, and to interrogate our own and others’ beliefs with honesty, compassion, and courage. Knowing how to read the world with an open mind is not just a life hack; it is also a tool for survival.

         The Surrealists, for all their obsession with grasping the unconscious, were not particularly known for empathy. While some of their pronouncements, such as Breton’s remark about the “point of the mind,” might be read as aspirational, they generally preferred to operate in the vituperative register — which some, like Robert Desnos, elevated to a fine art. And yet they knew how to listen, and to listen effectively. Before they went on the attack, they did the necessary work. In 1949, for example, Breton unmasked a forged poem by Rimbaud strictly on the basis of intuitive affinity. His 68-page pamphlet, Caught Red-Handed, takes aim at the literary critics who fell for the hoax (basically all of them), dismantling their arguments point by point in a humiliating show of superior understanding. Not very friendly, perhaps, but in its wounding way more respectful: Breton actively listened to what those critics had to say before offering his withering refutation. There is a lesson in that. We can, if we must, endure a culture of intellectual incivility; sometimes it might even be beneficial. What we must not abide is a culture of intellectual abdication, or intellectual cowardice.