Digitization, Surveillance, Colonialism

    As I write these words, articles are mushrooming in newspapers and magazines about how privacy is more important than ever after the Supreme Court ruling that overturned the constitutional right to abortion in the United States. In anti-abortion states, browsing histories, text messages, location data, payment data, and information from period-tracking apps can all be used to prosecute both women seeking an abortion and anyone aiding them. The National Right to Life Committee recently published policy recommendations for anti-abortion states that include criminal penalties for people who provide information about self-managed abortions, whether over the phone or online. Women considering an abortion are often in distress, and now they cannot even reach out to friends or family without endangering themselves and others.

    So far, Texas, Oklahoma, and Idaho have passed citizen-enforced abortion bans, according to which anyone can file a civil lawsuit to report an abortion and have the chance of winning at least ten thousand dollars. This is an incredible incentive to use personal data towards for-profit witch-hunting. Anyone can buy personal data from data brokers and fish for suspicious behavior. The surveillance machinery that we have built in the past two decades can now be put to use by authorities and vigilantes to criminalize pregnant women and their doctors, nurses, pharmacists, friends, and family. How productive.

    It is not true, however, that the overturning of Roe v. Wade has made privacy more important than ever. Rather, it has provided yet another illustration of why privacy has always been and always will be important. That it is happening in the United States is helpful, because human beings are prone to thinking that whatever happens “over there” (say, in China now, or in East Germany during the Cold War) to those “other people” doesn’t happen to us — until it does.

    Privacy is important because it protects us from possible abuses of power. As long as human beings are human beings and organizations are organizations, abuses of power will be a constant temptation and threat. That is why it is supremely reckless to build a surveillance architecture. You never know when that data might be used against you — but you can be fairly confident that sooner or later it will be used against you. Collecting personal data might be convenient, but it is also a ticking bomb; it amounts to sensitive material waiting for the chance to turn into an instance of public shaming, extortion, persecution, discrimination, or identity theft. Do you think you have nothing to hide? So did many American women on June 24, only to realize that week that their period was late. You have plenty to hide — you just don’t know what it is yet and whom you should hide it from.

    In the digital age, the challenge of protecting privacy is more formidable than most people imagine — but it is nowhere near impossible, and every bit worth putting up a fight for, if you care about democracy or freedom. The challenge is this: the dogma of our time is to turn analog into digital, and as things stand today, digitization is tantamount to surveillance. 

    Behind the effort to digitize the world there is a corporate imperative for growth. Big tech companies want to keep growing, because businesses are rarely stable animals — companies that are not on their way up are usually on their way down. But they have been so successful and are so gigantic that it is not easy for big tech to find room to grow. Like Alice in Wonderland, trapped in the rabbit’s house after growing too big, tech companies have their arms and legs sticking out the windows and chimney of the house of democracy. One possibility for further growth is to attract new users. But how to find fresh blood when most adults with internet access worldwide are already your users? One option, which Facebook is unscrupulously pursuing, is to focus on younger and younger children. The new target group for the tech company is children between the ages of six and nine. This option is risky. There are several investigations into Facebook and Instagram for knowingly causing harm to minors. What, then, are the other options for the expanding behemoths? 

    The preferred option these days is to digitize more aspects of the world. Despite the rapid advancement of digital technologies, most of our reality is still analog, even after the onset of covid. Most of our shopping is offline. Most readers prefer paper books. Our homes and our clothes, many of our conversations, our perceptions, our thoughts, and our loved ones are analog. That is, most of our experience has not been translated into ones and zeroes, which are the building blocks of digital technology. Experience, almost by definition, is directly lived, unmediated by a screen.

    Tech giants wish to change all that. They share the desire to digitize the world because it is an easy way to gain more ground, to expand by enlarging the house. In this sense, digitization is the new colonialism. Digitization is the way to grow an empire in the twenty-first century. Everything analog is a potential resource — something that can be digitally conquered and converted into data and then traded, directly or indirectly. That is why Google keeps coming up with new products. Maps? Chrome? Android? Those were not designed for you. They are all different ways of collecting different data from you. That is why Facebook and Ray-Ban have together come out with new glasses that have microphones and a camera: more “data capture,” which in reality means the conquest of life by corporate avarice. That is why Apple is launching an augmented reality product, and why Microsoft is proposing a platform that creates three-dimensional avatars for more interactive meetings. And why Facebook — sorry, Meta — is insisting on its metaverse. 

    The tech titans assure us, of course, that their new inventions will respect our privacy. What they fail to mention is what I call the Iron Law of Digitization: to digitize is to surveil. There is no such thing as digitization without surveillance. The very act of turning what was not data into data is a form of surveillance. Digitizing involves creating a record, making things taggable and searchable. To digitize is to make trackable that which was beyond reach. And what is it to track if not to surveil?

    A good example of the close link between tracking and surveillance is the AirTag. In 2021, Apple launched the AirTag: a small coin-like device with a speaker, a Bluetooth antenna, and a battery, designed to help people keep track of their items. You can attach an AirTag to your keys and link it to your phone, and if you lose your keys, the device will ping Apple products around it and use Bluetooth to triangulate its location, which you can see on a map on your phone. The AirTag can also beep to let you know where it is.

    Keeping track of your keys seems innocent enough, but the AirTag is designed for tracking in general. You can track a wallet instead of keys, or a purse — and not necessarily your purse. Privacy and security experts warned Apple that AirTags would be used for stalking. In response, Apple said it had implemented a notification feature that alerts people with iPhones if there is an AirTag following them. But this measure is insufficient in various ways. First, many people don’t have iPhones, and if you have an Android phone you have to download an app to be notified; the vast majority of people have not downloaded it and will likely not download it. You might think that the phone notification is not necessary, because AirTags are meant to start beeping at a random time between eight and twenty-four hours after they have been separated from their paired iPhones, but the beeping is so faint that people might not hear it. Moreover, eight hours is plenty of time for a stalker to follow and find his victim. Even if you have an iPhone, my own experience is that there is no guarantee that you will be notified about an AirTag that is tracking you. A few months ago my brother and I rented a car from a peer-to-peer network. After a few hours of driving the car, my brother’s iPhone notified him that there was an AirTag around. The owner of the car had placed it in a locked glove compartment. My iPhone, however, never notified me of the AirTag — even after having been near the car for more than twenty-four hours. We never heard any beeping.

    The New Jersey Regional Operations & Intelligence Center issued a warning to police that AirTags posed an “inherent threat to law enforcement,” as criminals could use them to identify officers’ “sensitive locations” and personal routines. One year after their launch, there were at least 150 police reports in the United States mentioning AirTags, and recently, one murder case. That might not seem like much, but cases are likely in the thousands, given how many people might not notice they are being tracked or might not report it to the police. Not that reporting it to the police is of great help. Police often don’t know what to do about it; sometimes they don’t even take a report, which leaves vulnerable people (women, most often) unprotected. 

    Stalking affects an estimated 7.5 million people in the United States every year and, not surprisingly, it is on the rise. Last year a study by the security company Norton found that “the number of devices reporting stalkerware on a daily basis increased markedly by 63% between September 2020 and May 2021.” We are producing more and more technology to track — of course stalking is on the rise! To expect anything different would be to engage in self-delusion. In the pre-internet age, it was expensive, effortful, and risky to spy on someone. Today, you can buy an AirTag for $29.

    What is most striking about the AirTag example is how foreseeable these issues were. It’s not that the AirTag was misused in any surprising or imaginative way. When an AirTag is used for stalking, it is being used exactly according to its design. Some dual uses of technology are surprising. Gunpowder was originally designed for medicinal purposes — who would have thought it might change war forever? But tracking technologies are designed to track — and tracking is surveillance, and surveillance amounts to control. Human beings are social beings, which means that most of the time what we are most interested in is other people. We should hardly be surprised when tracking technology is employed to track people, the most salient element of most people’s lives. AirTags are the tracking device par excellence. They are designed to track and to do nothing else. Yet smartphones, for all their many uses, are also tracking devices. Your phone can make calls and take photographs, but above all it collects information about you and others.

    Too many people enthusiastic about digital technology are under the impression, as convenient as it is misguided, that if people consent to data collection, and if the data processing happens within their own phones or computers, there is no problem with privacy. If only it were so simple. There are at least two reasons why the collection of personal data on our devices still raises privacy issues.

    First, there is no informed consent in data collection. The consent we give is neither consent, because it is not truly voluntary, nor informed, because no one has any idea where that data may end up and what inferences may be drawn from it in the future. We are forced into “consenting” because if we do not consent we cannot be full participants in our society. There is no leeway for negotiation in platforms’ “terms and conditions.” It’s their way or the highway, and their way can change at any time and without warning. But we could not give informed consent even if we had the chance, because data is so abstract and unpredictable in the kinds of uses it may have, and the kind of inferences it will be able to produce, that not even data scientists can give informed consent. No one knows what consequences today’s data collection will have.

    Second, data creation is itself morally significant. The term “data collection” is somewhat misleading, in that it seems to suggest that to collect data is to assemble things that are already there. But data are not natural phenomena, like mushrooms that we find in the forest. We do not find data. We create data. Data collection implies data creation. And that act of creation is a morally significant decision, because data can be dangerous. Data can tell on us: whether we are thinking about changing jobs, whether we are planning to have children, whether we might be thinking of divorcing, whether we might be considering having an abortion. Data can harm people. For this reason, data creation carries with it a moral responsibility and a duty of care towards data subjects. 

    “What privacy problem can there be if the data is on the user’s encrypted phone?” a tech executive asked me once, assuming that users are in control of their phones, and ignoring the many examples that show otherwise. Our phones have a life of their own. They send data to third parties without us even realizing it, for starters. Every phone connected to the internet is hackable. Domestic abusers can take advantage of technologies to control their partners and their children. If an abuser forces you to share your password, the data that your phone has created without your asking it to (where you have been, whom you have called, etc.) can work against you. A TSA officer can ask you to unlock your device at the border and can download your data. That can happen even if you are American, and even if it is your work phone, in which you have confidential professional data. The police can ask you to unlock your phone too. And who can guarantee that an insurer will not ask you for access to that data in the future? If you do a commercial DNA test, even if it was only for fun, you are obligated to disclose it to your insurer. Can we be sure insurance companies will not ask for access to our smartwatches or smartphones some day? As soon as personal data has been created and stored, there is a privacy risk for the data subject, which then spills over into a risk to society.

    The risks to society are significant and varied. They go from national security (all that personal data can be used to extort public officials and military personnel, for instance) to threats to democracy, which will be my focus here.

    Just like the old colonialism, digitization carries with it a certain ideology that it seeks to impose. It comes with ideas of what progress looks like. Old colonialism imposed a certain language, etiquette, clothing, social institutions, and ways of life. New colonialism imposes code, exposure as etiquette, a weakening of old social institutions, and ways of life that lead to societies of control.

    Technology is never neutral. Tech companies find it convenient to present their products as neutral tools, but marketing bears little relation to truth. Artifacts inevitably embody values. We make artifacts so that they do something for us, and we wouldn’t bother making them if we did not value whatever it is that they do. Since technology is designed with a purpose in mind, artifacts end up having affordances. An affordance is what the artifact invites you to do. It is an implicit relationship between the designer and the user through the object designed. A chair affords sitting. We design things like buttons and handles to match our bodies, perceptual systems, and desires. A gun affords threatening, hurting, and possibly killing; it does not afford cooking. Pans and skillets afford cooking. Surveillance tools afford control; they afford the chance of keeping a close watch on something or someone. A camera allows you to watch anyone who appears in its purview. And a camera is a tool for surveillance irrespective of whether the footage is encrypted and in your phone. This is not to imply that encryption is not important. It certainly is, because it adds very necessary security to sensitive data. But no amount of encryption will strip a camera of its affordance to surveil.

    Contemporary surveillance tools are all too often a double-sided mirror, which not only enables you to watch others but also enables others to watch you. They are often also camouflaged as some other kind of tool, like a phone or a TV. Before the age of the internet, surveillance tools were mostly one-directional. A Stasi agent monitoring a suspect in East Berlin through a wiretap could listen to her target without thereby opening the possibility of being wiretapped herself. But the internet allows for multidirectional flows of information. You might buy an Amazon Ring camera to watch whoever gets near your door, but that device allows Amazon (and your housemates) to learn things about you. It can track when you leave your home, and when you come home and with whom. It can also be used to inform the police (in some cases without your permission and even without a warrant). And anything that can be online is hackable, so you are taking on the risk that criminals will access your footage, for example, to figure out when you are away so they may rob your home.

    Your Ring camera is not only surveilling you — it is also watching and listening to your neighbors. Amazon has recently rejected the request made by Senator Ed Markey that the company introduce privacy-preserving changes to its doorbell camera after product testing showed that Ring routinely records audio conversations happening as far away as the opposite sidewalk. Your neighbor could be recording the conversations that you have at your doorstep or driveway and could post them online. If you use a screen door and keep your front door open, a Ring device could be recording the conversations you have in your living room. The potential for blackmail, stalking, and public shaming is immense.

    Other surveillance tools are much less obvious than a camera. Take something like Alexa. It’s a speaker that plays music. It is a timer. It can read you the news. It can allow you to order all kinds of products. It doesn’t look or feel like a surveillance machine, but it is keeping a close watch on you. Amazon wants to turn Alexa into an appliance that can predict what you want. For it to accomplish its task, it has to know you very well. Alexa collects data from what you say and shares it with as many as forty-one advertising partners. If you have not opted out, human beings might be reviewing what you tell Alexa. And, sure, you can have your data periodically deleted and opt out of human review, but your data will still be used to train Alexa, whether you like it or not. 

    In more than one out of ten transcripts analyzed, Alexa “woke up” accidentally and recorded something surreptitiously. The same thing happens to other digital assistants. An Apple whistleblower confessed to having “heard people talking about their cancer, referring to dead relatives, religion, sexuality, pornography, politics, school, relationships, or drugs with no intention to activate Siri whatsoever.” The police might be interested in getting access to that data. Alexa recordings have already been used in all kinds of legal cases, from proving infidelity in a divorce case and identifying drug users in a household to providing evidence in murder cases. If the police can access recordings made by our devices at home, how is that different from having the police living under your roof? We would never be at ease having the police living in our homes, so why do we invite Alexa in? Aren’t we uncomfortably close to building a police state, or at the very least building the structure that could support an almost omniscient police state?

    In the 1990s, we owned the objects we bought. Today we still pay for our phones and doorbells, but they work for other people, and often against our own interests. And of course it’s not just AirTags, smart doorbells, smartphones, and Alexas. It’s your smartwatch, and your smart TV, and car, and electricity meter, and kettle, and laundry machine. Everything “smart” is a spy. And while every piece of data may seem uninteresting and innocuous, you would not believe how precise a picture emerges from joining the dots of all those data points. 

    Data creation and data collection will only increase if we continue the trend towards augmented and virtual reality. These technologies will want to collect much more data about everything, from your indoor spaces to the movement of your eyes. Eye-tracking technology will be crucial in creating a rich digital environment. It is likely that virtual reality will mimic human sight, which focuses on something and blurs the background. If everything is equally salient, it is harder to navigate your surroundings and you can easily get motion sickness. To simulate natural visual experience by rendering low-quality images in your peripheral vision and high-quality images where you are looking, the technology needs to identify what you are paying attention to. Eye-tracking is the most important source of information for that. Relatedly, eye-tracking can be used to increase the user’s ability to direct and control her experience.

             Unfortunately, your gaze can be incredibly revealing. Your eye movements, iris texture, and pupil size and reactions can inform others about your identity (through iris recognition), state of mind (e.g., if you are distracted), emotions (e.g., if you are afraid), cognitive abilities (based on factors like how long you look at something before acting), your likes and dislikes (including your sexual interests), your level of fatigue (through analyzing your blinking), whether you are intoxicated, and your health status (by looking for patterns of eye movements that might be symptomatic of problems such as Alzheimer’s or schizophrenia). Even if some of these inferences might be scientifically questionable, experience suggests that companies are likely to try their luck with them anyway.

    With so much personal data being created and collected, it is becoming more difficult to avoid surveillance. Even if you leave your phone at home (I know, a big if), you might still be caught by dozens of surveillance cameras as you go about your day. If we plaster our cities with sensors of various kinds, there is no opting out or escaping it. The danger is in the long term. Surveillance is a slow-acting poison. Its consequences are not immediately apparent. All of which leads to the surveillance delusion: the mistaken belief that surveillance has many advantages and no significant costs. For every individual decision, surveillance can seem like an attractive solution in the short term, when we imagine that all goes exactly as planned: it seems to keep us safer, it helps us track what we care about. But the long-term and systemic effects of surveillance are often overlooked. Under the surveillance delusion, only the benefits of surveillance are valued, and surveillance is understood to be a convenient solution to problems that could be solved through less intrusive means. But surveillance often creates weightier problems for democracy in the long run than the ones it can solve.

    Democracy is a complex house with many pillars sustaining it, and it can crumble so slowly that we might not know immediately when we are undermining it. Journalism, for example, “the fourth estate,” has long been considered an important pillar of democracy. Citizens have to be well informed enough about their society to be able to make autonomous democratic decisions, such as whom to vote for. When we reduce privacy, we weaken journalism. In July 2021, a leak revealed that more than fifty thousand human rights activists, academics, lawyers, and journalists around the world had been targeted by authoritarian governments using Pegasus, a hacking software sold by the Israeli surveillance company NSO Group. It is probably not a coincidence that the most represented country among the people who were targeted with spyware, Mexico, is also the deadliest country in the world for journalists, accounting for almost a third of journalists killed worldwide in 2020. When journalists do not have privacy, they cannot keep themselves or their sources safe. As a result, people stop going to journalists to tell their stories, and journalists quit their jobs before they lose their lives, or they focus on safe stories, and investigative journalism slowly dies, thereby gravely hurting democracy. 

    Some people think that if surveillance is done by corporations and not the government, the concern is lessened. Others think the opposite: that if surveillance is done by the government and only by the government, we will be safe. Both views are wrong: corporate surveillance is as dangerous as government surveillance and vice versa, and even peer-to-peer surveillance undermines ways of life that are supportive of freedom and democracy. 

    Giving too much personal data to governments will grant them too much power, which can support authoritarian tendencies. As I have argued, surveillance tools afford control, and when governments hold too much control over the population they become authoritarian. You might happen to trust your current government, but you cannot be sure that you will trust the next government. And you cannot be sure that a foreign power will not hack the data held by your government, or even invade your country. One of the first stops for Nazis in a newly invaded city was the registry, because that is where the personal data that would lead them to Jews was held. The best predictor that something will happen in the future is that it has already happened in the past, and personal data has already been used to perpetrate genocide. A contemporary Nazi regime with access to the kind of fine-grained data we are collecting would be nearly impossible to defeat. That alone makes surveillance reckless. China is using its surveillance apparatus against “enemies of the state”: from minorities such as the Uyghurs and the Tibetans to the defenders of democracy in Hong Kong. We must dismantle architectures of surveillance before they get used against us.

    Corporate surveillance is just as much of a problem. First, any data collected by companies can — and often does — end up in the hands of governments, whether through governments purchasing data, legitimately acquiring it (e.g., through a warrant or subpoena), or hacking it. In practical terms, corporate and government surveillance are indistinguishable. Moreover, corporations do not have our best interest at heart, and these days they are certainly not guardians of democracy or the common good. Thanks to corporate surveillance you can be unfairly discriminated against for a job, or insurance, or a loan. And personal data can be used to produce personalized propaganda, pit citizens against one another, and undermine civic friendship and democracy. Companies, after all, think of themselves as answerable only to shareholders. 

    Corporate surveillance is all the more worrying in the case of companies that can become more powerful than entire countries. Once again, this worry gives us reason to learn from old colonialism. At its height, the East India Company was the largest corporation in the world, and it had twice as many soldiers as the British government. Among its many sins were slave trafficking, facilitating the opium trade, exacerbating rural poverty and famine, and looting India. A senior official of the old Mughal regime in Bengal wrote in his diaries: “Indians were tortured to disclose their treasure; cities, towns and villages ransacked; jaghires and provinces purloined.” So it’s not only that powerful corporations can violate human rights. To some extent, they can also act like states when they are the protagonists of colonialism. As William Dalrymple puts it,

     

    We still talk about the British conquering India, but that phrase disguises a more sinister reality. It was not the British government that seized India at the end of the 18th century, but a dangerously unregulated private company headquartered in one small office, five windows wide, in London, and managed in India by an unstable sociopath — Clive.

    Just like at the end of the eighteenth century, corporations are leading colonialism in the twenty-first century. This time round it is big tech doing the looting (of our privacy, at the very least). They are the entities setting the agenda and imposing a culture of exposure around the world. Big tech companies benefit from our spending as much time as possible on their devices and platforms, sharing as much personal data as possible — which is why they sell the idea of exposure as a virtue: tell us what you feel, where you go, what you eat, what you think about other people, what worries you, and how we can make money off you. And if you don’t want to tell? Well, that must be because you have something to hide, which in big-tech-speak is not about protecting yourself from wrongdoers but about being a wrongdoer yourself. Big tech colonialism shames us into exposure for profit, and in doing so it poisons the public sphere.

    Cultures of exposure are another good example of how surveillance leads to control. The pressure to overshare encourages social vices such as stalking and witch-hunting. If everyone is pressured into exposing their opinions and habits, it is a matter of time before someone finds some of them objectionable and starts hunting people for their views. It is interesting how something that used to be regarded as inappropriate — exhibitionism — has now morphed into being considered a social imperative — transparency. Some measure of transparency is certainly appropriate when it comes to institutions — but not when applied to individual citizens. Both exhibitionism and social policing cause “either-you’re-with-us-or-against-us” mentalities and thereby jeopardize civic friendship. 

    Liberal democracies aim to allow as much freedom to citizens as possible while ensuring that the rights of all are respected. They enforce only the necessary limits so that citizens can pursue their ideal of the good life without interfering with one another. But for a liberal order to work, it is not only governments and corporations that have to give citizens a space free from unnecessary invasions; citizens have to let one another be as well. Civility requires that citizens exercise restraint in the public sphere, especially regarding what we think of one another. To expect people to be saints is unreasonable. “Everyone is entitled to commit murder in the imagination once in a while,” as Thomas Nagel has remarked. If we push people to share more than they otherwise would, we will end up with a more toxic environment than if we encourage people to edit or curate or limit what they bring into the public sphere. A culture of exposure invites us to share our imaginary acts of murder, needlessly pitting us against each other. Sparing each other from our less palatable facets is not a vice, but a virtue. Protecting privacy — our own and that of others — is a civic duty.

    Totalitarian societies tend to match institutional surveillance with peer-to-peer surveillance to achieve near-total control of the population. During China’s Cultural Revolution, people were encouraged to denounce their neighbors and even their family members. Children sent their parents to their deaths. The same thing happened in Stalin’s Soviet Union. The East German Stasi used an astonishingly high number of informants to infiltrate the general population. When we use social media for trolling, witch-hunting, and publicly shaming others, we behave more like subjects of totalitarian states than like citizens of free societies.

    We resist the colonialism of digitization partly through culture. We defy digital colonialism when we value the analog, the unrecorded, the untracked. Tibetan Buddhist monks have a tradition of spending days creating beautifully intricate mandalas using colored sand. When they finish their work of art, they sweep it all away in a ceremony. The sand is collected in a jar which is wrapped in silk and taken to a river, where it is scattered. Sand mandalas are a homage to impermanence. Unlike paintings, which strive to resist the passage of time, sand mandalas are there to remind us that there is beauty in the ephemeral. 

    We challenge digital colonialism when we enjoy life without wanting to freeze it into a photograph. We resist totalitarianism when we decline to publicly shame someone for a mistake that anyone could have made. We preserve intimacy when we allow a conversation to go unrecorded. We stand up for democracy when we buy a paper book at a bookshop using cash. 

    Yet culture is not enough. We also need the right technology. Architectures of surveillance afford control over the population. Our current technology — all of it the result of engineering and corporate decisions, and none of it inevitable in its present configurations — is priming society for an authoritarian takeover. Analog technology is more respectful of citizens. We could also make digital technology less intrusive by creating and collecting less personal data, by periodically deleting data, and by improving our cybersecurity standards. In a global context in which a country such as China is exporting surveillance equipment to around one hundred and fifty countries, the job of liberal democracies is to be a counterweight to that authoritarian influence by exporting privacy through culture, technology, and legal standards.

    We need the right regulation to match culture and technology, because collective action problems can only be solved through collective action responses. For starters, we should ban the sale of personal data. As long as personal data can be bought and sold, companies will not resist the double temptation of creating and collecting as much of it as possible, and then selling it to the highest bidder. The trade in personal data is jeopardizing democracy through personalized propaganda. We do not sell votes, and for many of the same reasons we should not sell personal data. 

    We should also limit the purview of the digital. Asking technology companies not to digitize the world is like asking builders to please refrain from paving over natural spaces. Unless society sets legal limits, profit-seeking will reign. Corporations will sell our democracies if it is lucrative enough and we let them. Governments create protected areas to restrain the impulse to build over every square inch. We need similar protected areas from surveillance. It is in the very nature of big tech to turn the analog into digital, but turning everything into a spy is a threat to freedom and democracy. Full digitization equals total surveillance. There is some data that is better never created. There is some information that is better never stored. There are some experiences that are better left unrecorded.

    Just over a decade ago, enjoying digital technology was a luxury. Increasingly, luxury is being able to enjoy space and time away from digital technology. Spaces that are free of digital technology stimulate deeper connections between people, more honest conversations, free experimentation, the enjoyment of nature, being grounded in our embodiment, and embracing lived experience. That is why Silicon Valley elites are raising their children without screens. 

    We need urgently to defend the analog world for everyone. If we let virtual reality proliferate without limits, surveillance will be equally limitless. If we do not set some ground rules now on what should not be digitized and augmented, then virtual reality will steamroll privacy, and with it, healthy democracies, freedom, and well-being. It is close to midnight. 

     

    The Autocrat’s War

    The Emperor Nicholas was alone in his accustomed writing-room in the Palace of Czarskoe Selo, when he came to the resolve. He took no counsel. He rang a bell. Presently an officer of his Staff stood before him. To him he gave his orders for the occupation of [the Danubian] Principalities. Afterwards he told Count Orloff what he had done. Count Orloff became grave, and said, “This is war.” 

    Alexander William Kinglake

    The Invasion of the Crimea, 1863 

    Alexander William Kinglake, the nineteenth-century British travel writer and historian who published a history of the Crimean War in eight volumes, could hardly have known how and in what surroundings Nicholas I made the fateful decision that caused the declaration of war by the Ottomans. In the imagination of nineteenth-century historians and writers, wars were the products of high politics, and the Crimean War, one of the most senseless, ridiculous, and tragic defeats in Russian history, was commonly blamed upon the Russian tsar and his abysmal vanity, arrogance, religious fanaticism, and nationalism. Court historiographers spilled a lot of ink trying to exonerate Nicholas I and shift the blame for launching the bloody war onto Russia’s treacherous allies and insidious rivals.

    It is therefore even more surprising that Nikolai Chernyshevsky — Russia’s first revolutionary democrat, who apparently read Kinglake’s volume in his prison cell at the Peter and Paul Fortress in 1863 — also thought that the tsar was not the guilty party: “Who shed these rivers of blood? … Who? Oh, if only conscience and facts had allowed us to think ‘the late sovereign,’ how good this would have been! The late tsar is long dead, and we would not have to worry about Russia’s future…. But, my dear reader, neither the dead tsar nor the government is guilty of the Sevastopol war.” 

    According to Chernyshevsky, the main suspect was the Russian educated “public,” who had laid the blame on the dead tsar and continued to live on without punishment or remorse: “The public is immortal; it does not resign, and there is no hope that this persona that caused the Crimean war ceases to represent the Russian nation and to have great influence upon its fate.” Showing no deference to the greatness of Russian poets and writers, including Pushkin, Chernyshevsky blamed them for impressing on the minds of light-minded Russians the fantasies of taking control over Constantinople and beating the Ottomans on their own land.

    Nobody, for sure, wanted the war, and only when they kissed their loved ones farewell did the same people who had carelessly joked about the “Russian Bosporus” understand what the war was about. Russia suffered a humiliating defeat, senselessly wasting thousands of lives and millions of rubles. Yet the horrors of the Crimean war, even if only seen through the eyes of Russian soldiers and not their Turkish (or British and French) counterparts, were soon forgotten. 

    Not long after the shameful debacle, the government approved the establishment of a “Slavic committee” in Moscow that aimed to “prevent” and anticipate Western influence upon the Southern Slavs of the Ottoman Empire. Twenty years later, Nicholas’ son Alexander II waged another war against the Turks, claiming to protect the Christian population of the Ottoman Empire. The second Eastern war in 1877–1878 was a military success, but most importantly, it was a propagandistic triumph that took the question of responsibility for another imperialist adventure off the table. Clearly the government had learned the lesson of the Crimean embarrassment: dealing with the questions of causality and responsibility had to be an integral part of the war effort and strategy.

    The catastrophic war against Ukraine that started in 2014 and entered a bloodier phase in February 2022 has already produced heated debates about its causes. The question of whether this is “Putin’s war,” or “Russia’s war,” or “the Russians’ war” echoes Chernyshevsky’s dilemma, but the answers, usually emotional and spontaneous, express the incomprehensibility of violence rather than a serious attempt to understand the roots of the disaster. Writers habitually compare Putin’s Russia to Hitler’s Germany, drawing parallels between the lethargic character of the Germans’ denial of Nazi crimes and the Russian public’s support of the war in Ukraine. While this comparison points to a plausible diagnosis — a peculiar intellectual anabiosis of society — the causes of the disease in its respective settings are most likely different. In any case, current debates about whom to blame often simplify the issue, operating with imprecise categories and ignoring the context. Scholarly analysis will have to frame the problem more broadly and more sharply, considering the role and responsibility of the autocrat and the ruling clique not only in waging the war but also in turning the majority of the population into their supporters and accomplices.

             While a cold and dispassionate analysis of the genesis of the current war may seem improbable at the moment, there is one thing that we can do: look back at past conflicts and analyze how Russia’s wars usually began. This comparison suggests that the formulas discussed above — “one man’s war” or a “nation’s war” — are themselves the products of the rhetorical attempts to either celebrate or exonerate rulers and to shift responsibility for waging the conflicts, either successful or failed, onto society. Wars belong to a particular category of events that are always shrouded in mythology: state propaganda doubles its efforts when it deals with armed conflicts. In the panoply of myths, one persistent trope stands out. It describes the archetypal scenario of a war’s outset; and Russia’s failed wars were not only those that Russia lost militarily, but also those that did not follow the prescribed scenario, the ones that laid bare the ruler’s personal role. To deal with the problem of causality and responsibility, however, it is important to distinguish the rituals of launching wars from the actual political mechanisms of their enactment. 

    As the war in Ukraine grinds on, it is illuminating to consider the precedents of Russia’s imperial wars of the nineteenth and early twentieth centuries, so as to trace how the wars began, how those beginnings were described, and what those beginnings tell us about the range of responsibility for unleashing violence. Despite the time lapse, the comparison between the politics of war in imperial Russia and in contemporary Russia is useful and legitimate: as Putin’s persistent references to the Russian imperial legacy demonstrate, he intentionally and unintentionally emulates the old mechanisms of autocratic governance. Wars, and not domestic reforms, however “great” they may have been, represented the main mechanism of legitimation in autocracies. Almost all the rulers of the Romanov dynasty fought at least one war during their rule. It is reasonable, therefore, to suggest that autocracies do not merely share a general inclination toward violence, but also display similar mechanisms of geopolitical decision-making. At the outset of war, the key moment of every monarch’s rule, an autocrat claims a complete authority that in peaceful times may appear limited and constrained.

             This complete authority, the way in which war is used to strengthen dictatorial power, may not be put fully on display. To justify war, an autocrat may cite an alleged provocation from below or a popular demand to which he responds. He may shift the burden of responsibility for human losses onto his advisers while accruing to himself the political benefits of victories. For this reason, the real mechanisms of war politics should be critically examined. And there is the additional question of the role of society. Does it bear responsibility for the violence, as Chernyshevsky thought? Does society have agency in an autocratic state, and does the autocrat take “public opinion” into account? Additionally, is collective responsibility a useful category, or should only individual perpetrators or groups and organizations take the stand in the makeshift court of history? 

    Let us begin with the role of the autocrat. In Timothy Frye’s wise and counter-intuitive words, “Recognizing Putin as an autocrat … brings into sharp focus the inherent limits of his power that are common to autocratic rule.” Historians of Russian autocracy concur with this observation: at no point in Russian history, they say, was a Muscovite tsar, an empress, or an emperor fully “autocratic” in making their choices and decisions. Boyar clans, unofficial parties at court, groups of ministers, court favorites, and lobbyists all worked toward forming the sovereign’s will and making him or her deliver the right decision at the right moment. Mikhail Dolbilov describes this process as “divining,” that is, “constructing” the ruler’s will and couching it in the language of laws and orders. 

    The interactions between the tsars and the advisers, however, were never one-sided: monarchs manipulated people masterfully, exploiting contradictions and conflicts among their favorites and courtiers, artificially sharpening disagreements, and shifting moral and political responsibility for crucial decisions onto representatives of the elites. In addition to these informal networks of power, autocrats also relied on a variety of political bodies. In both monarchical Russia and Putin’s autocracy, legislative chambers and political offices have existed mainly to legitimize the rulers’ decisions and to bind political elites by shared responsibility. There is also the class of technocrats and bureaucrats who bear the burden of governance and execute the monarch’s orders. We may conclude, therefore, that “autocratic will” is a complex set of mechanisms based on the preponderance of informal practices, customs, and rituals over rules and laws.

    But when it comes to wars, the traditional rituals and practices of decision-making prove moot. The role of government usually recedes into the background, and the autocrat surrounds himself with unofficial advisers, often shifting gears mid-course, dismissing trusted politicians and bringing forward new people and favorites from the inner circle. Such famous reformist bureaucrats as Mikhail Speransky and Sergei Witte both lost their leading positions on the eve of wars, in 1812 and 1902 respectively. Speransky’s fall was staged as a tragedy: sending off his State Secretary to Siberia, Alexander I cried and lamented that he was sacrificing his adviser for the safety of the empire in view of Napoleon’s imminent invasion. The replacement of Speransky with nationalist conservative politicians represented a part of the pre-war drama, but in reality it reflected the tsar’s efforts to strengthen his absolute authority.

    Witte’s story is also remarkable: a powerful minister of finance and the de facto head of the imperial government, he lost the political battle against a handful of unofficial advisers to the tsar, who pushed the emperor toward a more aggressive policy in the Far East that ultimately led to a war with Japan in 1904–1905. Describing this episode in his memoir, Witte portrayed the poor gullible tsar as a weak-willed child, easily manipulated by a group of unscrupulous and militant politicians. The story, fairly accurate in its details, nevertheless looks like the traditional scenario of wars’ beginnings, featuring competition between pro-war and anti-war factions at court fighting for the tsar’s attention. In these competitions, the only winner was usually the monarch: launching wars was a way to get rid of importunate reformers, to consolidate supporters, to shake up the political establishment, and to refresh the absoluteness of the tsar’s authority. In the case of the Russo-Japanese War, the trick failed, leading to revolution and the constitutional reform of 1905–1906 that stripped the tsar of some monarchical prerogatives.

    Until the end of the tsarist regime, war and foreign policy remained within the protected sphere of the tsar’s personal rule. The narratives of the wars’ origins, stripped of the long preamble of diplomatic negotiations, were consequently staged as dramas of the tsar’s choice between different camps, actors, and opinions. Even though unleashing the war was always the tsar’s personal choice and decision, the rhetoric and rituals of war dramas required the presence of others — noble defenders of the empire’s honor, faint-hearted bureaucrats, or evil instigators of violence. The scenarios of wars were designed in such a way that the autocrat was always at the center — and yet never alone. A typical plot of a “good” war as portrayed in the official myths always included 1) attempts at reconciliation and the ruler’s patient search for peace; 2) the people’s demand, and the advisers’ suggestion, to act more decisively; 3) the tsar’s reluctance to shed the blood of his soldiers; and, ultimately, 4) his determination to make the sacrifice for the sake of the empire’s honor and peace.

    It is important to keep in mind, however, that the conventional plot of the war drama differed from the real politics of autocratic decision-making. Consider the example of the Russo-Turkish war of 1877–1878. Although Putin has never referred to it (perhaps because Turkey remains one of Russia’s somewhat infidel allies), the official narrative of that war, as well as the model of interactions between its political actors, eerily resembles the situation on the eve of Russia’s invasion of Ukraine this year. The Russo-Turkish war is usually portrayed as a war for the liberation of the Slavs of the Ottoman Empire, a reaction to Turkish atrocities in Bulgaria and Herzegovina. According to the traditional narrative, Alexander II reluctantly agreed to step in after Russia’s diplomatic efforts to resolve the Eastern crisis had brought no results, while a collective “Europe” demonstrated a cold indifference to the fate of Christians in the Ottoman Empire. The lofty rhetoric of liberation was meant to hide the fact that Russia was ultimately the aggressor; and although it did not plan to incorporate Slavic lands into its territorial domain — it “only” wanted to create dependent satellite states in the Balkans — Russia ended up seizing a portion of Ottoman territory in Eastern Anatolia.

    The official narrative of the outbreak of the Russo-Turkish war resembles the libretto of a nineteenth-century opera with a plot developed on two levels — the crowd scenes (the Russian public cheering on the Slavs) and the main drama at the tsar’s court and within the imperial family. The crowd is stirred by the news about the Turks’ atrocities and it demands justice; and promptly produced paintings of pale-skinned Slavic women tortured by dark-skinned barbarous Turks provided the perfect scenery. Troops of volunteers march to the Balkans, and peasants and poor folk send in their modest donations to help their Slavic brethren. Meanwhile, in the imperial palace, the tsar is tragically torn between his human compassion and his duty as the Russian monarch to put the interests of his people above all. At court, there are two forces pulling in different directions: one, exemplified by the tsar’s top bureaucrats, argues for caution and restraint; another, represented by the Empress Maria Aleksandrovna and the heir to the throne, the future Alexander III, is fully on the side of the bellicose public and the champions of the Slavs. The war minister Dmitry Miliutin listens to the “outpourings” of the sovereign’s heart and records them in his diary. The tsar is sad and alone. His “hollow cheeks” and his eyes swollen with tears betray his sufferings; his health is deteriorating. He stoically withstands unfair criticism for indecisiveness and passivity, yet he is tormented by doubts. The tsar feels for the poor Slavs, yet he knows that blame for the losses and the casualties of war will always “fall on those who make the first step.” And the poignancy of the tsar’s dilemma contrasts with the coldness of Russia’s European counterparts, especially Austria-Hungary and Britain, who cynically pursue their political interests, feigning support for the Balkan Slavs. And after a few months of honest attempts to make the Turks change their policy, Alexander II concludes that Russia cannot avoid the war and resolves to act.

    The Russo-Turkish war became a turning point in Russian politics, marking the end of the era of the Great Reforms and prompting the reorientation of Russian domestic and international policies. Even if Alexander’s trepidations were sincere and he came to believe in Russia’s mission to liberate the Slavs, there is no doubt that he used the split within the elites to his political advantage and manipulated the groups at court as well as his family. Wars are almost never the outcomes of external factors alone: to understand their sources, one must also look inside and analyze the domestic tensions between the elites, the ruler, and the interest groups.

    When it comes to the current war in Ukraine, we do not yet have the luxury of first-hand accounts, but political analysts and intelligence reports suggest that Putin, just like Nicholas I, made this decision in solitude. Putin turned his obsession with Ukraine’s resilience in the face of Russia’s pressure into a state matter, a creed that keeps his close friends and allies together. Little is known about Putin’s inner circle; but the public appearances — and disappearances — of certain statesmen and politicians allow us to deduce that since the beginning of the invasion in February the narrow group of trusted friends and advisers has become smaller and tighter, while the role of technocrats has become entirely subsidiary. The government’s influence has been significantly reduced, and the role of the Security Council, chaired by the president but unofficially led by Nikolai Patrushev, Putin’s old friend and a former head of the FSB, has increased. All those who remained in power were compelled to publicly express their support of the “special operation in Ukraine.” 

    In this case, as in multiple episodes from the war history of the Russian Empire, the decision came from the autocrat who, as the ritual prescribes, solicited advice from the people and the elites. The meeting of the Security Council, broadcast on Russian state television, showed a handful of top officials who, shaking and in trembling voices, gave their consent to the invasion. Yet if we look beyond the ritual, wars in autocracies are always the ruler’s wars. When it comes to the decision to fight a war, the “inherently limited” power of autocrats becomes, in fact, unlimited. Wars represent a way to build and maintain autocracies, even if they can also lead to their collapse.

    Let us now return to Chernyshevsky’s question: does a people, or just the educated part of it known as “the public,” bear responsibility for unleashing the war? In the aftermath of the Crimean War, people considered themselves the victims of Nicholas I’s regime. Yet in 1877 the situation looked different. The second Eastern war was portrayed as a war by popular demand that was almost forcefully imposed on the tsar. True, the ideas of cultural and political patronage over Balkan Slavs in the 1870s had gained popularity in Russian society. The so-called Slavic Committees in Moscow, St. Petersburg, and provincial cities initially focused on strengthening cultural unity and on humanitarian help, but after the suppression of the rebellion in Herzegovina in 1875 they switched to more active support of the “insurgents,” sending supplies and recruiting volunteers to fight for the freedom of Slavic “brethren.” The flip side of this activity was, indeed, the rise of anti-Turkish sentiments. The government publicly demonstrated its neutrality and non-involvement; it also quietly tried to lean on the Slavic committees and to channel the outpouring of pro-Slav emotions in the right direction.

    But did these pan-Slavic circles — allegedly grassroots organizations supported and patronized by the ruling elite — represent “society”? A closer look at Russia’s political landscape of the 1870s shows that it is almost impossible to draw a line separating the “state” from the “public.” Russia did not have legal political parties until 1905, and the public sphere was closely policed by the government. As a result, a handful of conservative journalists and writers — Mikhail Katkov, Vladimir Meshcherskii, Ivan Aksakov — dominated the public mind, controlled the flows of information, and shaped the language of public debates. Their influence was not, however, limited to the public. Katkov and Meshcherskii were close to ruling circles: along with the tsar and other members of the elite, they were the main stakeholders in the campaign against the Ottomans. In contrast, liberal and democratic proponents of the Slavic cause were repressed, silenced, and exiled. The sad irony of the pro-Slavic campaign of 1876 lay in the fact that in the same year when the tsar resolved to support the autonomy of Slavs in the Ottoman Empire, he signed the ill-famed Ems edict prohibiting both the publication of books and theatrical production in the Ukrainian language, scornfully called a “dialect” in this law. As the Ukrainian historian and politician Mykhailo Drahomanov long ago pointed out, the “liberation” of Ottoman Slavs by the anti-liberal Russian Empire, where Slavic peoples, including Ukrainians and Poles, were deprived of even basic elements of autonomy, was a misnomer. Moreover, as Drahomanov observed, all talk about public initiative in support of the Slavs made no sense: the “unofficial Russia” that championed the campaign was indistinguishable from the “official” one.

    Drahomanov’s words could easily be used to describe the situation in contemporary Russia. Those who are now allowed to speak on behalf of society are closely linked to the state; those who disagree with the state’s policy have been silenced and jailed, or they have had to emigrate or go underground. Most of the millions of people who support Putin and his plans of imperial revival know little about the world outside Russia, or even outside their town; they have been raised on state propaganda and are unwilling to question the veracity of the myths that it produces. They are excited about military victories because different ideas have never been inculcated in their minds, neither by the schools nor by the Orthodox Church. Many of them live in misery and abandonment, and they seek emotional comfort not in kindness and compassion but in an illusory victory in a “special operation.”

    Wars have always been portrayed as moments of unification between the autocrat and the masses — a kind of political communion, a shared national epiphany. The war consensus transcends the bureaucratic buffer that, in peaceful times, stands between the ruler and his subjects. When, in the 1870s, nationalists celebrated these consummations of unity, others saw the attempt to drag simple folks into war politics as cynical and dangerous. As Prince Petr Viazemskii remarked, “The people cannot wish the war but inadvertently push toward it … The government silently lures the people into this political chaos, and they may pay for it dearly.” Putin justified the invasion by the sufferings of the Russian-speaking population in eastern Ukraine, which was allegedly vying for autonomy and aspiring to strengthen ties with Russia. The orchestrated pro-Russian demonstrations and marches in Donetsk and Luhansk replicated almost verbatim the process of building up pro-war, pro-Slav, and anti-Turkish sentiments in the 1870s, as did the public euphoria in Russia in response to the annexation of Crimea in 2014. In lieu of the “barbarous” Turks, there are the Ukrainian “Nazis,” who, according to Putin, tormented the population in eastern Ukraine. The people — duped by state propaganda — may express nationalist sentiments, but the autocrat never really takes them into consideration when he gives the order to attack.

    It is important, therefore, to make a distinction between the rhetorical references to public support and the reality of the decision-making process. The historian David McDonald, commenting on certain assumptions about the state’s responsiveness to nationalist sentiments in pre-revolutionary Russia, has rightly observed that these assumptions “neglect the finer mechanisms of causation and overlook the fact that imperial statesmen were highly reluctant to cede any voice at all to society in matters of foreign policy. While public opinion played an episodic role in the discussion of foreign policy, as a state matter, such issues could be considered only by professional officials responsible to His Imperial Majesty.” In autocratic orders, the notion of a war by popular demand is nonsense. 

    The ruler, in other words, does not care what an average Russian, or Russian society as a whole, thinks about Ukraine or the Ottoman Empire. Although a significant part of the Russian population supports the war now, it did not cause the war, and the majority had been opposed to the idea of armed conflict before the invasion. Of course, there were Russian writers, most notably Dostoyevsky, who penned pan-Slavic articles, and journalists who created racist images of Turks, and there have been Russian politicians and intellectuals who have haughtily refused to recognize the cultural and political sovereignty of Ukraine; and they are all responsible for endorsing violence. Every soldier who has pulled a trigger, launched a missile, or thrown a grenade is complicit; every governor or theater director who has voiced support for the “special operation” bears the guilt for the lost lives of innocent Ukrainians. But all the individual responsibilities for these actions do not add up to the collective responsibility of “the Russians.” The notion of collective responsibility allows war criminals and the outspoken supporters of violence to escape judgment. The “responsibility of nations” often means no one’s guilt.

    Does this suggest that Chernyshevsky was wrong in blaming “the public” and not the tsar for the horrors of the Crimean War? Not exactly. He was right in predicting that Russian educated society would fail to comprehend the simple thought that any war, victorious or not, is hideous, that war can be a source of glory and dignity neither for a man nor for an empire. This thought remained alien to the Russian nobility, which continued to seek honor on the battlefield, and, with the exception of Tolstoy, the holy fool of Russian literature, the thought did not find expression in literary works. It is therefore very important to understand how and why a hostility to war, or an aversion to it, failed to develop in a country where every single family has lost at least one member in one of the many wars fought in the last hundred years. Chernyshevsky was also right in pointing out the cultural hauteur of the Russian literary elite who inculcated in their “public” a sense of imperial superiority — over Turks, Europe, Ukrainians, and others. This sense of superiority now fuels support for the Ukraine war among contemporary Russians. As many commentators have already pointed out, Russia has as yet failed to go through the process of de-imperialization and reckoning with its imperial (pre-revolutionary and Soviet) past.

    Another theme that emerges with surprising persistence in the “who is to blame?” debates concerns the responsibility of the West for “provoking” Putin. The West must not repent for offending Putin and injuring his self-esteem, because doing so would only play into the autocrat’s hands. Putin’s propaganda openly justifies the aggression by alleging the hostility of Western powers who have turned Ukraine into a playground for their military operations against Russia. The motif of this putative Western threat is another cliché, copy-pasted from a typical scenario of Russian imperial warfare, in which almost every war, wherever it took place, was seen as a war against a collective “West.” Putin’s remarks last June about the world order as divided into two camps, namely sovereign states and their colonies, and his attempts to present this war as Russia’s defense of its sovereignty against the West, repeat almost verbatim the ideas of Russian nineteenth-century nationalists.

    Mikhail Katkov, one of the main proponents of war against the Ottomans, thought that only by isolating itself from the West could Russia avoid the sad fate of falling into economic and political dependence on Europe. Isolationism — the rejection of shared cultural, legal, financial, and political standards and values — appeared to be a way of regaining and strengthening Russian independence. Indeed, the war with Turkey in 1877-1878 eventually turned into a civilizational clash with Europe and ended two decades of Russia’s fine attempts at Westernization and reforms. The most visible manifestation of Russia’s anti-Westernism was the reactionary reign of Alexander III, his official Slavophilism and imperialist policy. For its part, the de-Westernization of Russia in 2022 will be remembered for the disappearance of McDonald’s, empty shopping malls, and the shortage of imported consumer goods — but there have been less visible and more profound changes in the systems of education, industry, and finance. Russian universities and academic institutions have been cut off from networks of international cooperation, investors have walked away from the country’s economy, and Russian producers have had to learn from scratch how to replace imported parts and machines.

    The Russian invasion of Ukraine seemed improbable until the last moment, because it defied rationality and threatened to ruin the Russian economy and inflict unthinkable losses on the Russian population. Yet its economic irrationality has been twisted by the autocrat to prove the unselfishness of the war’s goals, and to demonstrate Russia’s uniqueness and difference from the obnoxiously pragmatic and materialistic West. Prince Dmitrii Obolenskii expressed this mood on the eve of the Russo-Turkish war: “I know that we have no money. I know that the generals are bad…. But this does not matter, because the main question is, What are we?” As in 1877, Russian authorities in 2022 high-mindedly boast about their altruism, although the main burden of war, as always, falls on the shoulders of the poor. No one can predict the human and material costs — for Russia, Ukraine, and the entire world — of the current war, but we must make sure that this accounting is made and all the losses are tallied, and that the people who inflicted the losses bear the responsibility.

    The invasion of Ukraine has had profound effects not only on the physical dimensions of people’s existence but also on the way they experience time, place, and history. Historical planes have shifted, dumping Russia into a temporal pit without a future and with a questionable past. Putin, who directs this bloody drama, suspends the historical specificity of these events by constantly referring to his crowned predecessors and following the imperial scenarios of war, as if an atemporal pattern, a Russian destiny, is simply being re-enacted. This suspension of time is not accidental — for Putin’s regime, war has turned into a mode of existence, an endless present, an eschatological battle without a strategy and a timeline. Some of Putin’s critics have inadvertently fallen into his trap, mistaking his rhetoric for reality. Instead of studying Russia’s imperial past to understand the precise mechanisms of autocratic power and thereby untangle the jumble and mess of Putin’s ideas, they look back to the past in order to revert to meta-historical stereotypes and clichés so as to judge and accuse. The discourse about “the Russians’ war” is often built on poorly understood historical parallels and assumptions regarding Russians’ genetic propensity for violence and their inability to develop an inner sense of freedom. This invidious essentializing is the mirror image of the pro-war Slavophile nonsense about the mystical singularity of Holy Russia.

    The analysis of the causes of this terrible war should look beyond the rhetorical fog of Putin’s propaganda and include the serious treatment of the politics of war and the structure — the logic — of autocratic power. At what point, and why, does an autocrat resolve to initiate a war? Which elements and factors in the internal dynamics of an autocratic system trigger aggression? Why do the mechanisms of restraint not work? We must also begin a careful historical inquiry into how (and whether) Russian society has dealt with the problems of violence and responsibility. Ritual repentance on Facebook pages on behalf of the Russian nation will remain useless until we understand the actual causes of war. And when the time comes, the people responsible for the horrors of the current war will (I hope) face judgment, and courts will establish the guilt of individuals complicit in encouraging, supporting, financing, or justifying the war. There is a significant nexus, analytical and moral, between causality and culpability. 

     

    Taste, Bad Taste, and Franz Liszt

    I

    My title may appear provocative, but I doubt whether anyone is likely to disagree that of all the great composers Liszt is the one most frequently accused of bad taste, and also that the accusation has never threatened his status among the great. Indeed, as Charles Rosen once suggested, the accusation in some sense actually identifies Liszt’s particular position in the pantheon.

    Rosen put it in the form of a trumped-up paradox, saying of Liszt that his “early works are vulgar and great; the late works are admirable and minor.” Very cagey, this: Liszt’s most-admired works, say the Faust-Symphonie or the B-minor Sonata, came in between. Take away the invidious comparison, and take away the sophistry, and Rosen’s point still resonates. But take away the vulgarity, and Liszt is no longer Liszt. Reviewing the first volume of Alan Walker’s biography of Liszt in the New York Review of Books, Rosen went even further in his baiting, asserting that “to comprehend Liszt’s greatness one needs a suspension of distaste, a momentary renunciation of musical scruples.” And then, for good measure: “Only a view of Liszt that places the Second Hungarian Rhapsody in the center of his work will do him justice.” 

    That was not an endorsement of the Rhapsody, which Rosen, along with Hanslick and Bartók, thought “trivial and second-rate.” What made the provocation doubly surefire was the racial innuendo that tainted not only Liszt and the Rhapsody, but all who came in contact with them. Did not Pierre Boulez say of Bartók that his “most admired works are often the least good, the ones which come closest to the dubious-taste, Liszt-gypsy tradition”? And does that not go a long way toward accounting for Bartók’s overt hostility toward a tradition, that of the so-called verbunkos, on which he remained covertly dependent? The taint even tainted the tainter — all of which was simply too much for Alfred Brendel, who, exasperated, took Rosen’s bait:

    Though enjoying, once in a while, some of the Hungarian Rhapsodies and operatic paraphrases, I wince at Charles Rosen’s assertion [that] in the matter of taste, no composer could be more vulnerable than Liszt. . . . In contrast to Charles Rosen, I consider it a principal task of the Liszt player to cultivate such scruples [as Rosen bids us renounce], and distil the essence of Liszt’s nobility. This obligation is linked to the privilege of choosing from Liszt’s enormous output works that offer both originality and finish, generosity and control, dignity and fire. 

    I sympathize with Brendel’s aversion to Rosen’s deliberately annoying formulations, but I find Brendel’s fastidiousness insufficiently generous toward Liszt and the impulses that his work embodies, which, though not always noble, are undoubtedly great. Rosen came closer than Brendel did to pinpointing the fascination that Liszt exerted over his times, and continues to exert over us. Especially worthy of pursuit is Rosen’s most irritating pronouncement of all: “Good taste,” he teased, “is a barrier to an understanding and appreciation of the nineteenth century.” 

    If the remark grates, it is because of the aspersion it seems to cast on the century that now looms in retrospect as the greatest century of all for music — or at least as the century in which music was accorded the greatest value. But suppose we read the aspersion the other way — as a critique of good taste? Ever since reading the Rosen-Brendel exchange a quarter of a century ago, I have had an itch to use Liszt and his reception as a tool to situate good taste (along with greatness) in social and intellectual history, and to fathom the profound ambivalence with which virtuosity has always been regarded.

    So let me begin again, with another quotation — something that has been rattling in my head even longer, more than half a century now. When I was an undergraduate, I read Thomas Mann’s last novel, The Confessions of Felix Krull, Confidence Man. At one point the social-climbing title character receives guidance from a nobleman, the Marquis de Venosta, whose world he wants to crash. Among the many insights that the Marquis offers him is this: “You come, as one now sees, of a good family — with us members of the nobility, one simply says ‘of family’; only the bourgeois can come of a good family.” 

    What does this mean? What is the difference between “family” and “good family”? What it seems to come down to is that “family” is an existential category, while “good family” is an aspirational one. The bourgeoisie is the aspiring class. The aristocracy simply is. And so it is with “taste” and “good taste.” “Taste” is something the elect possess and exercise without calculation or necessary self-awareness. “Good taste” is exhibited rather than exercised: it is something attributed to the maker of deliberate and calculated choices in recognition of their correctness, as a mark of social approval. “Taste” is a matter of predilection, “good taste” is a matter of profession. A display of good taste is a mark of aspiration to social approbation, and the standard to which exhibitors of good taste must aspire is never their own. To show good taste is to seek admission to an elite station which the possessor of “taste” occupies as an entitlement. A show of good taste is thus never a mark of election; rather, it marks one as an outsider wanting in. It implies submission as well as aspiration, hence inhibition. Like Felix Krull, people who display their good taste are trying to crash a social world.

    Recall now the famous words that Haydn spoke to Leopold Mozart in February 1785:

    Before God, and as an honest man, I tell you that your son is the greatest composer known to me either in person or by name. He has taste, and, what is more, the most profound knowledge of composition. 

    Imagine for a moment that Haydn had said to Leopold not that Wolfgang “has taste,” but that “he has good taste.” The compliment would have crumbled. “Taste” (Geschmack), in the sense that Haydn used the word, was an existential category. Either you were of the elect or you weren’t; and if you did not have taste as a birthright you could not acquire it, even though you had “the most profound knowledge of composition.”

    But what did it consist of? In this context, clearly enough, “taste” was an unerring and intuitive insider awareness of what was fitting. The closest any musician came to enunciating such a definition may have been Johann Mattheson in 1744, at the outset of a chapter entitled “Vom musikalischen Geschmack” in a book devoted to the aesthetics of opera:

    Taste, figuratively speaking, is the inner awareness, preference, and judgment by which our intellect impinges upon sensory matters. If, as Pliny would have it, the tongue has a mind of its own, so the mind can be said to have its own tongue, with which it tastes and evaluates the objects of its attention. 

    In that figurative sense, “taste” was comparable to the securely inculcated breeding that the Marquis de Venosta had in mind when he distinguished “family” from “good family.” 

    Mattheson’s ingenious, opportunistic inversion of a dimly remembered Pliny provides a link between the gustatory and the derivative or conceptual meanings of the term, while also giving off an echo of its social history; for as soon as the word “taste” was elevated beyond its purely sensory meaning in the seventeenth century, it connoted an attribute of aristocracy. The sociologist Stephen Mennell locates that origin at the French court, where members of the old noblesse d’épée, threatened by the ever-aspiring, ever-rising bourgeoisie, secured positions at court as “specialists in the art of consumption” (at first of food), developing hierarchies of taste and codes of behavior that stressed the restraint of gluttony and refinement of table manners. Taste had become a metaphor for discrimination.

    The turn from food to art as the arena for the exercise of taste can be traced first in Italy. Giulio Mancini, the personal physician to Pope Urban VIII and a famous collector of fine painting, equated gusto and giudizio (taste and judgment) in his Considerazioni sulla pittura, an essay published in 1623. Half a century later, in 1670, the attempt to acquire taste without breeding was satirized for all time in Molière’s Le bourgeois gentilhomme. The butt of the satire could be described, long avant la lettre, as “good taste,” which was the quality or attainment to which Monsieur Jourdain aspired. Good taste, in effect, was imitation taste, not the real thing.

    The notion of taste as an absolute standard, sanctioned by a consensus of the capable (“men of sentiment”) and associated in the first instance with one of David Hume’s most famous essays, has persisted since the eighteenth century despite the rise of less intransigent definitions. Its staying power is attributable to the conviction, among the politically conservative, that (to quote Wye J. Allanbrook) “the agreement of cultivated people about what is good and beautiful was a force for the political cohesion of the community” and a support, or occasional pinch-hitter, for hereditary aristocracy. As Schiller emphasized in On the Aesthetic Education of Man in 1794, “No privilege, no autocracy of any kind, is tolerated where taste rules”; but that is because taste itself offered an alternative standard of excellence, working through positive rather than negative reinforcement (the promise of esteem replacing the threat of coercion) to internalize the pressure. Where its autonomy and universality are believed in, spontaneous fellow-feeling and disinterested fraternity can seem to rule. But such belief, far from spontaneous, must be cultivated, or rather, instilled. 

    A century and more after Schiller, T. S. Eliot echoed his sentiments when he defined “the function of criticism” as “roughly speaking, . . . the elucidation of works of art and the correction of taste.” This was the formulation of a man who would shortly declare himself to be “classicist in literature, royalist in politics, and Anglo-Catholic in religion.” The word for it, and it has become a fighting word, is elitism.

    Where Eliot went, Stravinsky tagged dependably behind. In the Poétique musicale, his own pinnacle of intransigence delivered at Harvard a decade later, in 1939–1940, Stravinsky devoted the last of his six leçons ostensibly to musical performance, but in fact he made it clear from the outset that the subject matter of the lecture, which outwardly took the form of a diatribe against virtuosos expressly intended as a correction of taste, was in fact d’ordre éthique plutôt que d’ordre esthétique — “of an ethical rather than of an aesthetic order.” At the height of his dudgeon, Stravinsky declared: “Whereas all social activities are regulated by rules of etiquette and good breeding, performers are still in most cases entirely unaware of the elementary precepts of musical civility, that is to say of musical good breeding — a matter of common decency that a child may learn.” And yet, when invoking the grand thème de la soumission, the “great principle of submission,” that runs like a thread through all six lessons, Stravinsky contradicts himself, proclaiming instead that “this submission demands a flexibility that itself requires, along with technical mastery, a sense of tradition and, commanding the whole, an aristocratic culture that is not merely a question of acquired learning.” There is your existential taste: something that one possesses as a birthright, as an aristocrat possesses (and is possessed by) “family.”

    How far this is, we are apt to think, from our colloquial concept of taste as mere personal preference, the thing that is proverbially beyond dispute. That definition, too, has a long history, going back to the anonymous Latin maxim — De gustibus non est disputandum — that everybody knows. That maxim, however, is less ancient than it might appear. It is by no classical author. Its origin, rather, is presumed to be medieval and scholastic by virtue of its concern to distinguish between matters open to reason and persuasion and those which philosophers, or at least scholastics, had better leave alone. As the economists George J. Stigler and Gary S. Becker put it, at the outset of a famous article in which they broke the old taboo and embarked on a path that led, for one of them, to the Nobel Prize:

    The venerable admonition not to quarrel over tastes is commonly interpreted as advice to terminate a dispute when it has been resolved into a difference of tastes, presumably because there is no further room for rational persuasion. Tastes are the unchallengeable axioms of a man’s behavior. 

    Taste as axiomatic (and professed) personal preference seems a bulwark of personal autonomy, a democratic or egalitarian notion. As Liszt himself once said, “It is a matter of taste whether the old or the new is more charming. Taste is quite certainly a personal thing.” But consider this story, which will bring us back to music. It comes from a famous pamphlet, Comparaison de la musique italienne et de la musique française, issued in 1704 by Jean Laurent Lecerf de la Viéville, Lord of Freneuse, in answer to a like-named pamphlet, Paralèle des Italiens et des Français, issued in 1702 by another French aristocrat, Abbé François Raguenet. As Lecerf relates, a courtier fond of the brilliance and grandeur of Italian music brought before King Louis XIV a young violinist who had studied under the finest Italian masters for several years, and bade him play the most dazzling piece he knew. When he was finished, the king sent for one of his own violinists and asked the man for a simple air from Cadmus et Hermione, an opera by his own court composer, Jean-Baptiste Lully. The violinist was mediocre, the air was plain, nor was Cadmus by any means one of Lully’s most impressive works. But when the air was finished, the king turned to the courtier and said, “All I can say, sir, is that that is my taste.” 

    The king did effectively put an end to the argument by invoking his taste, but was that because there can be no disputing tastes or because there can be no talking back to a king? Lecerf’s argument with Raguenet, who had waxed rapturous about the voices of castrati, was really all about authority, not taste. In disputes or assertions regarding tastes, authority has many surrogates. Among professionals, including musical professionals, the chief surrogate is experience. Consider this famous footnote from Johann David Heinichen’s thoroughbass treatise of 1725.

    If experience is needed in any art or science, it is certainly needed in music. . . . But why must we seek experience? I will give you one little word that encompasses the three basic requirements in music (talent, knowledge, and experience), and its heart and its outer limits as well, and all in four letters: Goût. Through application, talent, and experience, a composer needs to acquire above all an exceptional sense of taste in music. The distinguishing feature of a composer with well-developed taste is simply the skill with which he makes music pleasing to and beloved by the general, educated public; in other words, the skills by which he pleases our ear and moves our sensibilities. . . . An exceptional sense of taste is the philosopher’s stone and principal musical mystery by means of which the emotions are unlocked and the senses won over. 

    This is the kind of taste — something acquirable through labor and application (provided one has good instruction), hence available not only to the aristocracy of birth but also to an aristocracy of talent and training — to which Francesco Geminiani referred in the title of A Treatise of Good Taste in the Art of Musick (c. 1749), a title that on the surface might seem to offer a counterexample to the distinction between “taste” and “good taste.” In the body of the treatise, however, Geminiani (who had lived in London since 1714 and was writing in idiomatic English) usually inserts the indefinite article before “good taste.” Thus, at the beginning of the preface: “The Envy that generally attends every new Discovery in the Arts and Sciences, has hitherto deferr’d my publishing these rules of Singing and Playing in a good Taste”; and, at the end: “Thus I have collected and explain’d all the Ingredients of a good Taste.”

    That indefinite article does a lot of work: it is incompatible with both of the categories of taste with which we are concerned, whether with “taste” as the superior existential endowment Haydn attributed to Mozart, or with the “good taste” in which Liszt was held by Rosen and Brendel to be deficient. When you put Geminiani’s odd usage together with the title of his previous treatise, to which A Treatise of Good Taste was a supplement and on which it was dependent — that is, Rules for Playing in a True Taste on the Violin, German Flute, Violoncello, and Harpsichord (London, c. 1745) — it is clear that the two expressions “a good taste” and “a true taste” are interchangeable equivalents of “correct (or elegant) style.” And indeed, it turns out that the Treatise of Good Taste is merely a manual on embellishment, consisting of a table of ornaments followed by models for application, chiefly to familiar Scots airs furnished with a thoroughbass. As Robert Donington comments in his foreword to the facsimile edition:

    “Good taste” was almost a technical term of the period. It was used not merely for a refined and cultured attitude toward music in general; it was used for a refined and cultured ability to invent more or less improvised ornamentation for melodies often notated in plain outline, but requiring such ornamentation in order to be given a complete performance. 

    Corroboration of this usage in eighteenth-century English comes from Dr. Burney, who in his musical travelogue of 1771 defined “taste” as “the adding, diminishing, or changing [of] a melody, or passage, with judgement and propriety, and in such a manner as to improve it.” In short, therefore, and ironically, Geminiani’s brand of “good taste,” insofar as it implies the addition of impromptu passagework to written compositions, virtually coincides with the “bad taste” of which Liszt and his contemporaries would be accused a century after Geminiani’s time, and up to the present day. It did not take long for fashions to start changing. At the very end of his General History of Music, in the twelfth chapter of the fourth volume, published in 1789, devoted to the “General State of Music in England at our National Theatres, Public Gardens, and Concerts, during the Present Century,” the same Dr. Burney wrote off Geminiani’s guides to “a good taste” as having appeared “too soon for the present times. Indeed, a treatise on good taste in dress, during the reign of Queen Elizabeth, would now be as useful to a tailor or milliner, as the rules of taste in Music, forty years ago, to a modern musician.” 

    II

    Yet insofar as Geminiani offered instruction in correct practice, his good taste did imply submission to a standard, a matter of meeting expectations. The taste or ability about which Heinichen and Geminiani wrote was not the personal preference of any particular performer or composer, nor of the authors themselves, nor even the consciously formulated demand of the “general, educated public.” Effort and education can give us all equal access to correct style: the taste of one is (or ought to be) the taste of all. It is on the promise to impart that universal taste, which all successful composers must master, that the authority of Heinichen’s or Geminiani’s manuals depended. It was an authority that, in the guise of classicism, could become authoritarian. 

    Take, for example, Voltaire’s article on Goût in the seventh volume of Diderot and d’Alembert’s Encyclopédie, issued in 1757 — the same year as Hume’s seminal essay, but expressing what seems to be a pre-Humean formulation, in which l’homme du goût, “the man of taste” (compare Hume’s “men of sentiment” or the Marquis de Venosta’s “person of family”) is expressly equated with le connoisseur, the one who knows the rules of style as the gourmet knows the rules of the kitchen and the dining table. “If the gourmet immediately perceives and recognizes a mixture of two liqueurs, so the man of taste, the connoisseur, will see at a glance any mixture of styles” — and, of course, disapprove. The standard is one of purism, and failure to meet it constitutes le goût dépravé, debased taste, otherwise known more simply as bad taste. When Voltaire admits the phrase un bon goût, it is as the back-formed opposite of un mauvais goût. Only the latter can be personal. As an idiosyncrasy it is tantamount to a flaw that one must eliminate so as to restore the universal norm, which is simply le goût, with no qualifier. “They say there is no point disputing tastes,” Voltaire concedes:

    and this is right enough when it is only a matter of sensory taste, . . . because one cannot correct defective organs. It is different with the arts; as their beauties are real, there is a good taste that discerns them and a bad taste that does not; and the mental defect that gives rise to a wayward taste can often be corrected. 

    Here Voltaire anticipates Eliot: taste, for him, is no mere matter of fallible individual preference, but one of conformity to an established criterion, hence subject to correction. From there, Voltaire connects “good taste” to the idea of perfected style, or what literary historians would eventually christen “classicism”:

    The taste of an entire nation can be corrupted. This misfortune usually comes about after periods of perfection. Artists, for fear of being imitators, seek untraveled paths; they flee the natural beauty that their predecessors had embraced; there is some merit in their efforts; this merit covers their faults; the novelty-besotted public runs after them; it soon loses interest, however, and others appear who make new efforts to please; they flee even further from nature; taste disappears amid a welter of novelties that quickly give way one to another; the public no longer knows where it is, and it longs in vain for the age of good taste that will never return. It has become a relic that a few sound minds now safeguard far from the crowd.

    This wholly aristocratic, existential notion of “good taste,” ever resistant to destabilizing innovation, is a decreed taste, sanctioned by tradition. Still a child of the seventeenth century, Voltaire locates its source dogmatically in “nature.” D’Alembert, the editor of the Encyclopédie, in an appendix to Voltaire’s article, somewhat modernizes (that is, relativizes) Voltaire’s position by vesting the power of decree in “philosophy,” which at least implies human agency:

    In matters of taste, a smattering of philosophy can lead us astray, while philosophy better understood can bring us back. It is an insult to literature and philosophy alike to think that they could harm or exclude one another. Everything that pertains, not only to our way of thinking, but also to our way of feeling, is philosophy’s true domain. . . . How could the true spirit of philosophy be opposed to good taste? On the contrary, it is its strongest support, because this spirit consists in returning everything to its true principles, in recognizing that every art has its own particular nature, each condition of the soul its own character, each thing its own particular tint—in one word, that one should never transgress the limits of a given genre. 

    These extracts exhaust references to le bon goût (rather than the more usual goût, unmodified) in the Encyclopédie. The addition of the adjective does not change the meaning; “good taste” here does not differ from “taste” tout simple, the sense of suitability that Haydn recognized as Mozart’s mark of election. And that is because the philosophes located the criterion of correct discrimination not in the perceiving subject but in the object perceived, rightly apprehended according to “its own particular nature,” of which philosophy is the arbiter. To acquire taste, on the encyclopedists’ terms, one had to submit to their authority. It became a task for a new cohort of eighteenth-century thinkers to emancipate the notion of taste from that of external authority, while nevertheless remaining faithful to the idea of its universality or its status as what Kant called a sensus communis, a “common sense,” meaning “a sense shared by all.” This required some fancy skating.

    Kant’s solution was to posit that taste was subjective in that it concerned not the properties of objects but the pleasure or displeasure of contemplating subjects. Hence “it is absolutely impossible to give a definite objective principle of taste . . . for then the judgment would not be one of taste at all.” And yet such reactions were ideally universal because they derived from a faculty possessed by humans, only by humans, and by all humans. Within Kant’s careful definitions, all have taste, and all have the same taste. It must, therefore, enjoy “a title to subjective universality,” or what we now somewhat less paradoxically call intersubjectivity.

    Evidence of universality is to be sought in consensus, which must be discernible despite the great variety in subjective preference that strikes the casual observer. For Hume, this made it all the more imperative to seek, or establish, “a Standard of Taste: a rule, by which the various sentiments of men may be reconciled; at least, a decision, afforded, confirming one sentiment, and condemning another.” The problem for Enlightened theories of universal taste was that of outliers, people of ostensibly normal endowment who nevertheless diverged from the intersubjective consensus. Is it possible to speak of “wrong” taste, even if, as Kant maintained (and as everyone beginning with Hume seems to agree), “the judgment of taste is . . . not a judgment of cognition,” and therefore cannot be considered factual? If there can be wrong taste, then there can be bad taste; and if there is bad taste, then there can be normative good taste — something that can be aspired to. We are approaching the crux of our problem.

    The most ingenious attempt to account for wrong taste within a universalist theory of taste is found in the introduction to Edmund Burke’s famous Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, first published in 1757, the same bumper year that saw the publication of both the seventh volume of the Encyclopédie and Hume’s essay on taste. Having defined taste as “that faculty or those faculties of the mind, which are affected with, or which form a judgment of, the works of imagination and the elegant arts,” Burke invoked John Locke’s distinction between wit and judgment. “Mr. Locke,” he writes, “very justly and finely observes of wit, that it is chiefly conversant in tracing resemblances: he remarks, at the same time, that the business of judgment is rather in finding differences.” As we know from experience, wit is much the more pleasurable function, as the perception of resemblances is a matter of immediate sensibility, whereas the discrimination of differences requires expertise and mental effort. Thus, Burke argues, taste being a judgment, its exercise is more or less correct depending not upon what he calls “a superior principle in men,” but rather “upon superior knowledge,” in the sense of wide acquaintance.

    That is the crucial move. Once we postulate that taste is not a simple idea but a compound of sensibility and knowledge, it follows that a deficiency of taste can be the result of a deficiency in either category. “From a defect in [sensibility],” Burke writes, “arises a want of taste,” which is to say an inability or disinclination to render any judgment at all; whereas “a weakness in [knowledge] constitutes a wrong or a bad [taste].” This passage, coeval with Voltaire’s Encyclopédie entry but the work of a newer breed of thinker, constitutes, to my knowledge, the earliest recognition that there can be such a thing as bad taste, as distinct from a want of taste. The latter can only be deplored or pitied, as it was by Voltaire and by Mann’s Marquis de Venosta, whereas one can aspire, with Burke or Eliot, to correct the former.

    The consequences of this distinction are far-reaching, and baleful; and Burke, to his credit, did not flinch from them. If “the cause of a wrong taste” is “a defect of judgment,” he allowed, then the mis-evaluation of works of art

    may arise from a natural weakness of understanding, . . . or, which is much more commonly the case, it may arise from . . . ignorance, inattention, prejudice, rashness, levity, obstinacy, in short, all those passions, and all those vices, which pervert the judgment in other matters, prejudice it no less in this its more refined and elegant province. 

    But if “bad or wrong taste” can be taken as a symptom of vice or perversion, the door has been opened wide to abuse. Burke recognizes this in an especially pregnant passage that enlarges upon an earlier point — that discrimination diminishes rather than enhances pleasure because it lessens the number of objects from which we can naively derive satisfaction.

    The judgment is for the greater part employed in throwing stumbling-blocks in the way of the imagination, in dissipating the scenes of its enchantment, and in tying us down to the disagreeable yoke of our reason: for almost the only pleasure that men have in judging better than others, consists in a sort of conscious pride and superiority, which arises from thinking rightly; but then, this is an indirect pleasure, a pleasure which does not immediately result from the object which is under contemplation.

    What we are witnessing here is the birth, or at least the christening, of aesthetic snobbery, which is always and only social snobbery in disguise. An indirect or even perverse pleasure it may be, but snobbery is a powerful pleasure; and Burke’s explanation of snobbery, as the sole compensation we receive for the loss of immediacy and naive pleasure that our critical judgment exacts from us, is the best account I have ever encountered of its value to snobs (a category that at times — let’s admit it — tempts us all). It amounts also to an account and critique of aspirational “good taste,” which arises alongside and in response to aesthetic snobbery, the most quintessentially bourgeois of all snobberies, and might even be deemed tantamount to it.

    It is not taste (pace Stravinsky) but “good taste” that conflates aesthetic and moral quality, and sits in judgment over them conjointly. Since it is the bastard child of snobbery, “good taste” requires the ever more exacting exercise of negative judgment. Forgetting, or affecting to reject, the Kantian proviso that taste is a property not of contemplated objects but of contemplating subjects, “good taste” constructs spurious existential categories such as “kitsch,” a term that arose in the course of the emergence we are now tracing (and Google can tell you how often it is attached to Liszt). As snobbery’s surrogate, aspirational “good taste” easily turns competitive. Critics who earn followings do so (as Louis Menand smirked about Pauline Kael) because they have recognized, and pander to, “the truth” that “people, at least educated people, like not to like movies, especially movies other people like, even more than they like to like them.” 

    The conjoint promise of safety and self-congratulation gives one an incentive to expand the range of objects one can consign to the outer darkness, so as to maximize one’s “conscious pride and superiority,” to recall Burke’s more elegant expression. Hence such impressive works of pseudo-scholarship as Gillo Dorfles’ extravagant compendium Il Kitsch: Antologia del cattivo gusto, published in Milan in 1968 and translated into English as Kitsch: The World of Bad Taste, which contains, alongside what anyone might expect (Nazi and Soviet poster art, eroticized religious images, the Mona Lisa imprinted on bath towels and eyeglass cases), several items that can only have been calculated to shock the reader by their inclusion, such as New York’s Cloisters, the museum of medieval art endowed by John D. Rockefeller in 1938. A caption explains that “The structure is entirely modern but incorporates authentic architectural features from the cloisters of medieval monasteries. Authentic objects and works of art are displayed in the halls, which are always full of tourists.” We are left in little doubt as to what — or rather, who — the aspersion is meant to degrade. 

    The inevitable race to the limit in the fastidious exercise of captious “good taste” was well captured by Joseph Wood Krutch in 1956 when reviewing a book by Mary McCarthy, an especially exigent arbiter. “Her method is one of the safest,” he remarked.

    If you deny permanent significance to every new book or play time will prove you right in much more than nine cases out of ten. If you damn what others praise there is always the possibility that your intelligence and taste are superior. But if you permit yourself to praise something then some other superior person can always take you down by saying “So that is the sort of thing you like.” 

    That fear afflicts performers as well as critics. There is a coruscating passage on taste in the treatise Du chant, from 1920, by Reynaldo Hahn, the singer, composer, and voice teacher who perhaps better than any other musician — and not only because he was Marcel Proust’s lover — embodies the spirit of the belle époque, a time synonymous with elegance, as elegance may be thought synonymous with taste. But the writing drips with sarcasm:

    When singing is not directed by the heart (and you know that one cannot lightly command the service of the heart), when singing is not guided by feeling, by understanding, by the direct outpourings of the heart, it is taste that assumes control, directing and presiding over everything. Then it must be everywhere at once, acting in a hundred different ways. Think of it! Every detail of the vocal offering must be submitted to the dictates of taste.

    Let me be precise. By taste, I do not mean that superior and transcendent ability to comprehend what is beautiful which leads to good esthetic judgment. In fact, we cannot ask all singers to be people of superior taste, since such a requirement would reduce still further the very limited number of possible singers. By taste, I mean a wide-ranging instinct, a sure and rapid perception of even the smallest matters, a particular sensitivity of the spirit which prompts us to reject spontaneously whatever would appear as a blemish in a given context, would alter or weaken a feeling, distort a meaning, accentuate an error, run counter to the purposes of art.

    I repeat: A particular sensitivity of the spirit is necessary in this sort of taste, as well as emotion and a certain fear of ridicule. It is no doubt for this reason that women display a better sense of taste in singing than men. 

    A certain fear of ridicule. It is obvious that Hahn is speaking not of existential but aspirational taste; taste that hedges against the depredations of snobs, who censor idiosyncrasy along with sincerity and force artists (and especially, in Hahn’s bigoted view, those of the weaker sex) to retreat into what Russell Lynes, the social historian of art, in a famous article in 1949 that proclaimed a new social order based not on “wealth or family” but on “high thinking,” derided as the “entirely inoffensive and essentially characterless” precincts of “good taste.”

    Of course, Lynes was writing in the age of Rosen and Brendel, and describes a late stage in the socio-aesthetic process whose beginnings Edmund Burke had charted long before the stultifying category of “good taste” had gained momentum, although he may be said to have predicted it. At the end of his discussion of (universal) taste, Burke notes optimistically that “the taste . . . is improved exactly as we improve our judgment, by extending our knowledge, by a steady attention to our object, and by frequent exercise.” To boil it down to a formula, he proposes that taste = judgment = knowledge, and he who knows most judges best. Appeals to the ignorant, therefore, are subversive of taste, because they thwart the advancement of knowledge. Those who seek, or gain, the applause of the ignorant are threats to the maturation of taste.

    III

    The stage has been set for our hero. But before he enters, there remains one last matter to broach, namely the ambiguous character of virtuosity and the ambivalent attitude toward it in Liszt’s day on the part, not of audiences, surely, but of the newly professionalized class of tastemakers — what Liszt, in exasperation, called “the aristocracy of mediocrity.” 

    Gillen D’Arcy Wood, a social historian of literature and music and their interrelations under romanticism, identifies Liszt’s wry phrase with “an increasingly influential middle-class cultural regime that wished to be purified of virtuosic display,” an aspiration he calls, straightforwardly enough, virtuosophobia. Virtuosophobia is obviously akin to what the literary historian Jonas Barish called “the antitheatrical prejudice,” in a book that traced — from ancient Greece to the middle of the twentieth century — the curious inconsistency whereby “most epithets derived from the arts” — words such as poetic or epic or lyric or musical or graphic or sculptural — “are laudatory when applied to the other arts, or to life,” with the conspicuous exception of terms derived from the theater, such as theatrical or operatic or melodramatic or stagey, which, by contrast, “tend to be hostile or belittling.” One reason for the antitheatrical prejudice is that theatrical acting, being by definition an act of dissembling, transgresses against ideals of sincerity; and virtuosos are often similarly accused, the terrific effect of their performances being unrelated, or not necessarily related, to genuine feeling.

    This was an observation constantly made about Liszt during his lifetime, and not always invidiously. His American pupil Amy Fay, who attended his master classes in Weimar in 1873, wrote in her memoir, Music Study in Germany, that

    when Liszt plays anything pathetic, it sounds as if he had been through everything, and opens all one’s wounds afresh. . . . [He] knows the influence he has on people, for he always fixes his eyes on some one of us when he plays, and I believe he tries to wring our hearts. . . . But I doubt if he feels any particular emotion himself when he is piercing you through with his rendering. He is simply hearing every tone, knowing exactly what effect he wishes to produce and how to do it. 

    To Liszt’s manner, Fay contrasted that of Joseph Joachim (once Liszt’s protégé, later his most zealous detractor) who exemplified the submissive and antitheatrical attitude later associated with Werktreue. Where the one was “a complete actor who intends to carry away the public,” the other was (that is, acted) “totally oblivious of it.” Where the one “subdues the people to him by the very way he walks on the stage,” the other is “‘the quiet gentleman artist’ who advances in the most unpretentious way, but as he adjusts his violin he looks his audience over with the calm air of a musical monarch, as much as to say, ‘I repose wholly on my art, and I’ve no need of any “ways or manners.”’” 

    Which of course is also a means of taking possession of one’s public. What Fay described were two species of charismatic — that is, histrionic — “ways or manners,” as she surely knew. (And Liszt was well aware of the alternative species. Describing the charismatic playing of John Field, he showed the same subtle irony as Fay describing Joachim: “It would be impossible to imagine a more unabashed indifference to the public. . . . He enchanted the public without knowing it or wishing it. . . . His calm was all but sleepy, and could be neither disturbed nor affected by thoughts of the impression his playing made on his hearers [since] art was for him in itself sufficient reward.”) The affectation of quiet absorption was the truly romantic (“disinterested”) attitude, as was the antitheatrical prejudice itself and the virtuosophobia that was its musical outlet; for it was romanticism that made a fetish of sincerity. As early as 1855, in a famous letter to Clara Schumann explaining his defection from Liszt’s orchestra in Weimar, Joachim broadened the antitheatrical, virtuosophobic rhetoric to encompass Liszt’s compositions as well, focusing on the sacred works as especially flagrant breaches of propriety. By the end of the passage, it is impossible to separate the bad taste of Liszt the composer from that of Liszt the performer as the butt of Joachim’s righteous indignation.

    For a long time now I have not seen such bitter deception as in Liszt’s compositions; I must admit that the vulgar misuse of sacred forms, that a disgusting coquetterie with the loftiest feelings in the service of effect was never intended — the mood of despair, the emotion of sorrow, with which the truly devout man is raised up to God, Liszt mixes with saccharine sentimentality and the look of a martyr at the conductor’s podium, so that one hears the falseness of every note and sees the falseness of every action. 

    Most explicit of all was Nietzsche. In Der Fall Wagner he asked, rhetorically, where Wagner belonged, and his answer went beyond Wagner to indict Wagner’s father-in-law as well. Wagner belongs “not in the history of music. What does he signify nevertheless in that history? The emergence of the actor in music: a capital event that invites thought, perhaps also fear. In a formula: ‘Wagner and Liszt.’” But at least Wagner did his acting in the theater. About Liszt, who turned instrumental performance into a branch of theater, one can only think worse. Nietzsche’s peroration, in three italicized “demands,” points the final finger at the musician, not the actor, for music is brought down as the theatrical is elevated. “What are the three demands for which my wrath, my concern, my love of art has this time opened my mouth?” thunders Nietzsche. They are these:

    That the theater should not lord it over the arts.

    That the actor should not seduce those who are authentic.

    That music should not become an art of lying. 

    Nor can virtuosos ever be “disinterested,” to invoke Kant’s principal aesthetic yardstick. Like other theatrical performers, they are never without a Zweck, an ulterior purpose, namely to impress us into thunderous vanity-stroking applause and exorbitant pocket-lining expenditures; and our interest in their overcoming obstacles is a human, rather than an aesthetic, interest — the sort of interest that attends to the performances of athletes and prestidigitators as well as musicians. D’Arcy Wood gave this a social twist when writing of the “antagonism,” so evident in Georgian England, and especially when Liszt tried to storm its aesthetic barricades with so much less success than he had enjoyed on the continent, “between literary (and academic) culture and the sociable practices of music, between Romantic middle-class ‘virtues’ and aristocratic virtuosity.” 

    We are back again to the Marquis de Venosta, and the distinction between “family” and “good family.” The former is an unearned status; the latter, a reputation earned through the exercise of virtue — which demanded vigilance against virtue’s false cognate. Though etymologically descended from virtue, virtuosity, in the middle-class view, was sheer vice, inextricably associated with all the other vices, and that remains our incorrigibly Romantic, middle-class view today. The author of a serious scholarly book on Paganini, published in 2012, wanted to know, for example, whether “the greed, lust, pride, and vainglory that [were] manifested in multiple aspects of the virtuoso’s life [can] be viewed any longer as separate from the aesthetic of virtuoso performance.” 

    Hence one of the paradoxes of nineteenth-century musical reception that continues to haunt us in the twenty-first century is the simultaneous denigration of virtuosity and fetishizing of difficulty. To unpack it we might begin by returning to Edmund Burke and his famous treatise. The section on the sublime contains a short paragraph, seemingly an afterthought, on difficulty as a “source of greatness”:

    When any work seems to have required immense force and labor to effect it, the idea is grand. Stonehenge, neither for disposition nor ornament, has anything admirable; but those huge rude masses of stone, set on end, and piled each on other, turn the mind on the immense force necessary for such a work. Nay, the rudeness of the work increases this cause of grandeur, as it excludes the idea of art and contrivance; for dexterity produces another sort of effect, which is different enough from this. 

    Thus, difficulty overcome too dexterously is not sublime; or rather, the dexterous overcoming of difficulty destroys the sublime effect and vitiates the awe that it inspires. Substitute “virtuosity” for Burke’s “dexterity” and the reason will become apparent why the English critics who wrote about Liszt in the 1840s so belittled or even deplored his “transcendent” virtuosity, associating it with triviality rather than with grandeur. The very act of transcendence was virtuosity’s transgression — a transgression against the virtue of difficulty.

    The works of Beethoven were, in Burke’s sense, the Stonehenge of music. Even before his sketchbooks exposed “the immense force necessary for such a work” to the inquisitive eye, his labor was a proverbial struggle per aspera ad astra. And performing his music was likewise a proverbial struggle that it became a sacrilege to appear to transcend. The approved attitude toward Beethoven — the tasteful attitude — was Stravinsky’s grand thème de la soumission, epitomized in Artur Schnabel’s famous remark that “I am attracted only to music which I consider to be better than it can be performed. Therefore I feel (rightly or wrongly) that unless a piece of music presents a problem to me, a never-ending problem, it doesn’t interest me too much.” And if Schnabel’s piety represents the epitome, way beyond epitome was the British conductor Colin Davis, who said of Beethoven’s Missa solemnis, “It’s such a great work, it should never be performed.” 

    Beethoven’s unique social situation was bound up equally with the new attitude toward works and difficulty — or rather, the new valuation placed on old attitudes toward them — and with his removal from society as a result of deafness. It put Beethoven at the opposite social extreme from the virtuoso, who (like Beethoven himself in the earlier stages of his career) was sociability personified. Beethoven’s vaunted difficulty was abetted by his aristocratic patrons, while the virtuoso was seen as playing to the common crowd. The newly reified concept of artwork that Beethoven’s talent and fate so abetted is our concept still. It is what made possible the notion of “classical music,” which is to say, music conceptualized as a permanent and immutable object, at the same level of reification as the products of other artistic media like painting or sculpture: a concrete entity deserving the designation “work.” From something that elapses in time, music was thus reconceptualized as something that exists ontologically in an “imaginary museum,” as Lydia Goehr put it in the title of her celebrated book — a kind of notional space. 

    So let us imagine a reified musical work that way — as an article somehow located in a curated space. The humility so demonstratively voiced by Schnabel or Davis (whether or not we accept it at face value) is located below it. It looks up, like anything aspirational. But the attitude of the virtuoso — who transcends all difficulties, makes light of them, and makes everything seem easy (as the commonplace accolade would have it) — is located, like anything transcendent, above the work. It looks down. And therefore it is an arrogant crossing of an ethical line, a hubristic affront to aspiration; a fortiori, it is an affront to “good taste.” A London critic’s review of Liszt’s rendition of the Emperor Concerto, which casts him in the role of a bad curator, is a perfect summation of these strictures: “The many liberties he took with the text were evidence of no reverential feeling for the composer. The entire concerto seemed rather a glorification of self, than the heart-feeling of the loving disciple.” 

    And yet — as always — one man’s transgression is another man’s transcendence. There is always a more “spiritual” way of viewing virtuosity: as a literal triumph over the physical. Heine wrote that where others “shine by the dexterity with which they manipulate the stringed wood, . . . with Liszt one no longer thinks of difficulty overcome — the instrument disappears and the music reveals itself.” But then he immediately turns around and contradicts himself in his fascination, all but universally shared by those who experienced Liszt in the flesh, with the pianist’s physical presence, obsessing over his way of “brush[ing] his hair back over his brow several times,” turning his listeners into viewers, or rather voyeurs, who feel “at once anxious and blessed, but still more anxious.” The phobia, repressed, returns.

    The strongest avowal of virtuosophobia, the censorious distinction between virtuosity and difficulty, comes from Liszt himself, in the second of his so-called Baccalaureate Letters, published in the Parisian Gazette musicale on February 12, 1837, with a dedication to George Sand. The relevant passage runs as follows:

    In concert halls as well as in private drawing rooms . . . , I often played works of Beethoven, Weber, and Hummel, and I am ashamed to say that for the sake of winning the applause of a public which was slow in appreciating the sublime and beautiful, I did not scruple to change the pace and the ideas of the compositions; nay, I went so far in my frivolity as to interpolate runs and cadenzas which, to be sure, brought me the applause of the musically uneducated, but led me into paths which I fortunately soon abandoned. I cannot tell you how deeply I regret having thus made concessions to bad taste, which violated the spirit as well as the letter of the music. Since that time absolute reverence for the masterworks of our great men of genius has completely replaced that craving for originality and personal success which I had in the days too near my childhood. 

    Thus, with a presumed literary assist from Marie d’Agoult, Liszt accuses himself of mauvais goût, a locution that was still a novel one at the time of writing. But confessions can also be a form of boasting, and self-abasement a form of self-promotion. I think it pretty clear that Liszt, at that moment engaged in a very public rivalry with Sigismund Thalberg, was using the rhetoric of penitence and contrition in this way, as part of a campaign to show that he, and not his challengers, had become (to quote a famous passage from a letter he had written several years before) “an artist such as is required today.” That is to say, an artist who was abreast of the latest intellectual fashions, who was prepared to use the press to establish good public relations, and who was therefore able to maintain preeminence in the new era of publicity. Unlike his rivals, he was displaying himself as an artist who possessed both taste and “good taste,” who cultivated the aspirational posture, who looked up, not down, at “the masterworks of our great men of genius.”

    There is no reason to doubt the sincerity of Liszt’s aspirations. But as Kenneth Hamilton has observed, “numerous reviews of his concert tours of the 1840s indicate that [as of 1837], he cultivated an attitude akin to St. Augustine’s famous exhortation, ‘Oh Lord, grant me chastity — but not yet!’” He was still ready and able, in the words of Carl Reinecke, to “dazzle the ignorant throng.” Still, the social animus in that charge should caution us against too readily slapping a “populist” label on Liszt. Dana Gooley reminds us that some of Liszt’s concert practices suggest the opposite. He imposed higher ticket prices than did any of his contemporaries, which Gooley interprets as an attempt “to siphon out the middle bourgeoisie” and ensure that his recitals remained high-prestige events, not popular entertainments. The Baccalaureate Letters themselves show him striving to found his reputation on “his nearness to the intellectual and political elites of Paris,” the “cultural trendsetters.” 

    One of the most revealing portraits of Liszt the composer-performer in all the glorious inconsistency of his behavior, accurately reflecting the ambivalences of mores in transition, is the recollection of Vladimir Vasilievich Stasov, first published in 1889, of the great pianist’s debut in St. Petersburg forty-seven years earlier, in 1842:

    Everything about this concert was unusual. First of all, Liszt appeared alone on the stage throughout the entire concert: there were no other performers — no orchestra, singers or any other instrumental soloists whatsoever. This was something unheard of, utterly novel, even somewhat brazen. What conceit! What vanity! As if to say, “all you need is me. Listen only to me — you don’t need anyone else.” Then, this idea of having a small stage erected in the very center of the hall like an islet in the middle of an ocean, a throne high above the heads of the crowd, from which to pour forth his mighty torrents of sound. And then, what music he chose for his programs: not just piano pieces, his own, his true métier — no, this could not satisfy his boundless conceit — he had to be both an orchestra and human voices. He took Beethoven’s “Adelaïde,” Schubert’s songs — and dared to replace male and female voices, to play them on the piano alone! He took large orchestral works, overtures, symphonies — and played them too, all alone, in place of a whole orchestra, without any assistance, without the sound of a single violin, French horn, kettledrum! And in such an immense hall! What a strange fellow! 

    In a somewhat earlier memoir, “The Imperial School of Jurisprudence Some Forty Years Ago,” Stasov recalled that, after the first item on the program, Rossini’s William Tell overture, Liszt “moved swiftly to a second piano facing in the opposite direction. Throughout the concert he used these pianos alternately for each piece, facing first one, then the other half of the hall.” Stasov was seated near the composer Glinka, and overheard his conversation before the concert. When one noble lady, Mme. Palibina, asked Glinka whether he had already heard Liszt, Glinka replied that he had heard him the previous evening, at an aristocratic salon.

    “Well, then, what did you think of him?” inquired Glinka’s importunate friend. To my astonishment and indignation, Glinka replied, without the slightest hesitation, that sometimes Liszt played magnificently, like no one else in the world, but other times intolerably, in a highly affected manner, dragging tempi and adding to the works of others, even to those of Chopin, Beethoven, Weber, and Bach, a lot of embellishments of his own that were often tasteless, worthless, and meaningless. I was absolutely scandalized! What! How dare some “mediocre” Russian musician, who had not yet done anything in particular himself [by that time, Glinka had written both his operas!], talk like this about Liszt, the great genius over whom all Europe had gone mad! I was incensed. It seemed that Mme. Palibina did not fully share Glinka’s opinion either, for she remarked, laughingly, “Allons donc, allons donc, tout cela ce n’est que rivalité de métier!” [Come now, come now, all this is nothing but professional rivalry!] Glinka chuckled and, shrugging his shoulders, replied, “Perhaps so.” 

    So if Liszt knew enough to pay tribute, or at least lip service, to the new Romantic ideals, his public acclaim and his consummate, irrepressible virtuosity continued to threaten them. Even after his true capitulation to good taste, when he withdrew from the concert stage to devote himself to what was considered at the time a particularly high-minded species of modern composition, he was regarded as threatening by musicians with a different notion of high-mindedness. Liszt came to symbolize the danger of the mass audience and those who catered to it — a danger that his composing may have posed even more drastically, in the eyes of some, than his piano playing.

    In the later nineteenth century the chief threat to musical idealists was no longer posed by virtuosos, but by composers who subordinated musical values to mixed media: opera composers, to be sure, who as always commanded the largest and least discriminating audiences, but also — and worse — those who tried to turn their instrumental music into wordless operas, as Liszt did in his symphonic poems and programmatic symphonies. Whether embodied in the corruption of texts or in the corruption of media, the corruption that the fastidious really feared was the corruption of taste and mores, which looked to guardians of good taste like corruption of the flesh. In the early correspondence of Brahms and Joseph Joachim, the adjective Lisztisch was already a code word. In one letter, Joachim writes to Brahms of a certain passage that Brahms had written: “Es bleibt mir häßlich — ja verzeih’s — sogar Lisztisch!” (“I think it’s awful, even — forgive me — Lisztish!”). Or consider Brahms writing to Clara Schumann in 1869:

    Yesterday Otten [G. D. Otten, conductor of the Hamburg Philharmonic] was the first to introduce works by Liszt into a decent concert: “Loreley,” a song, and “Leonore” by Bürger, with melodramatic accompaniment. I was perfectly furious. I expect that he will bring out yet another symphonic poem before the winter is over. The disease spreads more and more and in any event extends the ass’ ears of the public and young composers alike. 

    This diagnosis of social pathology became quite explicit among the Brahmins, of whom Theodor Billroth, the famous surgeon, was the exemplary figure. Writing to the composer after a performance of Brahms’ First Symphony, Billroth gave voice to a new aristocracy of Bildung, of education, taste, and culture — or was it just Liszt’s old aristocracy of mediocrity?

    I wished I could hear it all by myself, in the dark, and began to understand [the Bavarian] King Ludwig’s private concerts. All the silly, everyday people who surround you in the concert hall and of whom in the best case maybe fifty have enough intellect and artistic feeling to grasp the essence of such a work at the first hearing — not to speak of understanding; all that upsets me in advance. 

    Billroth stands in a resistant line that gathered strength as it moved into the twentieth century: the modernist line that helped create the storied Great Divide between art and mass culture. It passes through Schoenberg — for whom “if it is art, it is not for all, and if it is for all, it is not art” — on its way to the likes of Adorno, Dwight Macdonald, and others who insisted that art identify itself in the twentieth century by creating elite occasions, which is to say occasions for exclusion. Liszt, with his generous and inclusive impulse, created many problems for that project.

    As the line of social resistance passed through the twentieth century it got ever shriller, culminating in the pronouncements we have sampled by Rosen and Brendel, allies in snobbery despite their disagreement over Liszt. Charles Rosen never claimed to be a historian (as anyone knows who has read the introduction to The Classical Style), but it takes a singular disregard of history to assert, as he did, that “‘good taste’ is a barrier to an understanding and appreciation of the nineteenth century,” when in fact good taste was the invention of the nineteenth century. It was the invention of nineteenth-century bourgeois who aspired to the condition of royalty — Billroths who wanted to be Ludwigs, surgeons who wanted to be kings.

     

    In its present state of devolution, the line of good taste has descended to the likes of Jack Sullivan, whose Wikipedia entry identifies him as “an American literary scholar, professor, essayist, author, editor, musicologist, concert annotator, and short story writer,” and who was quoted in the New York Times, on the very day I was drafting the paragraph you are now reading, as complaining in the Carnegie Hall program book about the standard performing version of Chaikovsky’s Variations on a Rococo Theme for cello and orchestra, as revised after its premiere in 1877 by the original performer and dedicatee, Wilhelm Fitzenhagen, at the composer’s request. Under the impression that the original version was to be performed by Yo-Yo Ma, with Valeriy Gergiyev and the Mariyinsky Orchestra, and paraphrasing a letter to Chaikovsky from his publisher, Sullivan grumbled that Fitzenhagen had taken Chaikovsky’s “cannily constructed Neo-Classical piece and ‘celloed it up’ for his own grandstanding purposes.” Thrice-familiar strictures, these; as is the tone of social derision that the phrase “celloed up” (compare “gussied up” or “lawyered up”) is calculated to convey.

    In fact, like every self-respecting virtuoso, Yo-Yo Ma had played the Fitzenhagen version, which includes all the passages (like the famous octaves at the end) that have made the Variations a concert perennial instead of the rarity it remained during Chaikovsky’s lifetime. “Well, who better than Mr. Ma to play something celloed up,” wrote the sharp reviewer for the Times, James Oestreich, exposing the obtuseness of the class warriors with a well-aimed shaft of contrarian bad taste. As I chuckled, I thought of Baudelaire and his immortal sally, “Ce qu’il y a d’enivrant dans le mauvais goût, c’est le plaisir aristocratique de déplaire,” “the heady thing about bad taste is the aristocratic pleasure of giving offense.” And I recalled the bravura defiance of William Gass, novelist and critic and curmudgeon supreme, in his immortal essay “What Freedom of Expression Means, Especially in Times Like These”:

    It is a tough life, living free, but it is a life that lets life be. It is choice and the cost of choosing: to live where I am able, to dress as I please, to pick my spouse and collect my own companions, to take pride and pleasure in my opinions and pursuits, to wear my rue with a difference, to enjoy my own bad taste and the smoke of my cooking fires, to tell you where to go while inspecting the ticket you have, in turn, sent me. 

    What makes this story and its attendant ruminations more than a digression is the letter in which Fitzenhagen reported to Chaikovsky about his first performance, in Wiesbaden in 1879, of the celloed-up version. “I produced a furor,” he assured the composer. “I was recalled three times.” And then he describes the reaction of one particular member of the audience: “Liszt said to me: ‘You carried me away! You played splendidly.’ And regarding your piece he observed: ‘Now there, at last, is real music!’” Mark that it was the sixty-eight-year-old Liszt who was encouraging Fitzenhagen to cello up, thirty years after his retirement from the concert stage and more than forty years since the baccalaureate letter in which he recanted “runs and cadenzas which [bring] the applause of the musically uneducated, but violated the spirit as well as the letter of the music.” Now at peace, the venerable abbé was declaring his solidarity with the applauders.

    In closing, a few words about the Second Hungarian Rhapsody. Yes, of course it is a central work for Liszt; without it, he would not be what he is in our imaginations. But what do those who object to it find objectionable? Why does Brendel exclude it from the category of “works that offer both originality and finish, generosity and control, dignity and fire”? When I hear it well played, I am amazed at the originality with which Liszt imitated the cimbalom, and I marvel at the beautifully realized (and “finished”) form and pacing of the piece, and I fail to see where it is deficient either in control or in dignity. The derision with which it is treated, even by those (like Brendel) who have put in the time and effort to master it, seems a particularly crisp instance of the antitheatrical prejudice as applied to a composition that has become the test par excellence of a pianist’s ability to enact the role of virtuoso, an enactment that achieves its zenith with those special performers, such as Rachmaninoff or Horowitz or Marc André Hamelin, who can top the piece off with their own nonchalant cadenzas, the nonchalance signifying the truly Lisztisch transgressive transcendence that drives aspirational musicians mad.

    And there is more: like a gas (and of course it is a gas), the Second Hungarian Rhapsody has escaped its container and leached out into the popular culture — which is only meet, after all, since that is where its inspiration had come from. (That, of course, is what objectors object to.) Many other works by nineteenth-century masters had a similar source in restaurant and recruitment music; one need think only of all those Brahms finales — to concertos for piano, for violin, and for cello plus violin, or to his piano quartets. Like Liszt’s Rhapsody, they adapted the sounds of environmental music to the special precinct of the concert hall. But unlike Liszt’s Rhapsody, they were never reabsorbed into the environment. Liszt’s Rhapsody inhabits animated cartoons: Mickey Mouse, Bugs Bunny, and Tom and Jerry have all played it, not to mention (arranged by King Ross) a whole animal orchestra, courtesy of Max Fleischer. It was heard, and used, in dance halls; it was in the repertoire of every swing band. It even haunts sports arenas: I am informed by Wikipedia that in the 1970s the St. Louis Cardinals’ organist Ernie Hays played Hungarian Rhapsody No. 2 to signal that pitcher Al Hrabosky (nicknamed “The Mad Hungarian”) was warming up before appearing as a relief pitcher. It is everywhere. There is even an LP recording of the Rhapsody by a Communist-era Hungarian fakelore ensemble, purporting to return it to an “authentic” environment from which it had never come.

    Is this something to condemn, something to resist? Or is this interpenetration of the artistic and the vulgar worlds an ineluctable mark, perhaps the defining mark, of Liszt’s greatness? To attempt, like Brendel, to purge Liszt of these impolite associations is indeed to misunderstand his place in our world; but Rosen, too, beholds the vulgar Liszt with distaste. Far better, in the words of Kenneth Hamilton, is to “embrace our own inner Second Hungarian Rhapsody.” We’ve all got one, and Liszt knew it. To accept his invitation to flout snobbish “good taste” might help us reassert, or recover, taste — which is to say, Mozart’s taste as defined by Haydn: namely, a reliable sense of what is fitting, and when.

    The Earth, stuffed to the gills with burning coals

    *   *   *

    The Earth, stuffed to the gills with burning coals

    and consuming itself from its birth

    bristling with folds that sharpen into peaks, sometimes of short hairs

    sometimes forming dark dense beards

    and hollowed out with giant cavities filled with restless water

    from which emerged the grand debris of its genesis

    and full of shallow holes where other waters drowse

     

    Despite the gravity, everything pushes upward

    springs toward its Creator

     

    The sun, a rival, pulls everything toward itself, pulls dangerously

    To avoid catching fire

    the grass clings to the soil, the tree spreads its foliage

    In its shadow the man stretches out, then one day

    lies down forever some few feet underground

    Over our heads masses are moving, whitish

    *   *   *

    Over our heads masses are moving, whitish

    cottony, ghosts on the weather maps

    Windings, swirls, languid scrolls

    under the sting of the wind, wandering herds

     

    Floating bodies. Appearing. Disappearing. In our own image.

     

    We, more unstable than plants fixed to the ground

    or the fish sheltered in water

    We, unable to go back to the ancestral birds

    and flee into the stratosphere

     

    Dominant, dominator winds, chubby sons of Aeolus

    yesterday pushing or smashing Ulysses’ fleet as they wish

    We, more destitute as we progress, our soul

    eaten away by matter, at the mercy of an incendiary night

     

    The Unjust Fate of Man

    On the sandy path that goes by my door

    and leads to the station of dreams,

    where I had just walked, a muffled cry 

    reached my ear.

    I stopped walking and saw a clump 

    of dry, drowsy grass.

    The cry came from the ground.  A root deplored

    being without news from the stem up there with its boughs,

    its flowers and its fruit, maybe even 

    its trunk in full maturity.

    “Why was I, newly born, 

    the ancestor, thrown into a dungeon,

    my maternal task complete,

    like a convict, or worse, a useless being,

    without my having seen, known or recognized the world—

    and what mouth pronounced the injustice of my fate.”

    Before Nightfall 

    Leaning in summer tuxes across the balcony

     

    or reclining like nudes with their hair thrown back,

     

    some trees, after high conversation, complained

     

    about having to go back to the deaf earth again.

     

    The leaves pulled on their arms to keep them

     

    from going and to get even closer to whom?

     

    To what? Which essential truth?

     

    As if the human shadows inside the rooms

     

    would give them some clarification,

     

    some formula against the faceless barking . . .

     

    And what did they sense in me but a trembling?

     

    Night stood in the background.

     

    A flame flew into the grass’ eyes.

     

    I did not move, no longer knowing who I was

     

    or if dawn would also come for me.

    Translated from the French by Henri Cole

     

    Mother death

    *   *   *

    Mother death

    you came to him

    so mildly

    so cruelly

    alternating authority with seduction

     

    He out-of-breath following you or fleeing you

     

    In the end

    you wore the features of Morphine

    and clasped her to you

    cruelly

    mildly

     

    I gave his body to flames

    married his ashes to the sea

    and I alone burn

    in the fire of absence   

     

    The Cult of Carl Schmitt

             I

             As a political thinker, the German philosopher Carl Schmitt was enamored of symbols and myths. His biographer has shown that during the 1930s Schmitt was convinced that providing National Socialism with a rational justification was self-contradictory and self-defeating. The alternative that was conceived by Schmitt, a conservative who was an eminent member of the Nazi Party, was to establish the Third Reich’s legitimacy by means of symbolism and imagery culled from the realms of religion and myth. Schmitt’s attraction to symbols and myths stemmed from his skepticism about the value of “concepts,” which he viewed only instrumentally, as Kampfbegriffe or weapons of struggle. As Schmitt explained, about reading Hobbes’ Leviathan, “we learn how concepts can become weapons.” “Every political concept,” he claimed, “is a polemical concept,” a statement that reflects the essential bellicosity of his thought.

    When it came to fathoming the mysteries of human existence, Schmitt insisted that the cognitive value of symbols and myths was far superior to the meager results of conceptual knowledge. This deep mistrust of reason was related to his veneration of “political theology,” which Schmitt introduced into the mainstream of modern political thought. Schmitt’s devaluation of secular knowledge was exemplified by his well-known dictum that “all modern political concepts are secularized theological concepts,” an assertion that reflected his disdain for the legacy of Enlightenment rationalism. That disdain is what has given Schmitt’s thought new life in our own bleak and inflamed times.

             Schmitt was born in 1888 and died in 1985. He was a constitutional theorist who wrote brilliant polemics against parliamentary democracy and on behalf of dictatorial rule. He played a prominent role in providing a pseudo-legal justification for the Nazi seizure of power and was a virulent anti-Semite. During the early 1920s, the myth that captured Schmitt’s imagination was the “myth of the nation,” a trope that Mussolini had refashioned into a core precept of Italian fascism. Schmitt explored this theme in 1923 in the concluding pages of The Crisis of Parliamentary Democracy. His unabashedly enthusiastic treatment of unreason offers an important clue with respect to his future political allegiances.

    Echoing the phraseology of the proto-fascist and anti-Dreyfusard Maurice Barrès, who died in that year, Schmitt extolled Mussolini’s March on Rome as a triumph of “national energy.” He thereby acknowledged fascism’s capacity to infuse modern politics with a vitality that was absent from the stolid proceduralism of political liberalism. Schmitt was present at the University of Munich in 1917 when Max Weber delivered his celebrated lecture on “Science as a Vocation.” Schmitt agreed wholeheartedly with Weber’s characterization of modernity as an “iron cage”: a world in which the corrosive powers of “rationalization” and “disenchantment” had precipitated a crisis of “meaninglessness.”

    Schmitt’s antidote to this malaise, and to the intellectual maturity of liberalism, was his “decisionism” — his reconceptualization of sovereignty as the right to decide on the “state of exception.” The ruler was the one, the only one, who had the power to decree a state of exception, and to enforce it. Schmitt attributed a cultural and even epistemological superiority to the “exception” as opposed to the “norm” and the “rule.” It was the antithesis of Weberian disenchantment. As Schmitt exulted, “In the exception, the power of real life breaks through the crust of a mechanism that has become torpid by repetition.” In keeping with the discourse of political theology, Schmitt stressed the parallels between the “state of exception” in jurisprudence and the “miracle” in theology.

    Schmitt exalted the fascist coup as a historical and philosophical turning point in the struggle to surmount the straitjacket of rule-guided bourgeois “normativism,” a legacy of the Enlightenment that Kulturkritiker such as Nietzsche and Spengler deemed responsible for modernity’s precipitous descent into “nihilism.” For Schmitt, the March on Rome was the state of exception come to life. The scholarly nature of his treatise notwithstanding, Schmitt was unable to conceal his prodigious pro-fascist fervor. “Until now,” he wrote, “the democracy of mankind and parliamentarism has only once been contemptuously pushed aside through the conscious appeal to myth, and that was an example of the irrational power of the national myth. In his famous speech of October 1922 in Naples before the March on Rome, Mussolini said, ‘We have created a myth, this myth is a belief, a noble enthusiasm, it does not need to be reality, it is a striving and a hope, belief and courage. Our myth is the nation, the great nation which we want to make into a concrete reality for ourselves’.”

    Schmitt celebrated Mussolini’s mobilization of the “national myth” as “the most powerful symptom of the decline of the… rationalism of parliamentary thought … [and the] ideology of Anglo-Saxon liberalism.” (We would now call Schmitt’s position “post-liberalism.”) Italian fascism was the harbinger of a brave new world of conservative revolutionary political ascendancy: a glorious form of Herrschaft predicated on the values of “order, discipline, and hierarchy.” Mussolini’s putsch represented much more than a simple “regime change.” It signified a qualitative setback for the “ideas of 1789” and a resounding triumph of the counterrevolutionary ethos, as represented by the “Catholic philosophers of state” — Joseph de Maistre (1753-1821), Louis de Bonald (1754-1840), and the relatively unknown Spaniard Juan Donoso Cortés (1809-1853) — whom Schmitt revered.

    Following the Great War, the political challenge that Schmitt confronted was how to “actualize” the tenets of counterrevolutionary thought in a godless secular age whose assault on the twin pillars of traditional political authority, throne and altar, had eliminated absolute monarchy as a viable political option. Schmitt’s French doppelgänger, Charles Maurras, the leader of the Action Française, grappled with this dilemma as well. Schmitt was an avid reader of Action Française, which Maurras edited, and he regarded Maurras as France’s most interesting thinker. Maurras, despite his counterrevolutionary revulsion against the legacy of 1789, remained anachronistically wedded to monarchism. Schmitt, by contrast, opted for a Flucht nach vorne, what we would call a forward defense, which is to say, he went on the attack. At the dawn of what he believed would be a new post-liberal era, Schmitt made a definitive break with all forms of traditionalism through his particular doctrine of dictatorship.

    Schmitt found ample ideological support for his authoritarian credo in the counterrevolutionary doctrines of Maistre and Donoso Cortés, both of whom occupied a privileged position in Schmitt’s pantheon of esteemed intellectual precursors. In 1821, in Les Soirées de St. Petersbourg, Maistre — like Schmitt a proponent of “political theology” — exalted the figure of the Executioner as God’s emissary on earth and an agent of Divine justice. Maistre apotheosized the Executioner as a puissance créatrice, a creative force, and an être extraordinaire, an extraordinary being. Maistre maintained that, in view of humanity’s innate propensity for evil, the Executioner was the ultimate guarantor of secular order. As such, he alone separated human society from a headlong descent into anarchy and chaos.

    Yet it was Donoso Cortés’ unmatched political clairvoyance that expanded Schmitt’s horizons, thereby allowing Schmitt to transcend the constraints of traditional conservatism, whose gaze — as the case of Charles Maurras demonstrated — was obsessively and counter-productively fixated on the past. In Political Theology, Schmitt praised Donoso Cortés as a paragon of “decisionistic thinking and a Catholic philosopher of state who was intensely conscious of the metaphysical kernel of all politics.” According to Schmitt, Donoso Cortés was the only counterrevolutionary thinker who drew the proper conclusion from the “Scythian fury” of the revolutions of 1848, which “godless anarchists” such as Bakunin and Proudhon had directed against the forces of the ancien régime: that absolute monarchy had indisputably become a thing of the past. It was over. As Schmitt put it in Political Theology, insofar as “there were no more kings, the epoch of royalism had reached its end.” The conclusion that Donoso Cortés drew was that because “[monarchical] legitimacy no longer existed in the traditional sense… there was only one solution: dictatorship.”

    Donoso Cortés exalted dictatorship as an inviolable and sacrosanct decision — a decision that, as Schmitt explained, is “independent of argumentative substantiation” and that “terminates any further discussion about whether there may still be some doubt.” Schmitt’s encounter with Donoso Cortés’ “decisionism” was the “primal scene” of his political philosophy. It determined what Schmitt described in Roman Catholicism and Political Form in 1925 as the “complexio oppositorum” between political theology and secular political rule. Schmitt embraced Donoso Cortés’ Christological understanding of anarchists and socialists as agents of the Antichrist, as political actors whose goal it was to “disseminate Satan.” For Schmitt, Donoso Cortés had correctly understood that the momentous battle between “absolute monarchy” and “godless anarchism” was not merely another profane political conflict. Instead it was a struggle that anticipated the Last Judgment.

    Donoso Cortés’ apocalyptic view of politics as a final struggle between Good and Evil became the cornerstone of Schmitt’s “decisionism.” Schmitt praised decision as a force that “frees itself from all normative ties and thereby becomes absolute.” Hence, according to Schmitt, it was the “royal road” to dictatorship. As he explained in Political Theology:

    The true significance of counterrevolutionary philosophers of state [such as Maistre, de Bonald, and Donoso Cortés] lies in the consistency with which they decide. They heightened the moment of the decision to such an extent that the notion of legitimacy . . . was finally dissolved. As soon as Donoso Cortés realized that the period of monarchy had come to an end because there no longer were kings . . . he brought his decisionism to its logical conclusion: he demanded a political dictatorship. . . Donoso Cortés was convinced that the final battle had arrived. In the face of radical evil, the only solution is dictatorship.

    Donoso Cortés’ epiphany concerning the political-theological significance of dictatorship anticipated the Grand Inquisitor episode of The Brothers Karamazov. The important parallels between Schmitt’s views on dictatorship and Dostoevsky’s allegorical treatment of it were not lost on the renegade Jewish theologian Jacob Taubes. Following World War II, Taubes wrestled profoundly with Schmitt, whom he called the “apocalyptic prophet of the counterrevolution,” and with whom he had an interesting correspondence. Taubes’ reflections concerning the totemic significance that Schmitt attributed to Dostoevsky’s parable about the political consequences of human sinfulness are worth citing:

    I had quickly come to see Carl Schmitt as an incarnation of Dostoevsky’s Grand Inquisitor. During a stormy conversation at Plettenberg in 1980, Schmitt told me that anyone who failed to see that the Grand Inquisitor was right about the sentimentality of Jesuitical piety had grasped neither what a Church was for, nor what Dostoevsky—contrary to his own conviction—had “really conveyed, compelled by the sheer force of the way in which he posed the problem.” I always read Carl Schmitt with interest, often captivated by his intellectual brilliance and pithy style. But in every word I sensed something alien to me, the kind of fear and anxiety one has before a storm, an anxiety that lies concealed in the secularized messianic art of Marxism. Carl Schmitt seemed to me to be the Grand Inquisitor of all heretics.

    Support for Taubes’ intuition about Schmitt and the Grand Inquisitor as fraternal spirits is provided by a friend from Schmitt’s Munich days. In a letter in February 1922, Hermann Merk suggested to Schmitt, half-seriously, that, “if someone were to establish a Lehrstuhl at the University of Munich for the justification of the Spanish Inquisition, you would be the ideal person to occupy it, and I would be your most devoted student!”

    Schmitt’s glorification of dictatorship as a sovereign decision that “terminates any further discussion” resurfaced in his landmark debate of 1931 over who is the “Guardian of the Constitution” with Hans Kelsen, the eminent jurist and legal philosopher who was forced to leave Germany two years later because he was a Jew. Schmitt’s numerous champions have portrayed his defense of executive sovereignty as a last-ditch attempt to safeguard the Weimar Republic against the encroachments of political extremism, both left and right. They have neglected to consider the political-theological underpinnings of Schmitt’s worldview. In the colloquy with Schmitt, Kelsen, a vociferous champion of Rechtsstaatlichkeit (rule of law), advocated strengthening the federal constitutional court as the final arbiter. Schmitt, conversely, basing himself on Article 48, the Weimar Constitution’s notorious emergency powers proviso, argued in favor of a “sovereign” presidential dictatorship. In light of Schmitt’s strong commitment to the paradigm of political theology, it is difficult to avoid the conclusion that, in the debate with Kelsen, he favored “saving” democracy by destroying its institutional and normative guarantees. As Jürgen Habermas has aptly commented, “Anyone who would want to replace a constitutional court by appointing the head of the executive branch as the ‘Guardian of the Constitution’ — as Carl Schmitt wanted to do in his day with the German president — twists the meaning of the separation of powers in the constitutional state into its very opposite.”

    A year later, in July 1932, Schmitt played a key role in the infamous Preussenschlag controversy, Chancellor Franz von Papen’s constitutional coup against Prussia’s Social Democratic government. Schmitt vigorously argued the case on behalf of the Reich before the federal court in Leipzig. As chancellor, von Papen had contributed significantly to Prussia’s civic disarray by lifting the federal ban against the SA, in an ill-conceived attempt to curry favor with the Nazis. In January 1933, von Papen was named as Hitler’s vice-chancellor. In April, he appointed Schmitt to draft legislation that merged the Länder with the federal government in Berlin. By eliminating the last vestiges of provincial legal autonomy, the Gleichschaltung (synchronization) measures promulgated by Schmitt effectively sounded the death knell of the Weimar Republic. They marked a point of no return on the way to the Nazis’ consolidation of totalitarian rule.

    In light of Schmitt’s concerted efforts to undermine the Weimar Republic’s constitutional stability, as well as the significant role that he played in providing the nascent Hitler-Staat with a veneer of juridical legitimacy, it is not surprising that after the war he was known as the “gravedigger of the Weimar Republic.” The pivotal role that Schmitt played in contributing to the Weimar Republic’s demise cannot be understood apart from his underlying commitment to the counterrevolutionary political theology of Maistre, de Bonald, and Donoso Cortés. In keeping with their visceral aversion to the heritage of “1789,” Schmitt viewed political liberalism’s fitful ascent during the nineteenth century with similar contempt. Following the revolutions of 1848, Donoso Cortés — an intrepid defender of monarchism and an ideological precursor of the “clerico-fascism” of Franco and Salazar — condemned the heretical strivings of a new generation of political radicals as “Satanism” pure and simple.

    True to this counterrevolutionary lineage, Schmitt harbored a lifelong ideological animus against parliamentarism and the rule of law that was motivated by a similar set of political-theological concerns. Schmitt, too, displayed a visceral aversion to the precepts of modern secularism and its political corollaries: humanism, liberalism, constitutionalism, and social democracy. Consequently, following the Nazi seizure of power, Schmitt had no compunction about glorifying the Hitler-Diktatur as a “Katechon,” a “restrainer” or “bulwark” who staves off the advent of the Antichrist, whose contemporary “agents” were the godless and heretical representatives of the political left: liberals, socialists, Bolshevists, communists, anarchists, and of course Jews.

     

              Schmitt’s glorification of the “national myth” comes in a chapter of The Crisis of Parliamentary Democracy devoted to “Irrationalist Theories of the Direct Use of Force.” His treatment of this theme was nothing if not timely. In October 1917, the Bolsheviks, led by Lenin, overthrew Alexander Kerensky’s Provisional Government and set the stage for seventy-four years of murderous dictatorial rule. The events in Russia had a significant ripple effect. In 1919, Bolshevik-inspired “council republics” were proclaimed in Bavaria and Hungary. Within months, however, both regimes were ruthlessly suppressed by counterrevolutionary militias that reveled in profligate acts of “White Terror.”

    Among paramilitary veterans groups, such as the German Freikorps and the Italian squadre d’azione, violence was elevated to the level of a secular religion. In Central and Eastern Europe, right-wing forces frequently targeted Jews, whom they associated with the “Bolshevik menace,” notwithstanding the fact that the vast majority of Jews were steadfastly opposed to communism. In Germany, antisemitism was the ideological catalyst behind the assassination of prominent Jewish politicians such as the Bavarian Prime Minister Kurt Eisner in 1919 and Foreign Minister Walther Rathenau in 1922. In Russia and Eastern Europe, heightened antisemitism incited indiscriminate and bloody pogroms. During the Russian Civil War, from 1918 to 1921, counterrevolutionary armies in Ukraine murdered an estimated thirty thousand Jews.

    During the war Schmitt was stationed in Munich, where he worked in the intelligence services of the German General Staff. His primary assignment was to monitor contacts between left-wing politicians and pacifists in neighboring Switzerland. The White Terror in Munich — much of which Schmitt witnessed first-hand — was especially bloody. Over six hundred people lost their lives; numerous sympathizers of the “council republics” were summarily executed after the hostilities had ceased. Similarly, in Hungary, when Béla Kun’s Soviet Republic imploded in August 1919, 1,500 persons were killed, over three times the number of those who perished at the hands of the “Reds.” 

    The political tumult that rocked Germany following the left-wing revolution in November 1918, when workers’ and soldiers’ councils proliferated in the wake of the Kaiserreich’s collapse, might accurately be described as a permanent “state of emergency.” Both the war years — when civilian rule was de facto suspended in favor of the Ludendorff-Hindenburg dictatorship — and the prolongation of martial law during the postwar period conditioned Schmitt to accept the Ausnahmezustand, or state of emergency, as the new normal. It became one of his contributions to the vocabulary of modern political philosophy. It reinforced his commitment to authoritarian rule as well as his innate mistrust of civilian interference in politics. Schmitt’s inaugural lecture at the University of Strasbourg, in 1916, had examined the constitutional (staatsrechtlich) parameters of “Dictatorship and State of Siege.” The superiority of dictatorship over “constitutionalism” and “legalism” — both of which hampered the political sovereign’s ability to act forcefully and decisively in a state of emergency — became the defining theme of Schmitt’s work. It was not by chance, therefore, that in 1921 Schmitt selected Dictatorship as the theme, and the stark title, of one of his first major scholarly works.

    The apotheosis of political violence that accompanied the Great War and the spate of civil wars that followed conditioned Schmitt’s famous reconceptualization of politics in The Concept of the Political, in 1927, as the capacity to distinguish “friends” from “enemies.” That was it: the essence, indeed the entirety, of politics. By grounding sovereignty in war as the ultima ratio of politics, Schmitt sought to oppose the growing consensus in favor of international cooperation that followed the League of Nations’ founding in 1919. Following the precedent set in that year by Spengler’s Prussianism and Socialism, Schmitt furnished an urgent brief in support of the values of Prussian militarism. “The concepts of friend, enemy, and struggle [Kampf],” Schmitt insisted, “receive their real meaning insofar as they relate to and preserve the real possibility of physical annihilation. War follows from enmity, [from] the existential negation of another being.” “The political enemy,” he continued, “is the other, the alien, and it suffices that in his essence he is something existentially other and alien in an especially intensive sense . . . War, the readiness for death of fighting men, the physical annihilation of other men who stand on the side of the enemy, all that has no normative, only an existential meaning.” Those must be some of the most chilling words written in modernity. Schmitt’s account of politics wished to replace a rational world of norms and rules with a pre-rational order of visceral ruthlessness in which tolerance was inimical to survival and war was eternal.

    Another one of Schmitt’s main goals in The Concept of the Political was to perpetuate the bellicist ethos of the Frontgeneration. It was an objective that was shared by other conservative revolutionary intellectuals: for example, Ernst Jünger, a conservative and a remarkable writer, whose fifty-year correspondence with Schmitt began in 1930 and ended in 1983. As Habermas has noted, “Schmitt was fascinated by the First World War’s Storms of Steel, to use the title of Ernst Jünger’s war diary…A people welded together in a battle for life and death asserts its uniqueness against both external enemies and traitors within its own ranks.” At one point Schmitt, invoking a metaphor taken from marksmanship, proclaimed that “the zenith of Great Politics is the moment when the enemy comes clearly into view as the enemy.” In The Concept of the Political, Schmitt also sought to combat the spirit of anti-militarism and international comity that, in response to the unprecedented carnage of World War I, had encouraged the expansion of international law in order to ensure a peaceful resolution of regional disputes — a movement that culminated in 1928 in the Kellogg-Briand Pact, which quixotically sought to outlaw war as an instrument of national policy.

    The Social Darwinist undercurrent of The Concept of the Political — Schmitt’s insinuation that preparation for war is the raison d’être of “the political” — anticipated his controversial Grossraum doctrine of the early 1940s, which brazenly redefined “natural right” as the “right of the strongest.” Although Schmitt’s champions have sought to portray him as nothing more than a political realist in the tradition of Machiavelli and Hobbes, Schmitt’s “existential” glorification of “war” as the “readiness for death of fighting men, the physical annihilation of men who stand on the side of the enemy … [hence] the existential negation of another being” is significantly at odds with that tradition. After all, the point of Hobbes’ Leviathan was to transcend the war of all against all by means of a civil compact, not to celebrate and expand it.

    The ideological and political turmoil that convulsed Europe following World War I left Schmitt with a permanent fear of political instability. It also inculcated in Schmitt a hypertrophic and abiding fear of “Jewish Bolshevism.” As Paul Hanebrink observes in The Myth of Judeo-Bolshevism, “From the Vatican to Paris salons to paramilitary barracks in the south of Hungary, the history of the Munich Republic of Councils seemed proof of a Jewish plot to overthrow civilization and impose foreign rule on the nations of Europe.” Schmitt’s diaries from the 1910s are suffused with antisemitic invective. They betray a preoccupation with Jews that borders on the clinical. His Judeophobia was especially acute in the case of the assimilated Jews whom he encountered regularly during his student years and his career as a university professor. According to Schmitt, the major problem with assimilated Jews was that they made it nearly impossible to establish a clear divide between “friends” and “enemies.”

    In a diary entry on October 13, 1914, Schmitt spoke about his “Jewish complex,” the confusing amalgam of fascination and revulsion that he felt toward Jews. Although German Jews superficially resembled “normal Germans,” Schmitt held that, on a more profound level, the differences that separated these two peoples were vast. Ultimately, Schmitt’s Judeophobia — which intensified during the “Judeo-Bolshevist” hysteria that coincided with the suppression of the Bavarian Räterepublik in April 1919 — metamorphosed into one of the defining features of his work. Schmitt’s lifelong animus against political liberalism, which culminated in his confrontation with Kelsen’s “normativism,” was inseparable from his fears concerning the “disintegrative” and “corrosive” character of Jewish influence. His conservative revolutionary allies excoriated the Weimar Republic as a Judenrepublik; it was, they claimed, undeutsch. In The Crisis of Parliamentary Democracy, Schmitt asserted that Artgleichheit, or “racial sameness,” was one of the indispensable hallmarks of the “leader-democracy” (Führerdemokratie) that he envisioned as parliamentary democracy’s successor.

    As Schmitt’s diaries amply attest, he viewed the Jews as the Drahtzieher, or “string pullers,” who were secretly orchestrating these fateful developments from behind the scenes. Already in the 1920s, Schmitt’s sweeping critique of “political liberalism” and “total mechanization” flirted with the idea of a “Jewish world conspiracy” — a notion that was, among conservative revolutionary intellectuals, a truism. Schmitt’s indictment of modernity as an “age of neutralizations and depoliticizations” overlapped with the ascendancy of what the historian Shulamit Volkov has called “antisemitism as a cultural code.” In the discourse of Central European Zivilisationskritik, the agenda of antisemitism was often advanced under the semantic camouflage of a critique of “modernity,” “capitalism,” “technology,” and “liberalism.” Antisemites alleged that in all of these domains Jews played a deleterious and outsized role. A watershed in this line of attack was Werner Sombart’s well-known treatise The Jews and Modern Capitalism, which appeared in 1911, in which he highlighted the affinities between the Jews as a “nomadic desert people” and the “extraterritoriality” of contemporary international finance, and attributed the Jews’ economic success to their “rootlessness,” which, he claimed, engendered a mentality that was averse to firm conviction and conducive to abstract calculation.

    After 1933, when the political situation became more propitious, Schmitt was free to propound his antisemitic views unabashedly and without fear of reprisal. He wasted no time. The semantic violence that was implicit in Schmitt’s disdain for Kelsen’s “legal positivism” now left nothing to the imagination. In 1934, in an essay called “National Socialist Legal Thinking,” Schmitt explicitly celebrated the Nazi legal revolution as a victory of the German Volk over the tyranny of Jewish “legalism.” According to Schmitt, the Volk’s triumph was abetted by its return to “the natural forms of order that emerge from Blut und Boden [blood and soil].” Schmitt added that “normativism’s” predominance under Weimar was due to the “influx of the alien Jewish Volk.” A corrosive infatuation with “legalism,” claimed Schmitt, was “one of the peculiarities of the Jewish people, who for thousands of years have lived not as a state on a piece of land, but solely in the law and norm, which in the true sense of the word are ‘existentially normativistic.’” With Hitler in power, the antisemitic animus that was implicit in Schmitt’s critique of parliamentarism in the 1920s emerged in all its hatefulness.

    II

    Not only was Schmitt enamored of political myths. He was also an adept self-mythologizer. After the war, this talent proved invaluable in the course of his struggle for rehabilitation.

    During the initial years of Nazi rule, Schmitt’s influence was omnipresent. In the words of his former student Waldemar Gurian, who fled Germany and became an important scholar of totalitarianism and a Catholic political theorist in the United States, Schmitt was the de facto “Crown Jurist of the Third Reich.” Following the Nazi seizure of power, Schmitt accumulated, with astonishing speed, an impressive array of offices and titles. In July 1933, Hermann Goering appointed Schmitt to the Prussian State Council. Schmitt was also named to the presidium of Hans Frank’s Academy of German Law. In 1934, Schmitt accepted a prestigious appointment to the faculty of law at the University of Berlin. He served on the executive committee of the Association of National Socialist German Jurists and was editor-in-chief of the Association’s journal, the Deutsche Juristen-Zeitung.

    In July 1934, Schmitt furnished a legal brief justifying Hitler’s bloody purge of the SA on June 30, 1934 — the Night of the Long Knives. It was called “The Führer Protects the Law,” which became a famous slogan. Schmitt’s opinion was a resounding endorsement of the Führerprinzip as the wellspring of legitimacy. It is difficult to construe Schmitt’s article other than as a writ for unrestrained autocratic lawlessness.

    Already in Political Theology, twelve years earlier, Schmitt had asserted that the sovereign must operate from a position outside of the constitution, at a permanent remove from the constraints of “legality.” One of the reasons that, after 1945, Schmitt found it difficult to shake the “gravedigger of the Weimar Republic” epithet was the widely shared view that, in his capacity as “Crown Jurist of the Third Reich,” Schmitt had merely transposed his earlier glorification of the “state of emergency” to the post-1933 circumstances.

    Schmitt’s talent for self-mythologization became evident with the publication of Ex Captivitate Salus, a memoir, or more precisely an apologia pro vita sua, in 1950. Invoking Herman Melville’s novella Benito Cereno — the tale of a ship captain who, in the aftermath of a mutiny, must do the rebellious crew’s dastardly bidding — Schmitt confabulated the legend that his cooperation with the Nazis merely reflected a desperate struggle for survival. He insisted that his support for the regime had been, from start to finish, involuntary: the actions of someone who, to all intents and purposes, had a gun pointed at his head. Schmitt’s self-exculpatory claims are factually unsustainable. But the facts have not dissuaded a devout coterie of loyalists from accepting the Benito Cereno conceit. In the English-speaking world, the cult of Carl Schmitt was first orchestrated by a clique of postmodern Salon-Bolshevists, and more recently by a small but loud movement of “post-liberals.”

    The legend of Schmitt’s innocence derives from two articles that were published in the SS weekly Das Schwarze Korps in December 1936, which questioned Schmitt’s National Socialist bona fides. The articles portrayed Schmitt as an opportunist who had belatedly joined the party in order to advance his career and to camouflage his pro-Catholic loyalties. Schmitt’s detractors — one of whom, Reinhard Höhn, was a colleague of Schmitt’s at the University of Berlin — were political rivals who resented his meteoric rise to prominence in Nazi legal circles. Moreover, since Schmitt was widely regarded as a protégé of Hans Frank — the politician who eventually headed the Nazi occupation of Poland, oversaw four extermination camps, and was convicted for crimes against humanity at Nuremberg and executed — his adversaries hoped that, by attacking Schmitt, they could also interfere with Frank’s political ambitions. Following the attacks, Schmitt was stripped of his party offices. Thanks to Göring’s patronage, he was permitted to keep his University of Berlin professorship and his position as Prussian state counselor. When viewed through the lens of the unending intraparty squabbles that were endemic to Nazi rule, however, the temporary setback that Schmitt experienced was hardly proof of heterodoxy. Moreover, following his rehabilitation by the SS, Schmitt was permitted to travel and lecture freely. Later Schmitt’s opponents were themselves ignominiously sacked.

    Ever resourceful, with one eye trained on the impending outbreak of war, Schmitt reinvented himself as a specialist in geopolitics. Schmitt’s doctrine of Grossraum relied on Social Darwinist arguments concerning the “natural right” of so-called “large space nations” (Grossraum Völker) to subsume “small space nations” (Kleinraum Völker), thereby making a mockery of existing international law. In essence, Schmitt’s geopolitical thought underwrote the Third Reich’s draconian plans for Eastern European hegemony, the Drang nach Osten. Schmitt outlined his geopolitical theories in “Raum and Grossraum in International Law,” a lecture that he presented in Kiel on April 1, 1939, a fortnight after the German invasion of Czechoslovakia. In his lecture, Schmitt invoked the precedent of the Monroe Doctrine to justify the supremacy of the Grossdeutsches Reich or “Greater Germany” in Central Europe. (Hitler was so enamored of Schmitt’s Monroe Doctrine analogy that he immediately included it in a speech in the Reichstag, warning President Roosevelt to refrain from intervention in the event of a future European war, which was in fact only four months away.) Schmitt’s arguments summarily disqualified existing international law and traditional claims to state sovereignty on the part of so-called “small space nations.” As the refugee scholar Franz Neumann observed in Behemoth, one of the first great studies of National Socialism, Schmitt’s Grossraum doctrine underwrote Hitler’s “Grossdeutsches Reich [as] the creator of its own international law for its own Raum or space.” Neumann aptly denounced Schmitt’s concept as little more than pseudo-scientific cover for the Third Reich’s geopolitical ambitions: “It offers a fine illustration of the perversion of genuine scientific considerations in the interest of National Socialist imperialism.”

    The fact that Schmitt’s doctrine of Grossraum seemed to lack the customary obeisances to Nazi race doctrine — one of the main arguments brandished by Schmitt’s defenders to downplay his contribution to Nazi foreign policy doctrine — is immaterial, since this omission also had a tactical side: it imparted a measure of credibility to Schmitt’s theories in international law circles that they would have otherwise lacked.

    Nor was the idiolect of Nazi race thinking entirely absent from Schmitt’s arguments. In “Grossraum and International Law,” Schmitt’s disparagement of Jews as an “artfremde Volksgruppe” — a “racially alien people” — was tantamount to a death warrant, since, according to the tenets of Grossraum, “racially alien” groups were devoid of legal standing. Nazi Grossraum doctrine — Schmitt’s included — was predicated on the twofold imperatives of Raum and Boden, “space” and “soil.” Since Jews were deemed a “rootless” or bodenloses people, they were denied the legal protections that accrued to “rooted” or bodenständige Völker.

    With the publication of Schmitt’s Grossraum essays, and the adoption of his Monroe Doctrine analogy by the Führer, Schmitt’s “comeback” was virtually assured. As a reporter for The Times of London remarked about Schmitt’s address in Kiel in April 1939: “Hitherto, no German statesman has given a precise definition of Hitler’s aims in Eastern Europe. But perhaps a recent statement by Prof. Carl Schmitt, a Nazi expert on constitutional law, may be taken as a trustworthy guide.” Schmitt’s Grossraum concept was rapidly embraced by a cadre of high-ranking SS officers attached to the Reich Security Main Office (RSHA) in Berlin. Infusing Schmitt’s approach with a more explicit völkisch-ideological orientation, they proceeded to invoke Grossraum as a pseudo-legal justification for a Nazi-dominated Europe, for German continental hegemony — a strategy that was predicated on the idea of German racial supremacy, in keeping with Nazism’s understanding of Deutschtum, or Germanness, as Herrenrasse, or the master race.

     

              Schmitt’s postwar apologetics suffered a posthumous blow in 2011, when his diaries from the early 1930s were published. They meticulously document Schmitt’s reactions to National Socialism’s political ascent. In an entry in February 1932, for example, Schmitt avowed that, in the upcoming presidential elections, he planned on voting for Hitler. On January 30, 1933, the day of the Nazi seizure of power, Schmitt remarked: “At the Café Kutschera [in Berlin], where I learned that Hitler had become chancellor and Papen vice-chancellor. Excited, happy, satisfied.” The reasons for Schmitt’s “excitement” at the café are not hard to fathom. He realized that Hitler’s rise to power guaranteed the demise of the Weimar “system,” an entity that Schmitt viewed with contempt and whose downfall he had sought to hasten. Whatever reservations Schmitt may have harbored concerning the advent of Nazi rule prior to January 30, 1933 dissipated rather quickly.

    This conclusion is supported by Schmitt’s reaction to the Reichstag’s approval of the Enabling Act of March 23, which allowed Hitler to legislate by decree. In his comments, which were published in the Deutsche Juristen-Zeitung, not only did Schmitt hail the Act’s passage, he went so far as to attribute constitutional status to the emergency decrees that had been promulgated by the nascent Hitler-state. Thereby, he added, these decrees superseded the legal provisions of the Weimar Republic, whose constitution technically remained in effect. In a follow-up article that was published on May 12 in the Westdeutscher Beobachter, called “The Good Law of the German Revolution,” Schmitt reaffirmed, unequivocally and emphatically, that “the good law of the German Revolution is not dependent on respecting the legality of the Weimar ‘System’ and its constitution.” Gone was the distinction that he had established in his book Dictatorship between “commissarial” (temporary) and “sovereign” (permanent) dictatorship. If ever there was a “sovereign” dictatorship, it was Hitler’s.

    The numerous political and legal commentaries that Schmitt penned in support of the Nazi dictatorship — many of which appeared in official Nazi publications such as the Völkischer Beobachter and the Westdeutscher Beobachter — are extremely revealing with respect to Schmitt’s attitudes at the time. They demonstrate that Schmitt’s accommodation to Nazi rule was speedy, seamless, and unstinting. It was as though, with Hitler’s Machtantritt, a dam had burst, and the new political circumstances allowed Schmitt to freely express political views that during Weimar he had been forced to suppress. The republication last year of Schmitt’s Nazi writings helped to resolve a major controversy that beset Schmitt scholarship for decades: whether January 30, 1933 marked a break with or a continuation of Schmitt’s previous political self-understanding.

    One important measure of the continuities in Schmitt’s worldview is the persistence of race thinking. Prior to the publication of Schmitt’s diaries (the most recent installment, Tagebücher 1924–1929, appeared in 2018), Schmitt’s champions often appealed for a “pluralistic” and “differentiated” understanding of his legacy, an interpretive tack that dissuaded scholars from focusing too much on Schmitt’s antisemitism. Yet as evidence of Schmitt’s Judaeophobia began to mount, such appeals rapidly devolved into repression and denial. The publication of Schmitt’s diaries has demonstrated that the “Jewish complex” to which Schmitt alluded in 1914 was merely the tip of the iceberg, the harbinger of a fevered anti-Judaism that crested during the Nazi period.

    In his diary in November 1931, Schmitt excoriated the left-wing Romanian poet and historian Valeriu Marcu as a “horrible Jew, of the dumb and superficial variety.” A month later, on Christmas Eve, Schmitt recounted having sung Christmas songs in his Berlin apartment and being overwhelmed by the “shame and scandal of living in a Judenstadt [Jew-city], insulted and shamed by Jews.” Schmitt’s wrath was often directed against assimilated Jews. In his eyes, by trying to pass themselves off as authentically German, they were doubly guilty. As he wrote on March 19, 1933: “Hopeful because of the Nazis, rage at the Jew [Erich] Kaufmann and the imposture of these assimilated Jews.”

    Kaufmann was one of Schmitt’s colleagues on the law faculty of the University of Berlin. His name surfaced in a letter of denunciation that Schmitt wrote to the Minister of Education on December 14, 1934. In his missive, Schmitt claimed that Kaufmann’s presence was a “slap in the face [to the] National Socialist students.” It was not Kaufmann’s pedagogical abilities, continued Schmitt, that were in question. Instead, it was Kaufmann’s status as an assimilated Jew that mattered; or, as Schmitt put it, Kaufmann’s deleterious “influence on German spiritual life and German youth.” As Schmitt urged in conclusion: “Especially today, when the German Volk and German students are being educated through a process of National Socialist schooling, this type of Jewish infiltration and influence must be rigorously avoided.” Kaufmann was promptly dismissed.

    After the war, Schmitt remained unrepentant and defiant. His journals from the years 1947-1951 are suffused with crude antisemitism. Schmitt derogated Jews as “Isra-Elites,” arguing that they were the only “elites” to have survived the war. And in a classic case of “Holocaust inversion” — transforming victims into perpetrators and perpetrators into victims — he claimed that the Jews had been World War II’s real victors. In September 1945, Schmitt was arrested by the Allies in a “general sweep” and interned as a possible “security threat.” Following his release, Schmitt was re-arrested in March 1947. He was transferred to Nuremberg, where he was interrogated by the American prosecutor Robert Kempner as a “potential defendant.” The indictment centered on Schmitt’s Grossraum articles, which Allied prosecutors regarded as a blueprint for Nazi Germany’s “war of annihilation” in the East. Schmitt avoided prosecution — a direct link between his theories and Nazi policy was not legally demonstrable — and was released two months later.

    The experience left Schmitt embittered. He regarded himself and his fellow Germans as the victims of the Allies’ “discriminatory concept of war” and their indefensible “moralization of punishment.” Schmitt’s objections were consistent with his earlier fulminations against “just war” doctrine and the Versailles Treaty’s “war guilt” clause. In 1958, in his foreword to the Spanish edition of his memoir, Schmitt lamented that the Allied legal proceedings had resulted in the unjustifiable “criminalization of an entire people.” He continued: “As Germany lay on the ground, defeated… the Russians and the Americans undertook mass internments and defamed entire categories of the German population. The Americans termed their method ‘automatic arrest.’ This means that thousands and hundreds of thousands of members of certain demographic groups — for example all high-level civil servants — were summarily stripped of their rights and taken to a camp.” The un-self-awareness — or sheer mendacity — of such passages is breathtaking.

    Schmitt’s exclusive focus on German suffering was characteristic of the mood of “repression” and “silence” that prevailed in postwar Germany. Schmitt excoriated the Nuremberg Tribunal as a violation of the time-honored legal maxim nulla poena sine lege — one cannot be punished for doing something that is not forbidden by law. By the same token, he gave little thought to the question of what form of punishment would be appropriate for the unprecedented criminality and mass atrocities that had been perpetrated by the Third Reich and its functionaries. Nor did Schmitt display a modicum of sympathy for the victims of Nazi Bevölkerungspolitik: the six million Jews who perished in Nazi death camps; the three million Soviet POWs who died in German captivity; the twelve million slave laborers who were dragooned to toil in German armaments factories; and so forth. Instead, he callously rationalized these misdeeds as unavoidable “casualties of war.” In Schmitt’s account, they were victims without perpetrators. Schmitt also liked to attribute the war’s tragic outcome to the “all-conquering progress of modern technology,” whose “dislocations” he proceeded to enumerate, mocking the liberal idea of “progress” along the way: “’Progress’ in the appropriation of the human individual, ‘progress’ in mass criminalization and mass automation. A giant apparatus indiscriminately swallows up hundreds of thousands of people. The old Leviathan appears almost cozy by comparison.”

    Following the war — and before Schmitt’s own apologetics had time to take root — some observers recognized the significant contribution that Schmitt had made to consolidating Nazi rule. In Deutsche Daseinsverfehlung in 1946, Ernst Niekisch charged that Schmitt’s “friend-enemy” distinction had furnished the “algorithm of bestiality” that was ruthlessly put into practice by the SA and SS. Similarly, Rudolf Smend, a former colleague at the law faculty of the University of Berlin, denounced Schmitt as a legal “pioneer of the National Socialist system of violence.” But Schmitt himself systematically eschewed questions of responsibility, personal as well as collective. Like the majority of his countrymen, he demonstrated little enthusiasm for probing the historical origins of the “German catastrophe.”

    From a legal and constitutional standpoint, the Federal Republic of Germany — whose Grundgesetz, or Basic Law, was codified in 1949 — was Schmitt’s worst nightmare. In stark contrast to Weimar, the Bonn Republic was intentionally conceived as a parliamentary system. Its architects expressly sought to forestall the temptations of executive overreach that, under von Hindenburg’s presidency between 1925 and 1934, had plagued German democracy, thereby paving the way for Hitler. The entire project was anathema to Schmitt. He disparaged defenders of the Grundgesetz as “Grundgesetzler” (human rights-lings) and mocked Grundrechte or “basic freedoms” as the “inalienable rights of donkeys.” For Schmitt, the Federal Republic represented a double abnegation of politics, insofar as it elevated the “anti-political” institutions of “parliament” and “judicial review” above the prerogatives of state sovereignty. Schmitt’s own bête noire was the federal constitutional court, whose seat was in Karlsruhe. Schmitt composed a sophomoric satirical poem, which he circulated among friends, comparing the justices to lemurs. (“In Karlsruhe there grows a rubber tree/Lemurs scurry around/They append the ‘value’ of ‘freedom’ to the rubber tree.”) Schmitt excoriated the Bonn Republic as a “Justizstaat” — implying that it was not a “real state” — which elevated abstract “values” such as “human dignity” over “authority.” One of Schmitt’s final works was called The Tyranny of Values.

    III

    Following Schmitt’s death in 1985, the German right leaped into action to popularize Schmitt’s critique of German democracy. As the Junge Freiheit, the flagship publication of the Neue Rechte (New Right), put it: “Whoever sleeps with the Grundgesetz under his pillow has no need of Carl Schmitt. Conversely, whoever recognizes that the Grundgesetz is a prison in which the German res publica has been interned reaches for his work.” During the European refugee crisis of 2015-2016, the right-wing ideologue Götz Kubitschek, co-founder of the Institut für Staatspolitik — a conservative revolutionary think tank allied with the far-right political party Alternative for Germany (AfD) — cited Schmitt’s “state of exception” as an argument for implementing emergency measures to rebuff the influx of Syrian immigrants. Alluding also to Schmitt’s “friend-enemy” dichotomy, Kubitschek declared:

    I am convinced that in a “state of exception” … as the threats to one’s own group along ethnic, cultural, and civic lines become clear, so does the question of who ‘We are’ and who ‘We are not’… In other words, when people in this land have had enough, the question of [political] loyalty is bound to arise, as it does already when it is a question of customs, values, and the legal statutes that Islam places on the conduct of everyday life.

    The New Right regarded the refugee crisis, which rocked Chancellor Angela Merkel’s governing coalition to its foundations, as a classical Schmittian “state of emergency” — as a situation that, like the Algeria crisis in France in 1958 that paved the way for General Charles de Gaulle’s coup, portended the Bundesrepublik’s abolition and its replacement by an ethno-populist dictatorship. That Kubitschek’s advocacy of an executive decree banning asylum-seekers violated the Grundgesetz, as well as the tenets of European Union immigration law, seemed a matter of little concern.

    Schmitt’s posthumous influence on German political culture has been enormous. During the 1950s, Schmitt’s site of exile in Plettenberg became a favored pilgrimage destination among radical conservative jurists who were disaffected with the Federal Republic’s Verwestlichung (turn to the West) during Konrad Adenauer’s chancellorship from 1949 to 1963. Among Schmitt’s numerous acolytes was the jurist and future member of the Karlsruhe Constitutional Court, Ernst-Wolfgang Böckenförde, who applied Schmittian maxims in rulings that involved purported “social welfare” encroachments on state autonomy. And following German reunification in 1990, Schmitt’s intellectual currency skyrocketed. German conservatives asserted that the time had come to replace the “de-politicizations” and “neutralizations” of the liberal Bonn Republic with the prerogatives of a “self-confident nation” (selbstbewusste Nation), in keeping with the Bismarck-era traditions of étatisme and Machtpolitik. Who better to guide the Berlin Republic’s transformation in accordance with these precepts than Carl Schmitt?

    During the 1990s, a contingent of radical conservative intellectuals undertook a public campaign to rehabilitate Schmitt, along with the reputations of like-minded conservative revolutionary thinkers such as Ernst Jünger and Martin Heidegger. Prior to reunification, it had been difficult for Schmitt to escape the taint of his earlier career as the Third Reich’s “Crown Jurist.” Following the collapse of the Berlin Wall, however, a chorus of national conservatives argued that, after forty years of democratic stability, the time had come to lift the taboo. Schmitt was resurrected as a deutscher Klassiker, a “German classic.” Notwithstanding the objections that were raised by a handful of intellectuals, his rehabilitation seemed complete.

    Schmitt’s rehabilitation in Germany was merely the prelude to a multifaceted international revival of his work. Already during the 1990s, those who were disillusioned with neoliberal triumphalism and the “end of history” ransacked Schmitt’s corpus in search of political alternatives. And the ranks of the disillusioned were not confined to the right. Left-wing critics of TINA — the acronym for “there is no alternative,” derived from Herbert Spencer and popularized by Margaret Thatcher, to indicate an acceptance of the liberal order — thought that they had found the support they needed in Schmitt’s claim in The Crisis of Parliamentary Democracy that liberalism and democracy were mutually exclusive political forms. As Alan Wolfe noted in The Future of Liberalism, “To the extent that there is a revival of Schmitt’s ideas taking place in Europe and the United States, it is not because of what is happening on the right. It is because Schmitt has become something of a hero to the postmodern left.”

    Schmitt’s arguments about the endemic corruptions of Western liberalism became increasingly popular among former Marxists who, following the collapse of communism and the discrediting of Marx’s “metaphysics of class struggle,” sought out alternative paradigms of contestation among non-Marxist sources. In light of the fact that the proletariat, the putative “gravedigger of capitalism,” was now comfortably ensconced amid the mind-numbing blandishments of bourgeois consumerism, the prospects of realizing the utopia of a “classless society” seemed more distant than ever.

    Nominally, these self-styled “left Schmittians” embraced Schmitt’s no-holds-barred critique of liberalism in the name of “radical democracy.” Ultimately, however, their animus against the normative safeguards of liberalism proved so powerful and all-consuming that, much like Schmitt, they ended up countenancing brazenly authoritarian political solutions. In their haste to transcend the liberal democratic status quo, the left Schmittians were not averse to flirting with the temptations that Jacob Talmon long ago described as “totalitarian democracy.” They reprised an authoritarian political lineage that stretched from the Jacobin dictatorship of 1793-1794 to Lenin’s What is to Be Done? (1902) to the Chavismo that, since the late 1990s, has made state socialist autocracy a permanent feature of the Latin American political landscape.

    To restate Schmitt’s critique of liberal democracy in Rousseauian terms: whereas democracy strove to realize the “general will” or “universality,” liberalism, which was predicated on “interests,” was incapable of rising above “particularism,” or the mere “will of all,” which never attained a higher unanimity. Schmitt claimed that “parliamentarism,” as a sphere of “representation” in which “interests” reigned supreme, inherently subverted the universalist strivings of democracy qua popular sovereignty. Hence Schmitt’s conclusion that liberalism and democracy inherently operated at cross purposes. (This is one of Viktor Orbán’s favorite refrains.) On the basis of these criticisms, Schmitt cynically dismissed parliament as little more than a Schwatzbude or “gossip chamber.” Following the lead of Donoso Cortés, he disparaged the bourgeoisie as spineless and effete, a class that was prone to endless discussion but incapable of a sovereign decision. During the 1920s Schmitt’s political hopes centered on the prospects of a Führerdemokratie, or leader-democracy, a term that for Schmitt and Schmittians is not at all oxymoronic: a form of political authoritarianism that was shorn of pluralism and constitutional interferences, a political system that replaced the liberal idea of “representation” with the “identity” between Führer and Volk, “leader” and “people.” Recall that Schmitt held that political obligation was grounded in “faith” and “myth” as opposed to rational consent. The identity between “leader” and “people” would be reinforced by an emotional bond.

    For ex-Marxists, Schmitt’s critique of political liberalism possessed numerous advantages. Unlike Marxism, it was not tied to an outmoded Hegelian philosophy of history that naively culminated in the grand soir of socialism. Nor was it wedded to an equally anachronistic understanding of the proletariat as the “universal class”: a class that, as Marx had claimed, epitomized all of the injustices of bourgeois society, while being systematically deprived of its benefits. From an empirical standpoint, the “laboring society” of nineteenth-century industrialism on which Marx had predicated his “critique of political economy” had, to all intents and purposes, disappeared. The demise of the factory system meant that the ideas of “class” and “class struggle” had likewise forfeited their centrality. Instead, as sociologists never tired of pointing out, “social stratification” and “status differentiation” had replaced “class” as the interpretive keys to understanding modern society. It was not hard to see that, shorn of “class struggle,” Marx’s theory of revolution had become obsolete.

    The 1960s confirmed that the locus and the nature of political struggle had fundamentally shifted. Conflict was no longer confined to the shop floor or the workplace. Instead, the “new social movements” demonstrated that political contestation had been pluralized. Feminism, gay liberation, the civil rights movement, and environmentalism had exposed the analytical inadequacies of “class analysis.” The new sites of struggle centered on “post-material values” and cultural themes that transcended the economistic focus of traditional Marxism. Among post-Marxists, Gramsci’s notion of “hegemony” played a crucial role, insofar as it directly addressed the cultural dimension that Marx’s critique of political economy had neglected.

    For left Schmittians searching for new forms of contestation in order to combat the “Washington consensus,” Schmitt’s rejection of political liberalism seemed to offer possibilities of radical struggle that the parliamentary left had long abandoned. Hence, Schmitt’s left-wing disciples enthusiastically embraced his “friend-enemy” opposition for infusing radical politics with an ethos of permanent conflict. As Chantal Mouffe argued in The Challenge of Carl Schmitt in 1999, Schmitt’s “concept of the political” anticipated a new era of “political agonism,” in which the consensual politics of liberal-democratic parliamentarism was swept away by a rising tide of dissent and conflict. Mouffe explained:

    In spite of [Schmitt’s] moral flaws… ignoring his views would deprive us of many insights that can be used to rethink liberal democracy… Schmitt’s thought serves as a warning against the dangers of complacency that a triumphant liberalism entails. His conception of the political brings the crucial deficiencies of the dominant liberal approach to the fore. It should shatter the illusions of all those who believe that the blurring of frontiers between Left and Right, and the steady moralization of political discourse, constitute progress in the enlightened march of humanity toward a New World Order and a cosmopolitan democracy.

    By trivializing Schmitt’s rather spectacular failings as “moral flaws,” Mouffe conveniently sidestepped the interpretive question that has preoccupied Schmitt scholarship for decades: the extent to which Schmitt’s celebration of dictatorship and Führerdemokratie during the 1920s presaged his conversion to Hitlerism in 1933. Moreover, was it not Mouffe herself who “blurred the frontiers between Left and Right” by enlisting the support of a conservative revolutionary thinker like Schmitt for the ends of radical democracy? Finally, in an era marked by unprecedented political polarization — “blue states” versus “red states” and so on — and ever-expanding ideological divisions, should not the construction of a common political discourse take priority over a “political agonistics” that would merely widen existing antagonisms?

    Although Schmitt’s “concept of the political” may have liberated neo-Marxists from the straitjacket of historical materialism, it left them fully exposed to the dangers of Schmitt’s own extremely dubious political choices. In fact, by embracing Schmitt’s “decisionism,” as well as his inflexible “anti-normativism” — “the exception is more interesting than the norm,” Schmitt proclaimed; “the norm is destroyed in the exception” — Schmitt’s left-wing partisans opened themselves up to the excesses of “left fascism”: a quasi-aesthetic celebration of “struggle for struggle’s sake” and “conflict for conflict’s sake”; a glorification of endless war that blithely scorned institutional constraints and guardrails. Finally, in keeping with Schmitt’s decisionism, his left-wing disciples turned a blind eye to the content and the ends of struggle.

    A decisionistic refusal to specify the ends of struggle has also been one of the hallmarks of the Schmittianism of the Argentinian political theorist Ernesto Laclau (who was Mouffe’s partner). In On Populist Reason, which appeared in 2005, Laclau described the content of political struggle as an “empty signifier.” According to Laclau, the meaning of struggle would be provided by the populist “leader,” who is tasked with aggregating the conflicting demands of the vox populi in order to achieve a new “hegemonic unity.” Whereas Mouffe’s neo-Marxism still felt obligated to pay lip service to the formal trappings of liberal democracy, Laclau held that the rituals of “parliamentarism” must simply be abolished for the sake of realizing the ever-elusive “general will.” Laclau uncritically adopted Schmitt’s argument in The Crisis of Parliamentary Democracy that “representative democracy” must be replaced by a plebiscitarian “leader-democracy”; and the Schmittian derivation of Laclau’s position may help to explain his neo-Leninism, his view that only the “leader” or “party” can provide the “people” with a unified, revolutionary consciousness, thereby raising it from its “fallen” condition as an inchoate, disaggregated mass. By constructing the “real people” against the “enemies” who have betrayed it — enemies that Laclau defined as the “oligarchy” or “elites” — the leader, so to speak, “extracts” the (real) people from its oppressors.

    In The Crisis of Parliamentary Democracy, Schmitt claimed that genuine democracy was predicated on a series of “homologies” or “identities”: “the identity of governed and governing, sovereign and subject, the identity of the subject and object of state authority.” If, in Schmitt’s teaching, “democracy is not antithetical to dictatorship,” it was in part because dictatorship preserved the identity between the ruled and the ruler. According to Schmitt, dictatorship realized this identity through a process of mass “acclamation” as opposed to parliamentary “representation.” In sum: “Dictatorial and Caesaristic methods [embody] the direct expression of democratic substance and power.” Laclau endorsed Schmitt’s condemnation of representative democracy as a liberal subterfuge that sublimated and distorted popular will, thereby obstructing the identity between “leaders” and “people.” He also substituted Schmitt’s proto-fascist notion of “symbolic representation” — a leftover from Schmitt’s earlier political Catholicism — for the liberal democratic conception of “mandate representation.” Thus, in keeping with Schmitt’s framework, Laclau exalted the leader as the “symbolic representative” of the “general will.” The problem with the fascist glorification of leadership, for Laclau, was only that it was too extreme, though it is hard to know what extreme means for such a proponent of unity in tyranny.

    Laclau’s repudiation of liberal democratic “proceduralism” — whose foremost twentieth-century representatives have been Ronald Dworkin, John Rawls, and Jürgen Habermas — was an important component of the left Schmittians’ struggle against Enlightenment rationalism. This resolutely anti-Enlightenment disposition helps to explain the theoretical alliance between post-Marxists such as Mouffe and Laclau, on the one hand, and poststructuralists such as Derrida and Foucault, on the other. It also provides support for Habermas’ wise suspicion, voiced during the 1980s, that “postmodernity definitely presents itself as antimodernity.” “This statement,” Habermas continued, “describes an emotional current of our times that has penetrated all spheres of intellectual life.”

    The left-wing cult of Schmitt often displayed a self-marginalizing, sectarian quality, which explains its difficulties in gaining acceptance outside of the insular confines of academe. The same cannot be said about the reception of his work among neoconservative policy circles following the 9/11 terrorist attacks, when Schmitt’s pronouncements about the imperatives of emergency governance and the fecklessness of liberal democratic “legalism” assumed canonical status.

    Even civil libertarians, while disagreeing sharply with Schmitt’s conclusions, begrudgingly acknowledged his diagnostic prescience as well as the timeliness of his legal-juridical insights. “Mr. Schmitt Goes to Washington” was the shrewd title of Alan Wolfe’s discussion of the hypertrophy of executive authority under George W. Bush’s presidency. As the political theorist William E. Scheuerman conceded in The End of Law: Carl Schmitt in the Twenty-First Century, “Like no other political or legal thinker in the last century, [Schmitt] placed the problem of emergency government on the intellectual front burner, and he consistently did so as to unsettle those of us committed to liberal and democratic legal ideals. At the very least, his ideas about emergency rule call out for a response from those hoping to preserve the rule of law.” And in 2006, in an article on “Preserving Constitutional Norms in Times of Permanent Emergencies,” the legal theorist Sanford Levinson acknowledged that, in light of the Bush administration’s sovereign disregard for juridical accountability, America was experiencing a “Schmittian moment.” As Levinson put it, “The single legal philosopher who provides the best understanding of the legal theory of the Bush administration is Carl Schmitt, a brilliant German theorist of Weimar, who became, not all together coincidently, the leading apologist for Hitler’s takeover of what Schmitt viewed . . . as a hopelessly dysfunctional German polity.” Elsewhere he noted Schmitt’s status “as the jurisprudential guru of the post-9/11 world, a world in which the state of exception itself had become the new norm; in other words, as many analysts and observers assumed at the time, a permanent state of exception.”

    Schmitt’s philosophy was indeed a great gift to the advocates of what John Yoo, a legal scholar and deputy assistant attorney general in the Office of Legal Counsel at the Department of Justice, called “the Unitary Executive.” It was an idea that enjoyed a good deal of political support in the Bush years. “All law,” Schmitt wrote, “is ‘situation law.’ The sovereign creates and guarantees the situation as a whole in its totality. He has the monopoly on this ultimate decision.” The “sovereign” has carte blanche to respond to the changing situation as he sees fit, unimpeded by prior constitutional norms. Of course the West Wing Schmittians either misunderstood Schmitt or were using him dishonestly. Schmitt was not defending a constitutional status quo ante — as his champions continued to claim, despite mounting evidence to the contrary — but facilitating the transition to a “sovereign dictatorship,” in keeping with his glorification of the “Age of Absolutism” as the archetype of political excellence. What on earth was his ghost doing in the White House?

    Constitutionalists such as Levinson and Scheuerman reacted to the “Schmittian moment” in American governance with dismay and alarm, but not all observers were equally troubled. In 2006, in Terror in the Balance: Security, Liberty, and the Courts, Eric Posner and Adrian Vermeule claimed that Schmitt’s arguments in favor of emergency powers unconstrained by congressional oversight and judicial review were exactly what the war on terror demanded. Posner and Vermeule derided opponents of torture for their “self-absorbed moral preciosity.” They described their goal in Terror in the Balance as “extracting the marrow from Schmitt and then throwing away the bones” — a rather infelicitous choice of metaphor. They also sought to combat the moral outrage of conscience-stricken civil libertarians: for example, the seven hundred law professors who, in December 2001, published a petition criticizing the Bush administration’s plan to employ military tribunals to try the Guantanamo Bay detainees. By arbitrarily reclassifying the Guantanamo captives as “unlawful enemy combatants,” the Department of Justice sought to strip them of the legal protections they would be entitled to under the Geneva Conventions as prisoners of war.

    The political challenges that the United States faced following the September 11 attacks were undeniably exceptional. For a democratic polity committed to the rule of law, however, the key to addressing the exception lies in ensuring that the response is, from a constitutional standpoint, proportionate to the emergency in question. In other words, the response must be calibrated with a view toward returning to the legal-constitutional status quo ante. In 1861, between the attack on Fort Sumter in April and the return of Congress in July, Lincoln acted — to employ Schmitt’s terminology — as a “commissarial” (or limited) dictator. But Lincoln relied on emergency powers in order to safeguard the Republic; his actions always presupposed a return to constitutional normalcy; and so his “exceptional” conduct exemplified the responsible use of executive authority. Moreover, Schmitt’s attempt to define “the political” in terms of the “friend-foe” distinction, his notion of politics as war, is fundamentally at odds with central aspects of the Western political tradition, for which “justice” and “virtue,” rather than “enmity,” are the raison d’être of politics. In American terms, certainly, Schmitt’s “concept of the political” is really a concept of the anti-political, of the breakdown of politics, as in 1861. Our system of government was designed for conflict, which Madison regarded as a permanent feature of human affairs, but there is nothing Darwinian — or Schmittian — about it.

              Meanwhile the left Schmittians viewed the Bush administration’s proto-Schmittian apotheosis of executive authority as a welcome confirmation of their own longstanding antiliberal prejudices. Already during the 1990s, they regarded Schmitt’s arguments about the bankruptcy of political liberalism as received wisdom. Under the influence of French theory, the left Schmittians enthusiastically accepted Schmitt’s claim that liberal “norms” were little more than a swindle. They held that, insofar as norms were prescriptive — hence “normalizing” — they predetermined the parameters of socially permissible behavior. By pathologizing deviance and non-conformity, norms were an essential component of disciplinary society’s implacable social control. The illusion of “autonomy” was merely one of the ruses that “power-knowledge” employed to deceive us into thinking that we were free. This is what happened when Foucault was added to Schmitt. As Foucault wrote, “[the] will to knowledge reveals that all knowledge rests upon injustice, that there is no right … to truth or foundation for truth … The instinct for knowledge is malicious, something murderous, opposed to the happiness of mankind.” In the eyes of Schmitt and Foucault, the dialectic of enlightenment culminated not in emancipation, but in catastrophe.

    Giorgio Agamben’s State of Exception, which appeared in 2004, was conceived as a response to Abu Ghraib and Guantanamo, and represented the consummate synthesis of Schmitt and poststructuralism. By fusing these currents, it raised anti-Enlightenment cynicism to new heights. Agamben maintained that there was nothing “exceptional” about the “state of exception” that, following the September 11 attacks, was declared by the Bush administration. It merely exposed the hidden capacity for violence that lurked beneath the peaceable façade of liberal democratic “legalism.” For Agamben, the state of exception already was “the dominant paradigm of government in contemporary politics.” He praised Schmitt effusively as the theorist who, more than any other, had exposed the hidden link between the state of exception and bourgeois legal convention: “The specific contribution of Schmitt’s theory is precisely to have made such an articulation between state of exception and juridical order possible. It is a paradoxical articulation, for what must be inscribed within the law is something that is essentially exterior to it: nothing less than the suspension of the juridical order itself.” According to Agamben, however, the corrective to the state of exception is not to return to the rule of law, or to make law genuinely effective, since Agamben held, following Schmitt, that the state of exception is — in ways that are never clearly specified — inscribed in the rule of law itself. The vaporousness of all this was a formula for political impotence. The left Schmittians have little to offer apart from an abstract populism, an ill-defined theory of “direct action” and “agonistic struggle.” But will the people follow Agamben into the streets? The notion is a little comic — though the comedy disappears in the face of Agamben’s claim that there are no essential historical differences between Guantanamo Bay and Auschwitz. They both express the hidden telos of “modernity.”

    And now the cult of Carl Schmitt has been updated once again. Recently a new authoritarian ethos has emerged on the right, a political current proclaiming that liberal democracy has entered into a state of terminal crisis. The proponents of this ethos claim that the signs of liberalism’s morbidity are omnipresent and undeniable; only fools mired in anachronistic ways of thinking would deny their obviousness. The slogan that has been widely brandished as a panacea for liberalism-in-crisis — and that has been enthusiastically embraced by autocrats and their apologists — is “illiberal democracy.” Unsurprisingly, the political philosopher whose doctrines have been repeatedly invoked both to explicate the reasons for liberalism’s irreversible demise and to justify the transition to a new form of authoritarian rule that will save us from its insolvency is Carl Schmitt.

    Schmitt has been canonized by a new generation of “post-liberal” conservatives who have been heartened by the political “successes” of Donald Trump and the Hungarian strongman Viktor Orbán. Following Schmitt, they have concluded that constitutional guarantees of civic freedom and legal equality must be jettisoned, since, as enablers of an anarchic centrifugal individualism, they bear primary responsibility for Western civilization’s precipitous unraveling. Last summer Orbán received an enthusiastic reception at the CPAC convention in Texas, which prompted a reporter at the Washington Post to describe the “Orbánization” of American conservatism: “to right-wingers steeped in anti-liberal grievance, Hungary offers a glimpse of culture war victory and a template for action.”

    The uncritical reliance on Schmitt’s positions attests to a revolting lapse of historical memory. Equally troubling, when pressed to identify a political successor to liberalism, the post-liberal responses track Schmitt’s ill-advised endorsement of “leader-democracy.” Schmitt’s argument that liberalism is incoherent and self-negating, since as a form of political rule it is inherently averse to the “authority” and “order” without which political rule itself becomes meaningless, suffuses Patrick Deneen’s influential anti-liberal broadside, Why Liberalism Failed. As Deneen observes, parroting Schmitt, “democracy, in fact, cannot survive under liberalism.” The shameful lineage is clear. As Robert Kuttner noted appositely in the New York Review of Books, “Deneen’s thinking echoes an older line of reactionary argument on the folly and perversity of liberal democracy that extends back from twentieth-century anti-liberal intellectuals, like Leo Strauss and fascist theorists Carl Schmitt and Giovanni Gentile, to monarchic critics of liberalism, like Joseph de Maistre.” The fundamental flaw of Why Liberalism Failed is that its understanding of liberalism’s inadequacies derives from the worldview of a thinker who was rabidly opposed to liberalism and everything it stood for. Faulty premises yield faulty results. Schmitt’s representation of liberalism’s deficiencies contains a kernel of truth, though he was hardly the first or the last to worry about the labyrinthine tendencies of liberal governance; but his portrait of liberalism was a grotesque caricature — partial, self-serving, hysterical, and exaggerated, just like Deneen’s.

    Deneen is not alone in his Schmittian amen corner. In A World After Liberalism: Philosophers of the Radical Right, Matthew Rose, a contributor to First Things, sounds the death-knell not only of liberalism but also of traditional conservatism. According to Rose, in its place there will now arise “a new conservatism, unlike any in recent memory… Ideas once thought taboo are being reconsidered; authors once banished are being rehabilitated.” The inspiring doctrines of the “Radical Right,” whose ideas Rose seeks to retrofit for contemporary political use, emerged in Germany after World War I. “Known as the ‘Conservative Revolution’… its chief figures included Carl Schmitt, Ernst Jünger, Arthur Moeller van den Bruck, and Oswald Spengler.” Rose exalts these thinkers for their willingness to explore themes of “cultural difference, human inequality, religious authority, and racial biopolitics,” notwithstanding the fact that their approaches “were widely viewed as invitations to xenophobia and even violence.” That view was not only widely held, it was also true.

    In a similar vein, Adrian Vermeule, the Harvard legal scholar and Catholic “integralist,” has invoked Schmitt’s adage that “all political concepts are secularized theological concepts” to indict the theological ferocity with which liberalism unrelentingly (in his account) has advanced its agenda at the expense of traditional communities, belief-systems, and lifeworlds. As with Deneen, what Vermeule’s arguments lack in precision and subtlety, they compensate for in sheer force of rhetorical will. Like Schmitt, Vermeule prioritizes voluntas over ratio. Schmitt’s adage was part of his more general assault on the mentality of modernity. Never mind that, in contrast to theology, modernity relies for its intellectual methods on evidentiary criteria that are “public” and “generalizable.” Its proposals and claims are open to discussion and criticism. Unlike the precepts of “political Catholicism” with which Vermeule strongly identifies, they are, as a matter of principle, fallible and non-dogmatic. Does Vermeule leave his faith at the seminar door or his reason at the church door?

    There is no surer sign of intellectual and moral bankruptcy than an association with the thought of Carl Schmitt. The persistence of his cult into the present day is yet another of our time’s many unhappy omens. But as long as the hard work of a free and fair society feels too onerous for some of its intellectuals, the repulsive Schmitt will live, and live again, and be repudiated again. 

     

    Surrealism’s Children

         Back when I was an idealistic young soul, I enrolled in a PhD program in French and Comparative Literature, intent on making a career in academia. Those were the days when New Criticism and Semiotics held sway, and texts were to be read without interference from outside influences. The approach we were taught, boiled down, was that all a reader needed to know about a poem or a work of prose could be found on the page, without reference to historical context, authorial biography, or any other distractions. In class after class, we dissected poems by Ronsard and Rimbaud, the Symbolists and the Surrealists, peeling back layer upon layer of manifest and latent meaning. It was intoxicating stuff, but I couldn’t escape a nagging question: What was the point of it? Wasn’t it all a bit too removed from life? Wasn’t literature supposed to tell us about more than just its own internal machinery? Unable to resolve these questions, I handed in my Master’s thesis and said goodbye to all that.

         One effect of having left academia prematurely is that I spent the following decades still grappling with the appropriate balance between art and life, and the role that literature, literary studies, and the humanities in general have to play in our dealings with this fraught and confusing world — a world that, increasingly, seems resistant to the kinds of challenges and provocations that art and literature are best suited to pose. Is literature meant to reinforce our convictions, or to destabilize them? Should art be a safe space or a dangerous space, and what does that mean? What is the role of the off-putting, the upsetting, the offensive, and the shocking in our study and consumption of the humanities? Can art still be shocking in this day and age? And who, exactly, is being shocked?

         The answers used to be fairly straightforward, or so it seemed. The progressive avant-garde duly épaté’d the bourgeois, who duly responded with howls of outrage as their cherished shibboleths — God, king, country, the army, the Establishment, what have you — were dragged through the slime — often in language and aesthetic forms that were themselves a provocation. Provocation was even a kind of social role, an expected feature of the societal landscape. But things are no longer so simple.

         These days we find ourselves in a situation in which supposedly contradictory viewpoints circle each other, ouroboros-like — and become virtually impossible to distinguish. Conservatives vent their offense by banning an increasing number of books in schools and libraries, while college professors are actively discouraged from teaching material that might ruffle student sensibilities and provosts’ offices disinvite speakers deemed too hot to handle. Yet what better time than in college to have sensibilities ruffled? When will students ever have a more free and insulated space in which to rub shoulders with controversial ideas, and to develop the skills needed to confront those ideas in the world — that is, to view them with greater insight and deeper understanding, if only to then refute them? College is, or should be, an instruction in controversy and its skills. For this reason, the curricular exclusions on current-day campuses not only curtail what the educational experience has to offer; in the humanities especially, these guardrails undermine what is most valuable about the discipline: its challenge to comfort and certainty, its impetus to make us think harder and more independently.

         We are all familiar with the Golden Age of Bourgeois Indignation, with incidents ranging from theatergoers howling at the premiere of Victor Hugo’s Hernani in 1830, to attendees at the Salon des Refusés in 1863 trying to slash Manet’s Déjeuner sur l’herbe, to audiences throwing tomatoes and raw steaks at Dada performers in the 1920s. Flash forward a century and the dynamic has reversed: as Laura Kipnis has observed in these pages, it’s not the rubes and the philistines who get rattled now, but rather the progressives and the illuminati who find it hard to stomach the provocations. “At some point,” she writes, “offendability moved its offices to the hip side of town.” Nor, even, is outrage the exclusive privilege of the avant-garde: in the current climate, mainstream art and literature can just as likely get dinged as the cutting-edge stuff, in a free-for-all of offense.

         The question is, what do we sacrifice by avoiding such offense? It’s not always pleasant to be rattled out of one’s complacencies — the entire history of the avant-garde banked on it — but in losing the displeasure of injury, are we also losing the pleasures of discovery, and of self-discovery, that can accompany it? The price of comfort is often stagnation.

         And there’s a more immediate concern as well: in this time of anonymous reputation-bashing and swift retaliation against unwelcome opinion — the so-called “cancel culture” — the danger is not so much that people’s ratings will suffer and their speaking engagements will be revoked, but that they will stop saying anything at all for fear of being boycotted or “shamed.” We have too many crises to confront, none of which can be meaningfully addressed in 280-character soundbites, for those who can see beyond partisanship to refrain from making valid contributions. Trying to avoid offense in every instance is a fool’s errand — you can’t please all of the people all of the time — and holding back consequential and constructive insights, even if unpopular, impedes the free exchange of ideas and accomplishes nothing.

         From its tumultuous start, the Surrealist movement was out to shock. The flurry of activity that accompanied its debut in late 1924 and early 1925, including the broadside A Corpse (which spat on the much beloved and recently deceased novelist Anatole France, an act of cultural blasphemy), the aggressive prose and propositions of André Breton’s Manifesto of Surrealism (“Beloved imagination, what I most like in you is your unsparing quality”), and the common cause that the group tried to make with the reviled Communist Party, comprised not only steps toward defining a philosophical program, but also ways of slapping bourgeois proprieties repeatedly across the face.

         This was true not only of their actions but also of their proclaimed choice of role models, many of whom would not have passed a modern-day ethics test. The most glaring case in point is the Marquis de Sade, poster boy for aberrant sexuality and one of Surrealism’s lauded heroes. Many more examples can be found in Breton’s Anthology of Black Humor, a veritable rogue’s gallery of dubious precursors. Sade’s novels are jam-packed with physical and mental abuse, coprophagia, cannibalism, torture, rape, and murder perpetrated indiscriminately against women and men of all ages: he did not lend his name to a major psychopathology lightly. Nor were these acts merely theoretical, for the man practiced what he predicated — not to the extent portrayed in his books, not by a long shot, but enough to keep him behind bars for nearly half his seventy-four years, first under the monarchy, then under the French Revolution, then again under Napoleon. And we must be clear: Sade was no innocent victim. He used his wealth and his privilege to indulge in prodigious sexual predation, like an eighteenth-century Jeffrey Epstein. As such, his life and his books have been a thorn in the side of progressive-minded thinkers for the past two centuries. How can you promote freedom of expression and still defend that?

         And yet his work has been defended, and persuasively so, by such formidable intellectuals as Angela Carter, Roland Barthes, Maurice Blanchot (whose monograph Lautréamont and Sade owes much to Surrealist thinking), Susan Sontag, and Michel Foucault, to name just a few. Why would they do this? One answer comes from Simone de Beauvoir, in her landmark analysis of Sade’s writings from 1953, titled, appropriately, “Must We Burn Sade?” Sade, she declared, “drained to the dregs the moment of selfishness, injustice, and misery, and he insisted upon its truth. The supreme value of his testimony is the fact that it disturbs us. It forces us to reexamine thoroughly the basic problem which haunts our age in different forms: the true relation between man and man.” In a rare moment of convergence between De Beauvoir and the Surrealists, the poet Paul Eluard anticipated this view in 1937 in his book L’Evidence poétique, writing: “Sade wanted to restore to civilized man the power of his primitive instincts . . . He believed that out of this, and this alone, true equality would come. Since virtue is its own reward, he labored, in the name of everything that suffers, to drag it down and humiliate it… with no illusions and no lies, so that those it normally condemns might build here on earth a world on the immense scale of mankind.”

         The arguments for and against Sade are many and complex, and they are also largely familiar. On one side, De Beauvoir defends him as a cold-eyed realist. On the other, writers such as Andrea Dworkin warn that his books, like any form of pornography, could incite acts of violence, especially against women. Both arguments have validity, and it is not a cop-out to admit that there is no single answer. If anything, the true energy of art and literature might reside in the questions they pose rather than the certainties they offer. What I will say is that, more than the events portrayed in his books, which are so over-the-top that they often become plainly absurd, perhaps the most shocking thing about reading Sade is how un-titillating so much of it is. As you plow through description upon minute description of various improbable scenarios, their inflexible regulation ultimately undermines any eroticism they might have been meant to contain. Ironically, the parts of Sade’s work for which he is the most infamous are actually the most tedious.

         The intriguing part, to pick up from De Beauvoir, is Sade’s lucidity. Interspersed with the horrific actions are many pages of philosophy — I would go so far as to call it moral philosophy — that take an unsparing and admirably honest view of human interactions, and strip away the pieties with which we have comforted ourselves for two millennia. Here, for instance, is one of Sade’s fictional stand-ins instructing an eager young pupil about vice and virtue:

    Nature has endowed each of us with a capacity for kindly feelings: let us not squander them on others . . . Let us feel when it is to [our] advantage; and when it is not, let us be absolutely unbending. From this exact economy of feeling, from this judicious use of sensibility, there results a kind of cruelty which is sometimes not without its delights. One cannot always do evil; deprived of the pleasure it affords, we can at least find the sensation’s equivalent in the minor but piquant wickedness of never doing good.

    Nietzsche is not too far away.

         It is worth recalling that what landed Sade in the Bastille was not so much his acts of cruelty as the fact that he had sex with both men and women, and perhaps even more so that he was accused (though never convicted) of blasphemy against the highly influential and politically connected Church, which did not countenance challenges to its celestial worldview, or its sovereignty. It is also worth remembering that his books (such as the above-quoted Philosophy in the Bedroom) were written during the long years he spent in various prisons and asylums, and that in many ways their vindictive savagery — their sadism, if you will — can be read as a howl of rage against captivity, no matter which political regime was in charge. It is again De Beauvoir who notes that “Sade does not give us the work of a free man. He makes us participate in his efforts at liberation. But it is precisely for this reason that he holds our attention.” The value of a figure such as Sade, in other words, lies not in the acts that he describes but in the ethical challenges that he poses. It is one thing to create from a position of moral good, as many great writers and artists have done. But a steady diet of such work gives you only half the story, and arguably not the more necessary half.

         So the question remains: Is Sade worth reading on these grounds, or should he indeed be burned? Is the offense that his writings constitute a reason for locking his work away, as he himself was locked away — and as his manuscripts were locked away for decades in the so-called “Hell” section of the French National Library? Or do his “efforts at liberation,” or his decidedly unsentimental views on society, offer reasons to look beyond the parts we find distasteful? Is the threat posed by Sade that his books beget actual horrors, as Dworkin argued? Or that, long before Freud came along, they upended our complacent belief in the basic altruism of people and forced us to observe human nature at its ugliest and most unnerving?

         The questions are all the more relevant in that, unlike so many of his contemporaries, Sade’s impact did not fade with time. In 1959, the Canadian conceptual artist Jean Benoît, under the auspices of the Surrealist group, performed a piece called The Execution of the Testament of the Marquis de Sade. Benoît was well aware that his invitation-only audience would not be easy to impress: gathered that evening in the commodious Paris apartment of the poet Joyce Mansour were some one hundred “writers, poets, painters, filmmakers, critics . . . women in evening gowns . . . their nails painted blue or green… As well as a woman in black velvet whose nipple fit through a small hole in her dress.” At a prearranged signal, the attendees stood in a semicircle facing a stage area, their ears assaulted by the prerecorded sounds of an erupting volcano and readings from Sade’s works. Benoît then appeared, dressed in an ornate black costume with sharp protrusions over his chest and legs, a grotesquely extended erection, and a cape from which blood seemed to be dripping. Piece by piece his costume was slowly removed by his wife, the artist Mimi Parent, revealing his nude body to be painted all in black, his heart covered by a red star (Sade’s emblem). With a shrill cry, Benoît grabbed a red-hot iron placed nearby and branded the word “Sade” into his flesh, squarely over his heart. He then held out the still-smoking iron to his audience and demanded, “Who’s next?” The Chilean painter Roberto Matta was so carried away by the performance that he spontaneously rushed up, tore open his shirt, and seared his own left breast.

         Benoît’s performance marks a relatively anomalous point on the timeline of outrage that I alluded to above: the members of the audience were clearly rattled — in Matta’s case, to the point of voluntarily broiling his own flesh — but they were not offended. They had come in search of provocation and they were not disappointed. How that performance might fare sixty years on, in our own over-cautious day, is another story altogether.

         Sade, as I have noted, was one of Surrealism’s foundational pillars. Another was his spiritual great-grandson, the Montevideo-born poet the Comte de Lautréamont. Lautréamont, who died in 1870 at the age of twenty-four under mysterious circumstances, is mainly remembered as the author of the prose poem The Cantos of Maldoror. The narrative, such as it is, follows the picaresque and often hallucinatory adventures of its eponymous anti-hero, who styles himself the personification of evil and who is engaged throughout most of the book in a battle to the death against God, or as he calls him, the Creator (when he isn’t calling him worse). Maldoror is full of wild invention and grimly exhilarating humor over an underlayer of deep torment; the list of writers, artists, musicians, and filmmakers it influenced stretches from Dalí and Godard to Jim Morrison, John Ashbery, and the Beats. It is also full of what we would now consider child abuse, misogyny, sadism (that word again), animal cruelty, and various other atrocities. No doubt it would have been banned when it was first printed in 1869, had anyone actually noticed its existence at the time.

         But then, shortly before his death, the author pulled an about-face. Immediately after celebrating evil in Maldoror, Lautréamont — this time under his birth name, Isidore Ducasse — published a slim pamphlet called Poésies, consisting of brief aphorisms in prose. On closer examination, it turned out that many of these aphorisms were actually canonical maxims by moralists such as Pascal and La Rochefoucauld, familiar to any French schoolchild, but turned on their heads to celebrate positivity and humanism. So, for instance, where Pascal had written, “Man is only a reed, the weakest in nature . . . a vapor, a drop of water is enough to kill him,” Ducasse countered with, “Man is an oak. Nature contains nothing sturdier,” and so on. As he announced in a programmatic headnote, “I replace melancholy with courage, doubt with certainty, despair with hope, wickedness with good… skepticism with faith.”

         Many, Ducasse included, have described Poésies as a “correction” of Maldoror, but we might also see it as a counterpoint. Ducasse began with passion and rage, the sparks needed to light the fire and set revolutions in motion, then continued with the more sober and optimistic reflection needed to bring them to fruition. Faced with the cynicism and the defeatism of his own frenzied fantasies, as well as with the suppression of human grandeur in moralists such as Pascal, Ducasse replaced “despair with hope,” offering a lesson of agency and uplift from within the very corpus of repression. Like Rimbaud’s renunciation of poetry and disappearance into the African desert, which sealed his literary reputation, Ducasse’s repudiation of his own jet-black apoplexies highlighted both the torment and its negation in an endless dialectical spin-cycle, an enigma forever to be read and pondered, never to be solved.

         In embracing transgressive figures such as Sade and Lautréamont, the Surrealists were not merely straining for provocative effect. More substantially, the movement promoted itself as one of the great currents of liberation in the twentieth century, the resolute enemy of stifling social and moral conventions, and this extended far beyond literature and art. Its calls, in the 1920s and 1930s, for rethinking the status of women and people of different races, for freedom of imagination and sexuality, and for political revolution directly influenced the protests of May ’68 and are not unlike calls that have sounded more and more urgently in recent years. It is also at the origin of many things we now take for granted, from the imagery that we respond to, to the humor that we appreciate, to the sense of strangeness that we unthinkingly call “surreal”; and it laid the groundwork for a larger degree of candor and personal engagement in artistic expression, to which today’s productions owe a great deal.

         In 1922, André Breton, Surrealism’s founder and primary theorist, declared: “Poetry, which is all I have ever appreciated in literature, emanates more from the lives of human beings — whether writers or not — than from what they have written or from what we might imagine they could write.” In other words, poetry was less about words on a page than about a living attitude, or, as Breton said, a “specific solution to the problem of our lives.” It was an early version of what in the 1960s would be expressed as “the personal is political,” and it implicated the poet and the artist in the work they created. It was not enough for your art to say the right things; if you were going to contribute a “specific solution,” you also had to walk the walk.

         What makes Surrealism such a good case study for our present context is precisely the disparity between the ideals that it promoted and the results that it often delivered, or the behavior that its members displayed. On the one hand, unlike their more sulfuric role models, the Surrealists might vehemently advocate for a positive and synthetic vision: the “point of the mind,” as Breton famously put it, “at which life and death, the real and the imagined, past and future, the communicable and the incommunicable, high and low, cease to be perceived as contradictions.” But they could also prove just as corrosive as Sade or Lautréamont, and were never shy about expressing their dislikes.

         Their early version of “cancel culture” could take various forms, from the twin lists headlined “Read / Don’t Read” — the “Don’t Reads” being the longer of the two and ending in “etc., etc., etc.” — to the so-called trial of Maurice Barrès. A renowned novelist, Barrès had been admired by Breton, Louis Aragon, and other future Surrealists in their formative years as a paragon of personal freedom, the “prince of youth,” but with World War I he had hardened into a super-patriot and arch-conservative. Feeling betrayed, Breton and his friends staged a mock trial of Barrès in May 1921 before a packed house at the Hôtel des Sociétés Savantes, an ornate late-nineteenth-century edifice on the aptly named Rue Danton in Paris’ sixth arrondissement. The charge: “conspiracy against the security of the Mind.” When the “court” ultimately handed down a sentence of twenty years’ hard labor for Barrès, Breton, who had pushed for the death penalty, was disappointed. Barrès himself, of course, was nowhere near the proceedings and probably couldn’t have cared less — this was, after all, before social media existed — but Breton had prosecuted his case with such ferocity that some wondered what might have happened had Barrès been present. (Fifteen years later, they would get their answer from Moscow.)

         Much less symbolic were the Surrealists’ true cancelations: the periodic exclusions of writers from within their own ranks. In 1926, not long after the movement’s founding, several of the original personnel were drummed out of the group for not embracing its turn toward Communist politics. This was followed by other purges over the succeeding years, resulting in the loss of many prominent members, some of whom were pushed to the brink of suicide, or beyond it, for failings that ranged from practicing hack journalism to political waffling to not condemning vehemently enough those whom the group had condemned. While Surrealism promised many freedoms, the one freedom it apparently could not abide, and toward which Breton could react with remarkable savagery, was the freedom to dissent. In the name of emancipation it practiced excommunication.

         These excommunications were in part a referendum on loyalty — are you with the program or not? — and sometimes simply an outlet for personal spite, but more fundamentally they were an interrogation of identity: you are defined by the company you keep, by both your actions and theirs, and tolerating intolerable elements reflects back on you. Returning to Breton’s statement about “poetry emanating from life,” what happens when the life no longer lives up to the demands of the poetry? And more to the point, how successfully did Surrealism as a collective live up to its own demands? Let us consider three aspects of Surrealism’s engagements: with race, with sexuality, and with gender.

         In matters of race, the Surrealists publicly aligned themselves with people of color, denouncing French colonialism and the inequities it fostered at home and abroad. In 1941, under the collaborationist Vichy government of Marshal Pétain, Breton invited the Afro-Cuban artist Wifredo Lam to illustrate one of his books, telling a journalist from the conservative Le Figaro that the choice of Lam was meant “to make clear just how sympathetic I am to Marshal Pétain’s racist concepts.” (To Breton’s disgust, the newspaper omitted that particular quote from the published interview.) Not long after, in Martinique, he met and collaborated with the poets Aimé and Suzanne Césaire, future founders of the Négritude movement. And in 1946, on a visit to Haiti, Breton told a student audience: “Surrealism is allied with people of color … because it has always been on their side against every form of white imperialism and banditry.” By many accounts, the popular uprising that deposed the repressive Haitian president Elie Lescot several weeks later derived some of its inspiration from Breton’s statements.

         All well and good. But figures such as Lam and the Césaires — as well as Hector Hyppolite, Hervé Télémaque, René Ménil, Léopold Senghor, Jules Monnerot, Pierre Yoyotte, and Ted Joans — remain an oft-neglected minority among the many who passed through Surrealism, and only recently have such artists and writers begun receiving serious attention from historians of the movement. The most meaningful blend of Surrealism with a specifically black vision, Afro-surrealism, did not originate within the European groups, but had to develop on its own, independent of them, and with distinct differences.

         And this does not take into account other non-European Surrealists, such as César Moro, Fernando Lemos, María Izquierdo, and Frances del Valle; or Mahmoud Sa’id, Fouad Kamel, and Georges Henein; or Kansuke Yamamoto, Toshiko Okanoue, and Shūzō Takiguchi. Most of these figures, while retaining a greater or lesser identification with the movement, ultimately had to craft a version of Surrealism that spoke to their own cultural realities; whereas the Paris, Brussels, and London groups, even while promoting a broad internationalism, still kept their focus on a predominantly white European set of references. There is more than a trace of exasperation in this remark by the Japanese Surrealist Takenaka Kyūshichi, from 1930, only six years after the movement was launched: “True Surrealists take a step beyond Breton. They are not confined by the Surrealism of Breton’s ‘Manifesto.’”

         When it comes to sex, while Surrealism enjoys a libertine reputation in the popular imagination, all licentiousness all the time, the reality was that these bourgeois young men were rather prudish. Yes, they believed in “free union” and “mad love” and abhorred marriage as an institution (even though many of them were married); and yes, they promoted the idea that a grand passion could excuse anything — though, not surprisingly, that particular freedom generally ran in only one direction. The inquiries on sex that the Surrealists conducted in the late 1920s and early 1930s were remarkably frank for the time, but also rife with the prejudices of the era, and quite a few strictures were voiced — particularly by Breton, who pontifically denounced sex workers, multiple partners, promiscuity, women’s orgasms, and male homosexuality. Needless to say, most of these sessions were among men only; in one of the few that a woman did attend, she listened for a while, then remarked, “You boys need to learn a few things.”

         Which brings us to the most blatant and the most complex of the Surrealist double-standards, the status of women in this allegedly emancipatory and revolutionary movement. Where to begin? We could start with René Magritte’s well-known canvas from 1929, I Do Not See the [Woman] Hidden in the Forest, which depicts a fairly classic painted nude framed by photographic portraits of seventeen Surrealist men: the fact that the men have their eyes closed does not make their gaze any less male. Or we could start with the many Surrealist visual works depicting the female form dismembered, disfigured, or otherwise deformed. Or with the many written works in which women appear as muses, lovers, inspirations, torturers, enigmas, enlighteners, or sorceresses, but almost never as autonomous individuals.

         Conversely, we could cite this passage by Breton, written as World War II was drawing to a close, in which he calls for a matriarchal social order: “May we be ruled by the idea of the salvation of the earth by woman, of the transcendent vocation of woman… The time has come to value the ideas of women at the expense of those of men, whose bankruptcy has become tumultuously evident today.” But since, good intentions aside, this still sounds a lot like mansplaining, let us bring in some other voices: the British Surrealist painter Ithell Colquhoun, for example, who reflected that despite pronouncements such as the one by Breton quoted above, “most of [his] followers were no less chauvinist for all that. Among them, women as human beings tended to be ‘permitted not required.’” The painter Leonora Carrington, when I once asked her for her opinion of Surrealism, called it simply “another bullshit role for women.”

         We can easily understand the resentment and the rage. On the one hand, Surrealism claimed to be one of the most woman-focused and erotically free movements in the history of literature and art. Alongside the paintings and photographs that mangled women were an equal number that exalted them, or at least the idea of them, as superior beings attuned to natural and supernatural forces beyond the reach of men. Within the movement, women were honored as muses and creative inspirations, and they were promised an alternative to the stifling roles that mainstream society expected them to fill.

         But all too often these grand promises simply fell flat, and the women who came to Surrealism as artists and writers with their own talents and ambitions — Leonora Carrington, Dorothea Tanning, Remedios Varo, Joyce Mansour, Lee Miller, Leonor Fini, Meret Oppenheim, Eileen Agar, Jacqueline Lamba, Gisèle Prassinos, Kay Sage, Nelly Kaplan, and others — or who claimed for themselves the same freedoms in lifestyle and beliefs that the men did were disappointed to find themselves facing obstacles from their own peers that hardly differed from those of society at large. Colquhoun, for instance, was expelled from the British Surrealist group because her interest in witchcraft was deemed inappropriate by her male colleagues, who otherwise championed all kinds of transgressive anti-rationalisms in their philosophy and their work.

         Given all this, it would be too easy to brush away Surrealism as just another narcissistic patriarchal exercise that failed to live up to its big claims. The more seductive the promise, the more painful the letdown. But there might be a better lesson to be drawn from the experience of women in the movement. Constituting a “minority within a minority,” as Eileen Agar put it, they forged their own freedom and tilled their own ground, mapping out — as the critic Sacha Llewellyn writes — “their own autonomous identities… transforming and appropriating the iconography of women that male Surrealists ascribed to them… to produce radical works centered on the female condition.” The point here is not to discard Surrealism, but instead to identify the moments it offers in which prohibition becomes opportunity. Rather than repudiating Surrealism for its failures, the women and the artists of color who gravitated toward it took its promise of liberation and made it their own. The best criticism is neither rejection nor apology, but constant reevaluation and regeneration.

         What does it mean for the humanities now if art, writings, and philosophies deemed unworthy simply get thrown onto the trash heap of history because they fail to conform to prevailing notions of truth, goodness, or beauty? Where will fruitful challenges originate, if not in studying and debating works that offend and shock, or that fail to keep their promises? Must we “burn” everyone who, in their poetry or their person, does not live up to the ideals that we wish them to embody?

         Over the years, as I continued to ponder my philosophical quandary from graduate school, I came to realize that the point of reading literature so closely was to learn how to read the world, that is, the fine print of the world, the signs and underlying messages encoded in people, things, events, exchanges, and surroundings. Paradoxically, given the emphasis on hermetic concentration, what this training really provided was a wider and deeper sensitivity to the context surrounding these people and events. This involved opening my eyes and my mind to ways of thinking and expressing that I had not yet encountered — some of which troubled or upset me, but all of which were crucial steps in my learning how to understand my environments with discernment and with broadmindedness.

         Writers such as Sade, Lautréamont, and the Surrealists are problematic in that they challenge the values we like to think we celebrate, or confront us with frustrations and disappointments; but as such they also open doors toward a meaningful response. They were desperate people living and creating in desperate and pivotal times, whether the French Revolution, the Franco-Prussian War, or the aftermath of World War I. As we live through our own desperate and pivotal era, we might do well to ask what work such as theirs, however removed or outdated it might seem at first, can tell us about our own volatile experience.

         One of the most pernicious aspects of our fractious political and cultural landscape — alongside the decimation of our personal liberties, the erosion of civic discourse, governmental paralysis in the face of rising gun violence, and so much else — is the intolerance that it has fostered: not only the caricaturish intolerance for the values of diversity and inclusion that the liberal arts are meant to promote, but a resistance, even a fear — all along the political spectrum — toward engaging with viewpoints that we find alien and distressing, precisely because they are alien and distressing. As if we had somehow lost our ability to speak to things that we abhor in other than extreme ways. We demand, we shout, we insist, which is sometimes the necessary response. But what we must not lose is the ability to talk, and more than that, to listen, to weigh, to ponder, to empathize.

         If we are to be full-fledged human beings, we must cultivate the ability to enter into other points of view, including those that antagonize our deepest beliefs. Even as we disagree with or fight them, we must recognize them as human expressions. Saving ourselves from a reality soiled by ever more entrenched parochialism and flattened by defeatism and despair will depend on our capacity to evaluate lucidly, to look beyond buzzwords and bubbles, to see past labels that are often just a surrogate for thinking. There are no viable surrogates for thinking. It will depend on our ability and our willingness to contemplate the lessons offered by the humanities with minds wide open, and to interrogate our own and others’ beliefs with honesty, compassion, and courage. Knowing how to read the world with an open mind is not just a life hack; it is also a tool for survival.

         The Surrealists, for all their obsession with grasping the unconscious, were not particularly known for empathy. While some of their pronouncements, such as Breton’s remark about the “point of the mind,” might be read as aspirational, they generally preferred to operate in the vituperative register — which some, like Robert Desnos, elevated to a fine art. And yet they knew how to listen, and to listen effectively. Before they went on the attack, they did the necessary work. In 1949, for example, Breton unmasked a forged poem by Rimbaud strictly on the basis of intuitive affinity. His 68-page pamphlet, Caught Red-Handed, takes aim at the literary critics who fell for the hoax (basically all of them), dismantling their arguments point by point in a humiliating show of superior understanding. Not very friendly, perhaps, but in its wounding way more respectful: Breton actively listened to what those critics had to say before offering his withering refutation. There is a lesson in that. We can, if we must, endure a culture of intellectual incivility; sometimes it might even be beneficial. What we must not abide is a culture of intellectual abdication, or intellectual cowardice. 

     

    Memoirs of a White Savior

    Last year, a student came to my office hours to discuss her post-graduation plans. She said she wanted to travel, teach, and write.

     “How about joining the Peace Corps?” I suggested.

    She grimaced. “The Peace Corps is problematic,” she said. 

    I replied the way I always do when a student uses that all-purpose put-down. “What’s the problem?” I asked.

     “I don’t want to be a white savior,” she explained. “That’s pretty much the worst thing you can be.”

    Indeed it is. The term “white savior” became commonplace in 2012, when the Nigerian-American writer and photographer Teju Cole issued a series of tweets — later expanded into an article in The Atlantic — denouncing American do-gooder campaigns overseas, especially in Africa. His immediate target was the “KONY 2012” video of that year, a slickly produced film — by a white moviemaker — demanding the arrest of Ugandan warlord Joseph Kony. But Cole’s larger goal was to indict the entire “White-Savior Industrial Complex,” as he called it, which allowed Westerners to imagine themselves as heroic protectors of defenseless Africans. Conveniently, Cole added, it also let them ignore the deep structural and historical inequities that had enriched the West at the expense of everybody else. “The White-Savior Industrial Complex is not about justice,” Cole wrote. “It is about having a big emotional experience that validates privilege.” Instead of assuming that they know what is best, he urged, Americans should ask other people what they want. And instead of engaging in feel-good volunteer projects that do not do any actual good, we should challenge “a system built on pillage” and “the money-driven villainy at the heart of American foreign policy.”

             The Peace Corps is a volunteer agency as well as an agent of foreign policy. So it has also become a frequent punching bag on several popular Instagram accounts that have echoed — and amplified — Cole’s critique. No White Saviors (906,000 followers) denounces the Peace Corps as “imperialism in action”; at the parody account Barbie Savior (154,000 followers), you can thrill to the pseudo-adventures of a Peace-Corps-like doll who takes selfies with orphans, squats over a pit latrine, and invokes famous humanitarians. (“If you put an inspirational quote under your selfie, no one can see your narcissism — M. Gandhi.”) Never mind that Cole’s original posts mocked digital activism such as the KONY 2012 video, which featured “fresh-faced Americans using the power of YouTube, Facebook, and pure enthusiasm to change the world,” as he observed. In the Age of the iPhone, apparently, the only answer to a misguided social-media campaign is another social-media campaign.

             And now the campaign has spread into the Peace Corps itself, as my student noted. She alerted me to Decolonizing Peace Corps (9,300 followers), which was started by three returned volunteers from Mozambique after the agency evacuated them — and the other 7,300 volunteers around the world — amid the COVID pandemic in March 2020. Later that spring, following the police murder of George Floyd, the Mozambique trio circulated a petition urging the Peace Corps to reckon with its allegedly racist and colonialist roots. They sent it to No White Saviors, who told them a petition “wasn’t going to be enough”; what the volunteers needed was, yes, their own Instagram account. Decolonizing Peace Corps went live shortly after. Inspired by campaigns to abolish the police, it demanded the abolition of the Peace Corps. “When you look at the Peace Corps and you look at the police and you see the origins, you ask yourself, can this really be reformed?” one of the account’s founders asked. “How can you reform a system that was founded on neocolonialism and imperialism by a country built on genocide and slavery?” The question answers itself.

             Meanwhile, as the pandemic continued to surge, No White Saviors stepped up its own attacks on the Peace Corps. Now that all the volunteers had come home, it wrote, the agency should permanently close up shop. “No more pretending inexperienced young people are actually useful in countries and cultures they are alien to,” No White Saviors wrote in 2021. “Instead you could pay skilled local volunteers to work more effectively. No more spending money on flights or evacuations, no need to teach language or culture.” Indeed, a volunteer back from Nepal added, the worldwide evacuation was itself a “gross display of resource privileges.” She still couldn’t figure out why the Nepalese people in her community had wanted her there, “other than maybe for ‘cultural exchange.’” But, as her air-quotes indicated, that “is not a good enough reason to invest so many resources into mostly fresh out of college, inexperienced Americans.”

             I served in the Peace Corps in Nepal, fresh out of college in 1983. My father was a Peace Corps director in India and Iran in the 1960s, when I was a child. The story of the Peace Corps is, in many ways, the story of my life. Now my student wanted to know: was it worth it? And for whom? In reply, I related an experience from my years in Nepal. It’s all about who gets saved, and from what, and why.

    I was teaching one day when a kid bounded into my classroom, breathless from running. “John-Sir,” he panted, “your friend is in the valley!”

    “Your friend” meant another white guy, a very rare sight in that part of Nepal. I taught at the top of a hill — we would call it a mountain, but in Nepal it was a hill — about two hundred and fifty miles west of Kathmandu. To get there, you took an overnight bus across the plains that bordered India and then walked north for three days. When I first arrived, some of the children thought I was a ghost. They had never seen someone so pale.

    I peered down into the valley, shading my eyes against the sun. It was a picture-perfect, blue-sky afternoon in the Himalayas. And sure enough, there he was: another white guy. Everyone wanted to know what he was doing there. So we cancelled the rest of the school day, and all of us — teachers, students, and curious hangers-on — started walking down the hill.

             As we neared the valley, a villager approached me excitedly. He was holding a crudely printed pamphlet, which flapped in the mountain breeze. “John-Sir, your friend sold me this book!” he exclaimed. “Only five rupees!” Five rupees was what a man earned for chipping away all day at the tractor road, until it got washed away by the monsoon and you started all over again the next year. A woman got three rupees, and a child got one. It was a day’s wage. I took one look at the pamphlet and right away I knew what it was. There was a figure that looked like Gregg Allman — long straight hair, close-cropped beard — nailed to a cross. And he was white, of course.

             We found “my friend” standing on the top of a rock at the bottom of the hill, selling dozens of pamphlets and collecting a large stack of cash. I walked straight up to him, ready for a fight. “What are you doing?” I demanded.

             “I’m saving these people for the Lord,” he said.

             Saving! Of course he was. I told him that they already had a Lord — about twenty thousand Lords, in fact. He said that Hindus were thieves and murderers. Recognizing his accent as German, I asked him why his own church had rolled over and played dead for Hitler. He replied that the Holocaust was a tragedy, but mostly because of the gypsies who perished.

             I told him to be fruitful and multiply, but not in those words. 

             He told me to fuck off, too. (So much for turning the other cheek, which was a pretty big deal to the white guy on the cross.) I told him that it was illegal to proselytize in Nepal, and that I would call the cop if he continued to sell the pamphlets. This was a total bluff, because “the cop” was a day’s walk away and — based on my encounters with him — most likely drunk. But the missionary didn’t know any of that, of course. So he hoisted his backpack, told me to fuck off yet again, and started climbing into the hills.

             By this time, a great sea of humanity had gathered. Nobody knew what the missionary and I were saying, but they could tell it wasn’t good. “John-Sir, you were angry with your friend,” someone said, as the missionary walked away. “Why don’t you like your friend?”

             “He’s a very bad man,” I said. “He doesn’t like your religion.”

             Then I heard one guy say, “Hey, John-Sir’s friend said if I believe in his religion, I’ll go to heaven and won’t be reborn over and over again.” And someone else said, “Hey, can I buy your book? I’ll give you six rupees for it.” “Run after John-Sir’s friend,” another guy said. “Maybe he has some books left over.”

             I cursed the missionary again, and then I cursed myself. Many years later I figured out why: we were both white saviors, in ways that still mortify me today. His saviorism was more direct and straightforward: Hindus were thieves and murderers, so he was saving them for his Lord. My own brand of saviorism was dressed up in the liberal multicultural dogmas of the era: I had to protect my villagers’ fragile and endangered belief system from an evil Western interloper. But I knew what was best for them, every bit as much as the missionary did.

             When you get on your high horse, you can disregard everything below you. There have been Christians on the sub-continent for nearly two millennia. But I didn’t know — or care — about that. Nor did I care what the villagers thought about what the missionary was saying (and selling). I knew what was “indigenous” to their “culture,” or so I imagined. And Christianity wasn’t.

             Most of all, I wanted to defend their “authenticity” against inauthentic outsiders. That echoes the type of white savior that you can find on Instagram today, condemning white saviorism. The classic version of white saviorism assumes that people in other parts of the world would be just like us, if they only had a better upbringing. The Instagram critique of white saviorism reverses that formula, insisting that they do not — or must not — embrace or imitate anything we do and say. That prescription is saviorist in its own right, wrongly and patronizingly ignoring the same local autonomy and agency that it claims to uphold. No White Saviors tells Western do-gooders to take account of what other people want, but it dismisses those wants if they do not correspond to its idea of the good.

             Eventually, the Peace Corps would save me from that kind of white saviorism, too – from the idea that people who seemed so different from me should forever stay the same. But to see how, as I told my student, we have to go back to the beginning.

    My father met John F. Kennedy by happenstance in August 1956, on a beach on the French Riviera. It was the day after the Democratic convention, when Adlai Stevenson — nominated for the second time for president — let the delegates select his running mate. Kennedy was hoping to join the ticket, but he was edged out by the Tennessee anti-mob crusader Estes Kefauver. Kennedy flew out that night to France, where my dad encountered him the next morning. When he heard that my father was at Yale Law School, Kennedy urged him to come work for the federal government. Most of the other students would head off to white-shoe law firms on Wall Street and elsewhere, Kennedy said. But my dad should go to Washington, JFK urged, because that’s where the action was going to be.

             I have told that story to my students many times, because it highlights the cavernous gap between Kennedy’s time and our own. Today, across the political spectrum, “Washington” is a symbol of dysfunction and decay. It is where action — and idealism — go to die. But not then. My father did go to Washington, where he held several government jobs. He also volunteered for Kennedy’s presidential campaign in 1960. At a meet-and-greet, he waited on a long line to shake the candidate’s hand. “Paul Zimmerman!” Kennedy exclaimed, to my dad’s shock and delight. “I met you on a beach in France, right after the 1956 convention.”

             Kennedy proposed a new volunteer corps during a famously extemporized 2 a.m. campaign speech in Ann Arbor, Michigan in October 1960. Between that night and his inauguration, over twenty-five thousand Americans sent letters to Washington asking how they could join the agency. The Peace Corps sent its first volunteers out in August 1961, to Ghana. By the end of 1963, seven thousand volunteers — known around the world as “Kennedy’s Kids” — were serving in forty-four countries. The agency received its largest weekly number of applications in the seven days after JFK’s assassination.

             My father signed on a few years after that. He made one phone call to the Peace Corps, which somehow patched him through to Bill Moyers. A fellow Texan and top aide to Lyndon Johnson, Moyers had been an assistant director of the Peace Corps under Sargent Shriver, the president’s brother-in-law. My dad recalled that Moyers asked him a single question: where he went to law school. In the era of the best and the brightest, that was all you needed to know. Moyers told my father to walk over to the Peace Corps office, where he was offered a job on the spot. His international “experience” was limited to two summer vacation jaunts in Europe, including the trip where he met Kennedy. But somehow, attending Yale Law School — and, I would imagine, working on a Kennedy campaign — qualified him to be the director of the Peace Corps in South India. He was thirty-two years old.

             So off we went to Bangalore. My mother took my brother (almost seven) and me (four and a half) first, stopping along the way in Israel to visit relatives. My dad followed with my ten-month-old sister, whom he fed with a bottle on the long Pan Am flight. We camped at a hotel for a few weeks until my parents found a house on a lovely compound adjacent to Cubbon Park, the city’s green oasis. When we got to Bangalore, in 1966, it had 1.4 million people. Today it is roughly ten times that size.

             My memories of India are dim, obviously. But the family albums show scenes that would fit nicely on the No White Saviors Instagram account. Here I am, riding an elephant at what looks to be a birthday party. Here’s my baby sister, in the arms of her Indian nanny. Here’s the rest of our large household staff: cook, cleaner, security guard. Here’s the whole family decked out in traditional Indian white cotton dhotis. The most remarkable photo shows my brother beaming next to Indira Gandhi, who towers a bit stiffly over him. Our family was visiting a volunteer somewhere in the countryside, and word spread that the prime minister was going to be disembarking at a nearby railway station. So we went down there to see if we could catch a glimpse of her. At such scenes, a child was customarily chosen to greet the arriving dignitary. The volunteer we were visiting held my brother aloft, urging officials to select him. Eventually young Jeffrey Zimmerman was ushered to the front of the crowd to hang a garland on Mrs. Gandhi. Talk about white privilege! It doesn’t get more privileged than that.

    What were we doing in India, other than making it — to quote Teju Cole — a “backdrop for white fantasies of conquest and heroism”? A few months before we arrived, Indira Gandhi met with Lyndon Johnson at the White House to discuss food shortages in India. Under its “Food for Peace” program, the United States agreed to send 3.5 million tons of wheat, corn, and other crops to India. Gandhi’s government also partnered with the Rockefeller and Ford Foundations to bring new methods and products to Indian farmers. Many of the Peace Corps volunteers whom my father directed were engaged in programs to improve agricultural practices, too. I can’t say I know what effect they had on farming, or whether local workers could have performed the Americans’ jobs more effectively or efficiently. But here’s what I do know: India kept requesting more and more Peace Corps volunteers, until the Bangladesh war in 1971 soured India-U.S. relations and led to their withdrawal. 

    That has been the pattern for sixty years: when the Peace Corps is asked to leave a country, it is for geopolitical reasons rather than programmatic ones. The people who actually host volunteers — school principals, health-clinic directors, and government officials — almost always want more of them, which isn’t something you see mentioned very often by the critics of white saviors. If the volunteers are so clueless and useless — or, worse, if they are actively visiting harm on other countries — why would these countries continue to invite them? When the question is asked at all, on anti-savior social media, the most common answer is that non-white hosts have “internalized” white-saviorism themselves. “I will always carry the assumptions of villagers that I am better, smarter, and work harder than my local counterparts,” a Peace Corps volunteer in Cambodia wrote in 2019, before the pandemic brought everyone home. “Does my contribution have enough value to outweigh the perpetuation of the white savior complex on the local people?” Note the choice of words here: his very presence foists white-saviorism on his hosts, who are feeble and powerless to resist its seductive wiles. That patronizes “local people” in the guise of protecting them.

             And here is something else I know: my parents forged deep and lasting human bonds in India. Their closest friends were the parents of an Indian kid I met at school. (It was a girls’ school, which is a whole other story.) The anti-savior critics might dismiss that as mere “cultural exchange,” which doesn’t do anything to change lives. They are wrong. At the Peace Corps’ twenty-fifth anniversary march in Washington in 1986, a man accosted my father and declared, “I’ve been waiting two decades to talk to you.” My dad had a self-deprecating sense of humor — that is to say, he was Jewish — and he told me he half-expected that the guy would raise his middle finger and shout, “Fuck you!” Instead the man said that he had gotten sick during the first months of his Peace Corps service in South India. He made his way to Bangalore, presented himself at my dad’s office, and announced that he wanted to go home. My father suggested that he go back to his village and give it one more try. He did just that, and he had lived there ever since. He married an Indian woman, had a bunch of kids, and worked in several different jobs in the health and education sectors. Perhaps he had once believed that he would save India, from disease or destitution or something else. Instead, he said, India saved him from the dull and altogether predictable life that awaited him back in the United States. America was his birthplace, but it wasn’t his home. India was.

    Our next stop was Teheran, where we arrived in January 1969. It was flush with oil money and foreign workers, which gave the city a lively boomtown feel. Out in the countryside, a different set of changes was underway. The “White Revolution,” declared a few years earlier by Shah Mohammad Reza Pahlavi, redistributed land and sent health and literacy workers into rural areas. It also enfranchised and educated women, which outraged some Islamic religious leaders. So did the Shah’s development of secular courts and schools, which reduced the clergy’s power and influence in both realms.

             Peace Corps volunteers participated deeply in all of these modernizing efforts. In the cities, they taught at universities, advised local governments, and worked in the arts; a handful of American professional musicians were even recruited by the Peace Corps as volunteers in Teheran’s symphony orchestra and opera. Rural volunteers taught in primary and secondary schools or served in health clinics, sometimes alongside teams of urban Iranians who were doing the same. The Peace Corps volunteers tended to be slightly older and more skilled than earlier cohorts, owing to political changes back home. In 1969, shortly after Richard Nixon entered the White House, a group of returned Peace Corps volunteers called for the abolition of the agency: as the war in Vietnam raged, they argued, the Peace Corps provided cover for American violence and imperialism around the world. (Ours is not the first college generation to look askance at the Peace Corps.) Nixon was only too happy to get rid of the agency, especially after volunteers briefly occupied its headquarters in 1970 to protest the bombing of Cambodia and the slaughter of antiwar protesters at Kent State. Nixon told his aide Lamar Alexander — later the Secretary of Education and then a senator from Tennessee — to seek an appropriations cut in Congress as the first step towards zeroing out the Peace Corps. But Patrick Buchanan, another rising young GOP politico, warned Nixon that slashing the Peace Corps’ budget at that point would provoke a “real storm” among the “Kennedyites.” Buchanan instead suggested publicizing drug arrests and other overseas blunders by immature volunteers, who would eventually dig the agency’s grave.

             No such luck. To the chagrin of Nixon and his cronies, the Peace Corps retained its strong bipartisan support in Washington. But it did shift its priorities towards older and more professionally prepared volunteers, to head off the charge that it was a haven for pot-smokers and draft-dodgers. (“‘Technically qualified’ was a euphemism for ‘not liberal,’” quipped Peace Corps evaluator Charlie Peters, who went on to a distinguished career in journalism.) Several married volunteers who worked for my father even brought children with them, which was unheard of before that time. Yet this was hardly a straightlaced crowd; if anything, antiwar protest and youth culture back home probably made them more radical in their politics — and more hippyish in their habits — than earlier volunteers. My mother tells a great story about taking a long trip with my dad to visit a group of volunteers up in Tabriz, in the northwestern part of Iran. Famished upon her arrival at their house, she noticed a tray of brownies and asked if she could have one. You won’t like them, the volunteers said; they’re old and stale. My mother, characteristically, would not take no for an answer. Biting into a brownie, she deemed it delicious. She ate another, and then another. Eventually, the terrified volunteers drew her aside and explained that the brownies were laced with marijuana. They pleaded with her not to tell my dad, who didn’t hear about it until many years later. 

             Nor were the volunteers mere mouthpieces for the Shah, whom they mocked privately as “George” (as in “George Bernard Shah”). For a variety of reasons, including his recognition of Israel, the Shah was a strong ally of the United States. But he was also a brutal dictator, torturing dissidents and muzzling the independent press. Back in the States, protesters imagined that the Peace Corps — like other parts of the foreign policy apparatus — was helping to prop up pro-American tyrants. Around the world, though, a much more complicated pattern emerged. As the historian Beatrice Wayne has shown, the Marxist revolutionaries who overthrew Ethiopian ruler Haile Selassie often credited their changed political consciousness to the Peace Corps volunteers who taught them in the early 1960s, including the future senator Paul Tsongas. Most of the volunteers were middle-of-the-road liberals, not wild-eyed Marxists. But they led debates about Marx — and many other controversial matters — in their classrooms, which opened students’ minds to new ways of thinking and acting. Likewise, Peace Corps volunteers in Iran subtly undermined the same regime that their government was supporting. Opponents of the Shah befriended several volunteers, who quietly cheered the campaign against him. And as enmity towards the United States mounted, fueled by the simmering Islamic revolution, the volunteers reminded Iranians that all Americans were not anti-Muslim bigots or stooges for the Shah.

             But some of them were. The best decision my parents ever made was to send my brother and me to the Community School, which had been founded in the 1930s by American missionaries. It evolved into a thoroughly secular and cosmopolitan institution, patronized by families from dozens of countries — including many Iranians. That made it very different from the Teheran American School, where most of the American military and corporate types sent their kids. They didn’t speak any Farsi, and they called Iranians “ragheads.” I played Little League with them at the U.S. army base in the heart of Teheran’s Ugly American bubble, where they munched on hamburgers and made fun of the servants. Even as a nine-year-old, I knew that wasn’t right.

             Still, I was enormously proud of America. We were in Teheran for the Apollo 11 moon landing, a moment of huge American celebration — and, yes, American conceit — around the globe. And on United Nations Day, when everyone at Community School put on a short play about their country, we Americans staged the Thanksgiving story. We made a big Plymouth Rock out of papier-mâché, which we painted gray. I played Miles Standish, standing astride the Mayflower and shouting “Land ho!” as we approached the virgin wilderness. The folks over at No White Saviors would have a field day with this: it was a way to explain American power without addressing American conquest. A neat trick, if you can pull it off. But still I learned some important history lessons at U.N. Day. When I came home, I told my mother about the large Israeli delegation that marched under a blue Star of David. She laughed. The “Israelis,” she explained, were Iraqi Jews who had fled the Baathist revolution. They didn’t want to identify as Iraqis, which wouldn’t go over well in Iran. So they were Israelis instead. Problem solved.

             We were also in Iran for the epochal heavyweight boxing match between Muhammad Ali and Joe Frazier. As a Muslim, Ali was a huge celebrity across the Middle East. One evening, when my parents were on the road, our cook asked me why Ali was American but didn’t look like me. My Persian had gotten pretty good by then, and I explained to him—as best I could—that Africans had been enslaved by white people and transported across the Atlantic. “Really?” he asked. “You did that?” Well, yes. Not me personally, I said, but people who resembled me. The cook frowned and his brow furrowed. If our real purpose was to burnish white superiority in the non-white world, we weren’t doing a very good job of it.        

    Neither of my siblings ever evinced any interest in joining the Peace Corps. My brother doesn’t have as fond memories of those years as I do. My parents were away a lot, and I think he felt responsible for us while they were gone. And my sister was too young to recall much of anything. But I can’t remember a time when I did not think that I would be a volunteer. 

             In 1983, when I graduated from college, I applied for exactly one job: the Peace Corps. The agency was weathering another round of assaults from another Republican administration. David Stockman, Ronald Reagan’s famously Scrooge-like budget director, recommended a big cut for the Peace Corps. Meanwhile, GOP apparatchiks at the Heritage Foundation charged that the agency had lost sight of its Cold War mission: to provide a “counterweight” to America’s Soviet foe. Indeed, the original Peace Corps charter in 1961 required volunteers to receive training in the “philosophy, strategy, tactics, and menace” of communism. To head off the Republican attacks — and, of course, to protect its appropriation — the agency reinstituted anti-communist lessons for trainees. When my cohort met in West Virginia, before we left for Nepal, a Peace Corps staffer wrote the charter language about anti-communism on a whiteboard and said, “This is what the law requires us to teach.” That seemed like something that might happen in a communist country, someone replied, and everybody laughed: the lesson was a joke, and we all got it. Maybe such instruction made sense when the agency was founded, but the Soviet Union was on its last legs by that point, and the entire exercise seemed anachronistic as well as propagandistic. 

             As in the Nixon years, meanwhile, the Peace Corps announced that it was recruiting volunteers who were “older and better-trained” (read: sober and politically moderate) rather than the presumably radical and irresponsible youngsters of the Kennedy-Kids prototype. I couldn’t find much evidence of that shift in my own group. We were still mostly “B.A.-generalists” (as the Peace Corps called us) coming straight from college, who didn’t know how to do much of anything. Out at my village, in the Pyuthan district of western Nepal, my closest Peace Corps neighbor was a woman who had earned an education degree and actually taught (imagine!) in an elementary school back in America. It took me three hours to walk to the house where she and her husband lived. On the weekends, I hiked over there and picked her brain. I still think she taught me more about teaching than anyone I have ever met.

             But some of the work was intuitive, at least to an American. As a friend once told me, teaching is a lot like parenting: for the most part, you do it as it was done to you. In Nepal, most elementary-level teachers taught via repetitive chants. You knew you were approaching a school if you heard a chorus of kids singing in Nepali, “Two times two is four, two times four is eight, two times eight is sixteen,” and so on. English instruction happened in the same manner. The kids would say an English phrase and then its Nepali translation, over and over again: “This is a cat. Yo biralo ho. This is a dog. Yo kukur ho.” That wasn’t the way I learned, and it wasn’t the way I taught. Writing with a rock on the charcoal-covered piece of wood that served as our blackboard, I drew a cat. Everyone said the Nepali word for it: biralo. I said, “This is a cat.” Then, pointing to the board, I asked a kid, “What is it?” With a bit of coaxing, she said, “This is a cat.”

    Soon I had the kids drawing their own pictures on the board and asking each other — in English — to identify them. I wrote little plays for them to stage, in which Ram and Sita (the Jack and Jill of Nepal) discussed feeding water buffaloes, cooking rice, and other day-to-day activities in their lives. Eventually the students wrote their own dialogues, introducing new characters — Gopal, Durga, Soorya — and different topics: marriage, childrearing, and school itself. A student played John-Sir in one memorable exchange, donning a makeshift straw wig to mimic my Jew-fro. It brought the house down.

             Was my way of teaching “better” than the standard Nepalese method? Call me a white savior, but I believe it was. And here is how I know: my students told me. Nothing I did was particularly creative or innovative; it just made sense to me. But to the students it was a revelation. They had never experienced a classroom that required them to engage and imagine in the ways mine did. Eventually my Peace Corps neighbor and I designed trainings in these methods for other teachers in the region. We held three week-long sessions, in different parts of the district, where teachers put on plays and wrote songs and — most of all — laughed and laughed and laughed some more, in big collective guffaws that echoed out of the schoolhouse and into the hills. Sometimes, I’m sure, they were laughing at the weird Americans and our goofy, smiley mix of informality and exuberance. (“Why are you people always so happy?” a Nepalese friend once asked me.) But I also think they were having fun, and learning. I can’t tell you what effect these trainings had on their own instruction, or on anything else that happened in Nepal. But I also can’t tell you whether the history course I taught last semester at Penn will make any difference in the lives of my American students, either. At some level, all education is an act of faith. You throw a whole bunch of stuff at a wall — or, in Nepal, at a charcoaled piece of wood — and you hope that something sticks.

             And you never know what will. About fifteen years after I left Nepal, I got a call from Akron, Ohio. “John-Sir?” a voice on the other end said. That could only mean one thing: it was one of my Nepalese students. He had passed the national school-leaving exam and had gotten a job teaching in another district, where there was a female Peace Corps volunteer. The first thing he said to her, she later told me, was “Hello, Miss. Do you know John Zimmerman?” You know the rest. They had fallen in love, married, and moved to Ohio to be near her family. My student happily recounted our classroom dialogues as well as the “Steal the Bacon” game (which I rendered, he said, as “Steal the Pig’s Meat”) that we played outside. But he also said he had hesitated to contact me, because he feared that I was still “angry” with him. “Angry?” I asked. “About what?” Near the end of a class, my student recounted, he stood up and started erasing the girls’ blackboard before they had finished copying the evening homework assignment. (Girls and boys sat on separate sides of the classroom in Nepal, each with its own blackboard, so I wrote the assignment on both of them.) I screamed at him, he recalled, the only time the students had ever heard me do that. “You should be ashamed of yourself!” he remembered me yelling. “The girls have as much right to learn as you do!”

             I’d love to know how No White Saviors would parse this little tale. At first glance, it is perfect fodder for their critique: heroic white dude rides into town, spreading truth — his truth, that is — to the benighted brown people. Who was I, they might ask, to police how the Nepalese thought or acted around gender? At the same time, though, I’m sure that many people asking that question fashion themselves feminists of some shape or form. If I had failed to censure my student for erasing the girls’ blackboard, wouldn’t I have been “complicit” — to quote another of our current platitudes — in sexism and misogyny? When I tell my American students this story, they often say that my real mistake was raising my voice; I should have spoken to the student afterwards, calmly explaining the error of his ways, instead of humiliating him in front of his peers. Perhaps so. But whatever means I used, was it OK — or even necessary — for me to correct him at all? If he actually believed that the girls were second-class citizens, who says he was wrong?

             Here’s my answer: the girls did. When my student started erasing their blackboard, he told me, they raised their voices in protest. Indeed, he said, that’s what drew my attention to him in the first place. You can say that I was behaving like a white savior, and you might be right. But I don’t see how you can condemn me for imposing my views of gender, without regard for local tradition, and then disregard the expressed wishes of the girls. That sets you up as the arbiter of what is really “traditional” in Nepal, even as it throws half of the country — its female half, of course — under the proverbial bus. It is imperial anti-imperialism, the saviorism of No White Saviors.

    I fell victim to that, too, in my fateful meeting with the missionary. Unlike the blackboard incident in class, where the girls objected to my student's behavior, I didn't hear anyone complain about the missionary. The complaints were in my head, and I projected them onto Nepal. Perhaps that's inevitable, to one degree or another, when human beings from different parts of the world encounter each other. Every living person's perspective is partial. People in Nepal projected onto America, too, in ways that I found endlessly fascinating. Americans made lots of money, I was told, and they also had sex 24/7. How are you so rich, a guy once asked me with a straight face, when all you do is screw? I laughed, but it was no joke for female Peace Corps volunteers who had to fend off the men who believed it. Racial minorities and LGBTQ volunteers have faced special challenges, too. Taboos against dark skin run rampant around the globe, as does discrimination against gays; in several Peace Corps countries, indeed, same-sex love is illegal.

             Should Peace Corps volunteers try to challenge that kind of prejudice, as best they can? I think so. And if that sounds like white-saviorism, I have news for you: the Peace Corps isn't as white anymore. In 1990, four years after I returned from Nepal, just seven percent of Peace Corps volunteers were racial minorities; by 2020, when volunteers around the world were evacuated, thirty-four percent were non-white. That doesn't mean that they will be perceived as such outside of America, of course. Back in the 1990s, I conducted oral histories with black Peace Corps volunteers who had served in Africa. Many of them were given anomalous nicknames like "black white" or "native foreigner," which acknowledged the black volunteers' connection to the continent while simultaneously underscoring their difference from it. Most of all, the volunteers learned how much difference a category like "black" could contain. Countries such as Kenya and Nigeria were as diverse as the United States, volunteers discovered, containing dozens of different ethnic and language groups. And these people often hated each other, too, just like racists back in the States. "The color of the foot on my neck doesn't matter, as long as there's a foot on my neck," one black ex-volunteer told me, describing the ethnic prejudices he observed in Africa. "Discrimination is discrimination."

             And love is love, as another black returned volunteer added. "I got an appreciation of where I came from, and the beauty of that," he said. "Basically, we're all the same." Although it sounds hopelessly trite in these jaded times, love for our shared humanity, our universal commonality, is the only way to appreciate our differences and to communicate (literally: to make common) across them. Six decades after it was founded, the Peace Corps remains a foremost symbol of that ideal: love, respect, friendship, and cooperation across oceans and borders, cultures and races. So it will always be an easy target for politicians and ideologues who see it as either starry-eyed nonsense or blind propaganda. In 2017, the Trump Administration proposed slashing the Peace Corps' budget by fifteen percent; two years later, Republicans in Congress introduced a measure to eliminate the agency's appropriation altogether. Channeling his inner Donald Trump — or, perhaps, his subconscious Richard Nixon — co-sponsor Congressman Mark Walker (R-NC) said we should "put America first" by using the saved dollars to support disaster relief at home. The bill got 110 votes, all from the GOP, which wasn't enough to get it through. By June of this year, as the pandemic abated, Peace Corps volunteers had returned to eleven countries. The agency is recruiting for about twenty other nations, too.

             Who will hear that call, especially on our increasingly particularist and identity-inebriated campuses? Very few people, I fear, if No White Saviors and their allies have their way. They use a different vocabulary than conservative critics, of course, sounding more like Noam Chomsky than Newt Gingrich: the agency is racist, neocolonialist, and so on. But the upshot is the same: the Peace Corps is a bad deal, so it’s time to bring everyone home for good. “If we actually solved the world’s problems, who would pay us?” a volunteer in Zambia asked in 2019, shortly before volunteers were evacuated worldwide. The Peace Corps is a junket, she said, wasting scarce resources that could be better spent in America. Mark Walker couldn’t have put it better himself. 

             But that spectacularly misses the point. I have never met a volunteer who thought that they solved the world’s problems, and I’m very sure that I didn’t. I wanted to help, and to learn, and to live. That is all. The scorn and the distrust of our present-day politics—on both sides of the aisle—cannot refute or erase the simple desire for connection. I left Nepal feeling humbled, not privileged. I learned how many different ways there are to be human, and how difficult it is to connect across the differences. But I also learned that we can do it, by trial and — mostly — by error. I learned that universalism is real, that it can be verified by experience, and that difference is not the last word on human life; that one is made wiser by the people one teaches, even when the differences are colossal; that there is more to the human world than power relations, even as we work to make them more equitable; and most importantly, that we mock altruism at our peril. The greatest danger of our moment is not racism or sexism, ableism or cisgenderism. It is cynicism, which tricks us into believing that we cannot overcome the differences—and, yes, the prejudices—that separate us from each other.

             When my service ended in Nepal, the teachers held a party for me at our school. The rice wine flowed, and everyone recounted ridiculous things I had said when I first got there and didn’t know any better. My favorite one: a teacher asked me to visit his house on the weekend, and I replied, in Nepali, “Yes, I’d like to meet your wife.” That’s what you might say in America, if a colleague invited you over, but nobody told me that the verb “to meet” in Nepal had another, earthier connotation. It was horrifying, humiliating, and altogether human: what used to be known as an honest mistake. “It doesn’t matter,” the teacher reassured me, patting my shoulder. And everyone urged me to “come back soon,” a common Nepali farewell.

             It wasn't soon, but I did come back. Twenty-five years later, in 2010, I returned to the village with my seventeen-year-old daughter. The tractor road was finally finished, so the three-day walk had been trimmed to about eight hours. The first guy I saw said, "Hey, John-Sir, where have you been?" Everywhere, I wanted to say. Nowhere. It doesn't matter. The school sponsored an impromptu "Welcome Home" reception, at which my daughter and I were pelted with flowers and doused in red tikka powder. I stood up to give a speech in my broken Nepali, which was slowly coming back to me. But I broke down in tears, overwhelmed by my good fortune to have loved — and been loved by — these good people. Was this sentimental? Sure. Was it condescending? Not for a second. Perhaps I had acted, in my worst moments, as their white savior, but they saved me too, from the rigid categories that we now deploy, loudly and dogmatically, to define and divide us. Black and white, Jew and Gentile, Nepalese and American: we are all different and we are all the same. Come back soon, John-Sir. Come back home. 

    Rapunzelania

    It is not wholly myself, this shadow tugging itself loose 

    as though it knew better where to go from here

    what to do and see

    before the ship leaves with the tide. Not a thousand ships, you understand,

    just the one. Tall and proud, I suppose, and in a dreadful hurry,

    what with the wind so uncertain.

     

    But we are not near the sea, my shadow and I,

    we are in a high, quiet place, one of those improbable towers —

    Rapunzel’s, if you will —

    and the prince is nowhere to be found, he is perhaps out sunning himself

    on a rock, like a gecko

    with destiny in the flick of his tail.

     

    Every love story ends badly: we know this,

    as we know that leaves will grow toward the sun, that the gods will die, 

    and the floodwaters rise. 

     

    I recall meeting a prince once or twice,

    and a few princesses, slender and lethargic,

    casting about for something to do.

    One of them made luxury backgammon boards, boxes 

    of fine walnut wood inlaid with bees and cicadas.

     

    Well. I will stand at the window, counting myself lucky. It is dark here, 

    and peaceful. Out flies my restless shadow. Let it. 

    You can only leave the tower if a prince climbs up your hair, and this is a challenge

    because you know princes are quite heavy, heavy as earth, 

    they come bearing happiness which swells as they climb,

    takes on its own shape, like a herald 

    riding the prince’s shoulder.