Honey and Poison: On Corruption

    I

    For as long as human beings have had governments, they have worried about public corruption. The Hebrew Bible warns repeatedly that those in authority — especially judges — should not take bribes, “for bribes blind the clear-sighted and upset the pleas of those in the right.” The Arthashastra, a third-century Indian text on the art of statecraft, cautioned that just as one cannot avoid “tasting honey or poison on the tip of the tongue,” government officials will inevitably be tempted to steal public money for themselves. Countless other examples — from classical Greece and Rome to Imperial China to the Islamic empires of the Near East — testify to the pervasiveness of public corruption across cultures and across time. Indeed, from the ancient world up through today, corruption has been a central concern of statesmen, philosophers, and journalists — and the undoing of powerful figures and the catalyst for major reform movements. Anxiety over corruption also figures prominently in culture across the centuries — from Shakespeare’s Brutus accusing his friend Cassius of having “an itching palm, to sell and mart your offices for gold to undeservers” to Lin-Manuel Miranda’s Alexander Hamilton rapping that “corruption’s such an old song that we can sing along in harmony.”

    And yet the problem of corruption, for all its ubiquity, is often neglected. Perhaps most strikingly, for a very long time the international development community — a shorthand term for the various government agencies, multilateral institutions, and non-governmental organizations focused on improving the well-being and opportunities of the residents of poorer countries — paid scant attention to corruption. This may have been due in part to the belief that corruption, while immoral and unjust, was only marginally relevant to economic development. The comparative lack of attention to corruption was also related to concerns about the political sensitivity of the issue: to talk about corruption is almost always to talk about politics. Indeed, at the World Bank in the 1980s and early 1990s, officials rarely uttered the word “corruption” in public, and referred to it behind closed doors as “the C-word” — a nod to the fact that this was a problem that everyone knew existed but agreed should not be discussed openly.

    Roughly a quarter-century ago, this began to change. As is often the case, the process was gradual and the causes complex, so it would be a mistake to attribute the emergence of anticorruption as a central international development issue to any one person or event. Still, at least symbolically, a breakthrough moment occurred in October 1996, when James Wolfensohn, then president of the World Bank, gave what came to be known as the “cancer of corruption” speech. Addressing the annual meeting of the World Bank and International Monetary Fund, Wolfensohn declared in no uncertain terms that to fight global poverty, organizations such as the Bank needed to promote “transparency, accountability, and institutional capacity,” and, more specifically, “to deal with the cancer of corruption.” Though Wolfensohn did not dwell on the issue — his remarks on corruption took up less than two minutes of his address — he did provide a succinct explanation of why corruption was a development issue: “Corruption diverts resources from the poor to the rich, increases the cost of running businesses, distorts public expenditures, and deters foreign investors.” Today such a statement would be unremarkable. But back in 1996 it was a big deal, especially since, as Wolfensohn later recounted, he had been warned shortly after the start of his presidency not to talk about “the C-word.” He ignored that warning — and, crucially, he did so by reframing corruption not as a purely political or moral issue, but as an issue that directly affected economic development. Corruption was now squarely on the international development agenda, and it remains so to this day.

    Over the generation since Wolfensohn’s speech, leading multilateral organizations, including the World Bank, IMF, United Nations, and OECD, have paid increasing attention to this issue, forming divisions and sponsoring projects devoted to anticorruption activities. We now have an international anticorruption agreement, the UN Convention Against Corruption (UNCAC), to which most countries in the world are parties (even if compliance is uneven at best); there are also regional anticorruption agreements in the Americas, Europe, Africa, and elsewhere. Donor agencies, such as USAID, the UK’s Department for International Development, Germany’s GIZ, Sweden’s SIDA, and many others, support extensive anticorruption programming. Anticorruption, in short, is on the map.

    But despite this progress in the cause of anticorruption — the more sophisticated conception of it, the protocols and agreements — the idea that the international development community should make the fight against corruption a high priority is not universally accepted. Among the many objections to the emphasis on anticorruption as an integral part of international development, I want to highlight — and debunk — three quasi-myths that have gained more traction in these debates than they deserve. I call these ideas quasi-myths, rather than simply myths, because each of them does have a kernel of truth. But each of these three arguments is, on the whole, more false than true, and more misleading than helpful.

    The first quasi-myth is that corruption is a culturally relative concept, such that practices that wealthy Western countries consider corrupt are acceptable in other societies. The familiar refrain here is that in some cultures what “we” would consider a bribe, “they” would consider a gift — or more generally that Western norms regarding the line between the public sphere and the private sphere do not apply in many non-Western societies. Thus, the argument continues, when organizations such as the World Bank or USAID or the OECD promote an anticorruption agenda in developing countries, they are in fact imposing a set of values that are inconsistent with local customs and traditions. In its strongest form, the argument accuses those promoting an international anticorruption agenda of engaging in a form of “moral imperialism,” or even alleges that these efforts are intended to advance Western economic interests (say, by lowering trade and investment costs) while stigmatizing non-Western modes of government and social practices as morally inferior.

    The idea that “corruption” is a culturally specific concept, such that practices that would be seen as outrageously corrupt in the West are considered legitimate elsewhere, has a long history. Consider, as one particularly infamous example, the impeachment trial of Warren Hastings, the first British Governor-General of India, which began in 1788 (and dragged on, with frequent delays and interruptions, until 1795). Hastings was impeached for mismanagement and corruption. In his defense, he argued, among other things, that it would be inappropriate to apply British moral standards to his conduct in India, because practices that Englishmen would deem corrupt were part of the normal operation of government in Asia. Edmund Burke, who served as chief prosecutor at Hastings’ trial, denounced this argument as “geographical morality.” To the contrary, Burke insisted, “there is no action which would pass for an action of extortion, of peculation, of bribery and of oppression, in England, that is not an act of extortion, of peculation, of bribery and oppression, in Europe, Asia, Africa, and all the world over.” Despite Burke’s pleas, the House of Lords acquitted Hastings. To be sure, many of those who accuse modern anticorruption campaigners of moral imperialism would condemn Hastings for his conduct in India — he was, after all, a literal imperialist. Yet Hastings’ appeal to “geographical morality” — what we might today call “moral relativism” — has a strong family resemblance to this more modern critique of the international anticorruption agenda.

    Is there any truth to the argument that the international anticorruption campaign seeks to impose — deliberately or unintentionally — a set of values, practices, and institutions that are inconsistent with local norms and cultures in non-Western countries? The short answer is no. The overwhelming weight of the empirical evidence — derived from surveys, interviews, in-depth case studies, and other sources — indicates quite clearly that Burke was right and Hastings was wrong. At least when it comes to what we might think of as the “core” forms of corruption — bribery, embezzlement (what Burke called “peculation”), and the like — there is actually remarkably little variation in attitudes and moral evaluations across societies and cultures: these practices are broadly understood as corrupt and wrong, and are roundly and nearly universally condemned.

    To be sure, there is more variation across societies with respect to what we might think of as “grey area” corruption, as well as with respect to when certain kinds of corrupt acts might be justifiable given the circumstances. This is the kernel of truth to the argument that different cultures have different attitudes toward what counts as (wrongful) corruption. Even here, though, we need to be careful about the implicit cultural condescension of assuming that what “we” consider corrupt, “they” would consider appropriate. Very often the difference runs in the other direction. After all, campaign contributions and lobbying activities that many countries would consider blatantly corrupt are treated in the United States as not only lawful but as constitutionally protected. More importantly, the extent of cultural variation in the understanding of corruption is relatively modest, and not pertinent to the forms of corruption that Wolfensohn and others like him have in mind. The idea that bribery and embezzlement are the concerns only of wealthy Western countries, and that the prominence of an anticorruption agenda is therefore a form of Western neo-imperialism, finds essentially no support in the extensive research on what the residents of non-Western countries actually think.

    Where, then, does this myth come from, and why does it persist? I have three conjectures. First, some who push this idea are as blatantly self-serving as Warren Hastings. The employees of Western multinationals who pay bribes to government officials in non-Western developing countries, and the officials who take those bribes, have an incentive to suggest that in the countries in question these so-called “bribes” are actually a manifestation of a rich and longstanding cultural tradition of offering gifts as a sign of respect. Second, some of those who advance the “moral imperialism” critique of the anticorruption agenda seem to harbor a deep (and perhaps understandable) mistrust of Western governments and multilateral institutions generally; these skeptics are primed to be receptive to the idea that an anticorruption campaign spearheaded by such entities is likely to have a hidden agenda.

    The third possible explanation, which I would guess to be the most important, is that it is easy to mistake cynical or fatalistic resignation about corruption — an attitude that is quite widespread in much of the developing world — for the cultural legitimacy of corrupt practices. But these are not at all the same thing. Many people tolerate corruption, or even participate in petty corruption themselves, out of a feeling that they are trapped, that corruption is inevitable, that the system is rigged and there is nothing they can do about it. Such tolerance and participation can be misperceived as “cultural acceptance,” especially when accompanied by rationalizations that invoke venerable cultural tropes. But grudging tolerance and rationalization are not the same thing as moral assent and legitimacy. And when it comes to corruption — at least to core forms of corruption like bribery and embezzlement — there is far less variation in attitudes across countries and cultures than one might expect.

    The second quasi-myth that is sometimes offered up as a reason to object to the international development community’s focus on anticorruption is the idea that, at least in developing countries, corruption can actually help the process of economic development. Rather than being sand in the wheels of the economy, the argument goes, corruption may instead grease those wheels, enabling entrepreneurs and investors to cut through burdensome red tape and enter markets that would otherwise be inaccessible. This idea was nicely captured by Samuel Huntington in his 1968 book Political Order in Changing Societies. “In terms of economic growth,” Huntington wrote, “the only thing worse than a society with a rigid, over-centralized, dishonest bureaucracy is one with a rigid, over-centralized, honest bureaucracy.” What he meant was this: if the government has put in place excessive and inefficient rules and regulations that stifle economic activity, then the economy will be better off if the public officials charged with enforcing those regulations take “grease” payments to look the other way, rather than insisting on rigorous and scrupulous enforcement of the misguided rules. It follows from this that vigorous action to suppress corruption — for example, by monitoring bureaucrats more closely and imposing stiffer penalties on those caught offering or accepting bribes — may, if unaccompanied by other reforms to the regulatory system, actually worsen a society’s economic prospects.

    We should give the argument its due, because it too contains a kernel of truth. When the formal rules are inefficient — for example, when securing a business operating permit through the normal channels would take an inordinate amount of time and expense — then corruption may indeed function as an efficiency-enhancing grease. That said, even proponents of this view would acknowledge that in such circumstances corruption is at most what economists would call a “second-best” solution. Those who endorse the efficient grease hypothesis would also presumably acknowledge that other forms of corruption can have substantial negative impacts on the economy — for example, when corruption facilitates the subversion of government programs that enhance productivity and welfare. So the question whether corruption is, on the whole, more likely to grease or to sand the wheels of economic development is ultimately an empirical question.

    There has been quite a bit of research on this empirical question since Wolfensohn delivered his “cancer of corruption” speech, and while the issue is not entirely settled (issues like this rarely are, given the challenges of isolating causal relationships in the available data), the overwhelming weight of the evidence suggests that Wolfensohn was more correct than Huntington. Corruption is far more often an impediment to economic development than a facilitator of economic development. One of the reasons for this was identified by Gunnar Myrdal in his 1968 classic Asian Drama: An Inquiry into the Poverty of Nations. Writing in response to Huntington and others who had argued that corruption was a way for entrepreneurs to cut through bureaucratic red tape, Myrdal pointed out that much of this red tape had been deliberately imposed precisely to create more opportunities for extracting bribes. While excessive red tape may lead to corruption, corruption also leads to the proliferation of red tape — which suggests that effective anticorruption measures can make it politically easier to eliminate needless regulations.

    An even more important reason why corruption is more often associated with worse economic outcomes is that, while it may be true in certain contexts that corruption enables the circumvention of excessive business licensing requirements and other inefficient rules, there are a whole lot of other things that governments do that are important to economic development — things such as investing in infrastructure, supporting health and education, maintaining order, providing impartial courts and dispute resolution services, and enforcing rules that protect the integrity and efficiency of markets — that are undermined by widespread corruption. So while we should not dismiss out of hand the idea that corruption can sometimes function as an efficient grease, and we should certainly be mindful of the fact that excessive, inefficient regulations can both encourage corruption and inhibit economic growth, the idea that fighting corruption in developing countries will prove counterproductive because of corruption’s supposed efficiency-enhancing properties seems by and large inconsistent with the best available evidence.

    The third quasi-myth that sometimes comes up in debates over whether anticorruption should be a high priority for the international development community suggests a quite different reason for skepticism. Even if one believes that endemic corruption in developing countries is both immoral and economically detrimental in those countries, some critics contend that there is still no point in making anticorruption a central agenda item, because there is nothing that can realistically be done about corruption, at least in the short to medium term. On this view, cultures of corruption are so deeply embedded in certain societies that corruption should be treated as an unfortunate but unavoidable constraint on a society’s development prospects — like being landlocked, or located in the tropics, or having deep ethnic cleavages. In this pessimistic view, even though widespread corruption is a problem, few of the reforms or initiatives championed by anticorruption advocates have a realistic chance of making more than a trivial difference. It is just not a problem that can be effectively addressed through new policies or institutional reforms; the best one can do is to hope that long-term historical trends eventually produce a cultural change — but even that might be optimistic, given the alleged persistence of cultures of corruption across extended historical time periods.

    Here again, there is an element of truth to the argument. Systemic corruption often does have self-reinforcing and self-perpetuating tendencies — corruption begets corruption, creating a vicious cycle that can make entrenched corruption very difficult to dislodge. But the idea that corruption is the inevitable product of some deep cultural tradition that developed centuries ago, and that only those countries lucky enough to have inherited a “good” cultural tradition (say, from northern Europe) have much hope of making headway against the corruption problem, is not only tinged with racism, but is inconsistent with most of the available evidence.

    For starters, there is no systematic evidence — when one controls for other factors (like per capita GDP) — that particular cultural traditions have a robust correlation with present-day corruption. I should acknowledge an interesting caveat: some researchers have found that, all else equal, majority-Protestant countries have lower levels of perceived corruption than majority-Catholic or majority-Muslim countries. But when one looks more closely at individual-level data, there is no strong evidence that individual Protestants have different attitudes toward corruption than do individual Catholics, Muslims, or others, and it seems much more likely that the apparent correlation between Protestantism and perceived corruption at the country level is spurious.

    But the more important evidence against the notion that countries are locked into particular levels of corruption by their cultural heritage is that this view is inconsistent with the historical experience of those countries that today are viewed as relatively less corrupt. Consider Scandinavia. Today we have a stereotype of the Scandinavian countries as being very clean — which is fairly accurate, at least if we compare Scandinavia to other parts of the world. We sometimes imagine that this is because of some longstanding and essential feature of Scandinavian culture. But if we were to step into a time machine and go back a couple of centuries, things would look quite different. Things were rotten in the state of Denmark not just in Shakespeare’s imagined medieval period, but up through much of the eighteenth century and beyond. The fight against corruption in Denmark got underway after the establishment of an absolute monarchy in 1660, but the Danish state did not get corruption under control until a series of reforms was adopted over the course of the eighteenth and early nineteenth centuries. As for Sweden, at the turn of the nineteenth century the Swedish state was extremely corrupt, particularly with respect to rampant nepotism and the purchase and sale of offices (and there was a fair amount of garden-variety bribery as well). To observers at the time, before the significant reform processes that took place in these and other countries, it might well have seemed that corruption was deeply embedded in these countries’ cultures. The “clean” Scandinavian culture that informs our contemporary stereotypes really only emerged in the mid-to-late nineteenth century.

    The extent to which modern “good performers” did not simply inherit a cultural tradition of clean government is illustrated even more vividly by the United States. To be sure, even today the United States is no paragon of government integrity. In comparison to most other countries in the world, though, American government has relatively low levels of bribery, embezzlement, and similar forms of corruption. But it was not always thus. Throughout much of the nineteenth century, corruption in the United States was rampant — especially at the state and local level, but at the national level as well. We do not possess international corruption indexes that go back to the nineteenth century, but if we did, and if such indexes were on a common scale, it is quite likely that the United States in the 1840s and 1850s, and perhaps as late as the 1890s, would receive corruption scores comparable to the scores that developing democracies like India, Brazil, South Africa, and Ukraine receive today.

    Certainly it seemed that way to domestic and foreign observers at the time. In 1849, in his book describing his travels in the United States, the Scottish journalist Alexander Mackay remarked that he had heard several Americans declare “that they believe their own government to be the most corrupt on earth.” Nearly a decade later, another foreign observer, the British MP William Edward Baxter, expressed his shock at the level of corruption in New York, reporting that “as great corruption exists [there] as was ever brought to light in the days of the Stuarts.” And it was not just foreign observers who made such damning comparisons. In 1858, the same year Baxter published his book, Senator Robert Toombs of Georgia lamented that while Americans may “speak of the corruptions of Mexico, of Spain, [and] of France, … I do not believe today that there is as corrupt a Government under the heavens as these United States.” A small-town newspaper editor who visited Washington D.C. that same year expressed his shock at the brazen purchase and sale of offices brokered by party leaders in the Senate, the White House, and various government bureaus, with “the actual sum of money to be paid for an office … as publicly named … as the price of dry goods are named between a dealer … and his customers.” These anecdotal observations, though perhaps a bit hyperbolic, have been largely corroborated by historians. The United States was, for much of the nineteenth and early twentieth century, mired in forms of systemic corruption not so different from those afflicting much of the developing world today.

    Why does this matter? It matters because if countries such as Denmark, Sweden, the United States, and others suffered from widespread and systemic government corruption at an earlier point in their histories, then we should question the notion that countries that today have corruption more under control have achieved this because of some deep and immutable cultural inheritance, a kind of lucky historical break, and that countries where corruption is entrenched and systemic are likely trapped in that state due to inherited norms that are too deeply ingrained in their cultures to be dislodged through institutional and political reform. If countries such as the United States managed to make the transition from a state of endemic corruption to one in which corruption, while of course still present, is aberrational and manageable, then this warrants the hope that other countries that today face a seemingly intractable regime of corruption might be able to make a similar transition.

    II

    But how can modern developing countries make such a transition? The historical examples are encouraging illustrations that such changes are possible and help to debunk the view that cultures of corruption are immutable — but they are less useful in supplying a template for modern reformers, given the dramatic differences in historical, economic, and political context. Even if we endorse the view that James Wolfensohn laid out back in 1996 — that anticorruption should be front and center in the international development agenda — we still need to ascertain what institutional reforms and policy initiatives can help to effect, under modern conditions in the contemporary developing world, the sort of transition that took place in wealthy Western countries much earlier.

    One popular prescription — though not one that institutions like the World Bank can openly advocate — is democratization. The logic here is straightforward and compelling. Corruption thrives when those who wield power are not accountable to those they are supposed to serve; and since democracy makes public officials accountable to the populace through regular elections — and because people in just about every country find corruption objectionable — more democratic countries are likely to have substantially lower levels of corruption than less democratic countries, all else equal. The global democratization project, on this view, is also a global anticorruption project.

    Matters are more complicated, however. It turns out that the evidence that democratization regularly results in improvements in government integrity is not terribly strong. It is certainly the case that, even when one controls for national wealth and other confounding factors, countries that have been strong democracies for quite a long time are less corrupt (or at least are perceived as less corrupt) than other countries. But if one excludes from the analysis the relatively small number of countries that have been full democracies for over forty years (mostly though not exclusively Western European countries and the former European settler colonies in North America and Oceania), there does not appear to be a robust and consistent relationship between the level of democracy and the level of (perceived) corruption. In other words, while old, established democracies do seem to be notably cleaner than other countries, new democracies and partial democracies do not appear less corrupt, on average, than non-democracies. (Some studies have even suggested that new democracies and partial democracies might be somewhat more corrupt than autocracies, though the evidence here is not as strong.) More anecdotally, it is not hard to come up with numerous examples of politicians suspected or known to be quite corrupt who regularly win elections.

    What explains the puzzling lack of strong empirical support for the view that democratization reduces corruption? One possibility is that democracy does indeed have corruption-suppressing effects, but only when democracy has become fully entrenched and institutionalized. If that is so, then we would have cause for optimism that some of the newer democracies will eventually see significant improvements in public integrity; we just need to be patient, and work to further strengthen democratic institutions. It could also be the case that new and partial democracies actually do have lower levels of corruption than autocracies, but autocracies are, on average, better able to suppress evidence of corruption, thus leading the international corruption indexes, which rely substantially on perceptions, to systematically underestimate the extent of corruption in autocracies relative to (newer) democracies.

    Maybe. But there is also a more pessimistic reading of the available evidence. It may be that “first wave” democracies are systematically different from second- and third-wave democracies, perhaps because the defining characteristic of a “democracy” (reasonably free and fair elections held on a regular basis) is not, by itself, sufficient to promote clean government. Without other norms and institutions in place, elections may become little more than competitions between rival patronage networks. Moreover, the expense of running a modern election campaign, coupled with the fact that electoral victory may be the key to (illicit) access to state resources, may actually encourage certain kinds of corruption. Indeed, many new democracies have seen an upsurge in corruption that is directly associated with the democratic process. Even though most citizens claim to find corruption reprehensible — and this claim may well be sincere — it may nevertheless be the case that in newer democracies in less developed countries, certain forms of corruption are helpful, perhaps essential, to winning elections and maintaining coalitions. If this more pessimistic view is accurate, then even though we can and should support democratization, we need to be more circumspect about the impact of democratization upon the attempt to eliminate corruption, at least in the short to medium term.

    Another common prescription for addressing systemic corruption through broad institutional reform focuses on shrinking the size and the scope of government. The idea here is that a root cause of much of the corruption that afflicts developing and developed countries alike is a bloated state sector, and an outsize role for the government — rather than impersonal market forces — in allocating resources. This is allegedly a recipe for corruption, because those with power over the allocation of resources (politicians and bureaucrats) will seek to leverage that power to obtain wealth, while those with wealth or connections will leverage those advantages to influence the public officials responsible for allocating public resources. Perhaps the most well-known proponent of the view that reducing government size is the key to taming systemic corruption was the economist Gary Becker, whose views were succinctly captured in the titles of two opinion pieces he penned in the mid-1990s: “To Root Out Corruption, Boot Out Big Government,” and “If You Want To Cut Corruption, Cut Government.”

    Yet this hypothesis fares even worse than the hypothesis that democratization is the key to reducing corruption. It turns out that larger government size (typically measured by government consumption spending and/or government revenue as a percentage of gross domestic product) is strongly and consistently associated with lower levels of perceived corruption, even when one controls for other factors like per capita GDP. In other words, those countries with governments that do more taxing and spending, relative to the overall size of the national economy, are generally perceived as cleaner than otherwise similar countries with smaller governments.

    While this relationship is quite well established, the reasons for it remain unclear. One possibility is that a larger government is associated with a stronger social safety net and lower levels of economic inequality, and this may reduce forms of corruption that are driven by economic insecurity. Another possibility is that larger governments tend to spend more on things such as education, which may have corruption-reducing effects. Larger governments may also have better paid, more professional, and more effective bureaucracies, and may invest more in effective law enforcement and judicial institutions. These and other hypotheses essentially suggest that the projects on which larger governments are spending public money often have corruption-reducing effects that tend to outweigh whatever increase in corruption may be associated with giving government officials a greater role in resource allocation.

    Alternatively, or in addition, perhaps citizens demand more integrity from their governments when those governments are doing more, and when a broader and more affluent set of citizens are the beneficiaries of government programs. Put another way, larger governments may be associated with stronger citizen demand for clean government. That possibility also suggests that the relationship between larger governments and lower levels of public corruption may be due, at least in part, to a kind of reverse causation: perhaps citizens are only willing to support significant growth in the size of government when the public sector has a reputation for integrity; where corruption is widespread, expanding government programs may be less popular and therefore less likely. Relatedly, government decision-makers may not try to tax or spend as much if they expect a greater share of the revenue or expenditures to be stolen or misappropriated. On this account, political support for government expansion tends to rise as (perceived) public corruption declines.

    While there remains considerable uncertainty about the mechanisms and direction of causation — whether expanding government tends to reduce corruption, or whether cleaner governments are more likely to expand — the simple hypothesis, propounded by Becker and others, that cutting government size is likely to be associated with significant reductions in public corruption finds little support in the available evidence. That prescription also does not accord well with the historical experience of countries such as the United States, where the period of greatest success in fighting corruption — roughly from the turn of the twentieth century through the end of the New Deal — was also a period of extraordinary expansion in the size and scope of government, especially at the national level. None of this is to deny that there are some governments, and some specific government programs, that are bloated and consequently vulnerable to corruption. It is also not to deny that other forms of excessive government intervention — such as the needless red tape and inefficient regulations that Huntington and others worried about — may contribute to corruption, and that certain forms of deregulation might therefore have corruption-reducing effects. But as a general matter, the neo-libertarian view that smaller governments are more honest governments finds precious little empirical support.

    If democratizing the political system and shrinking the state are not the magic keys to promoting government integrity, what sorts of measures are more promising? There is probably no single answer. Corruption is not one thing — it is an umbrella term that covers many related but distinct forms of misconduct, which may require different remedial approaches. Corruption takes different forms, and has different roots, in different countries. Therefore, as is true in so many areas of public policy, effective anticorruption efforts must be appropriately tailored to the local context. Those who work in this field are well aware of all this. Indeed, the assertion that there is no “one-size-fits-all” solution to corruption is repeated so often that it has become a truism.

    Still, while we should be cautious about making strong and unqualified claims about “what works” in fighting corruption, it would be a mistake to err in the opposite direction by neglecting the lessons that we can draw from the last twenty-five-odd years of academic research and practical experience in this field. And what emerges from that combination of research and experience is that, while there is not One Big Thing that can transform a corrupt system, there are a number of smaller things that often help.

    The items on this list are not all that surprising. Strong laws, enforced by efficient, effective, and nonpartisan prosecutors and courts, while not sufficient, are very important. (A brief digression here: Many countries have created specialized anticorruption agencies with investigative and prosecutorial powers; some countries have even created specialized anticorruption courts or judicial divisions. These measures may sometimes be useful, if the creation of a separate entity helps ensure both institutional independence and sufficient capacity, but the track record of specialized anticorruption bodies is mixed. Operational independence and capacity seem to be what’s most important; formal specialization is, at most, a means to those ends.) Effective law enforcement is essential not just with respect to those laws specifically targeting corrupt acts such as bribery and embezzlement, but also in what we might call corruption-adjacent areas such as money laundering and corporate secrecy. “Follow the money” is especially good advice when it comes to addressing grand corruption, as it is often easier for bad actors to cover up evidence of their underlying corruption — or to escape legal accountability in their home countries — than it is for them to hide or explain away the proceeds of their illicit activity. So, some of the most effective anticorruption measures of the last decade or so target the money, seeking to locate, freeze, seize, and eventually return or otherwise redistribute stolen assets.

    Within the bureaucracy, in addition to the enforcement of criminal laws and ethical rules, audits of government programs turn out to be quite effective in reducing theft and other forms of misappropriation. This might not sound that surprising, but it is nonetheless important. Rigorous empirical research has found that even in what we might think of as challenging environments for fighting bureaucratic corruption — such as Indonesia, Brazil, and Mexico — the knowledge that a local government or department’s accounts will be subject to an audit substantially reduces irregularities that are likely attributable to corruption. More generally, the promotion of a professionalized, semi-autonomous civil service, with personnel decisions insulated from partisan political actors, appears to be consistently associated with lower levels of corruption. (Interestingly, civil service salaries do not seem to have as clear or strong an association with corruption levels, though at the extremes — when civil servants are paid very well or very poorly — there is anecdotal evidence of an effect on integrity.) What we might think of as the classic Weberian vision of bureaucracy — characterized by autonomy (at least from partisan politics), norms of professionalism, regular monitoring and oversight, and merit-based personnel decisions — is associated with less corruption.

    Other measures that can help reduce corruption are those that facilitate the discovery of what would otherwise remain hidden misconduct. In this regard, legal and institutional protections for whistleblowers, both inside and outside of government, are extremely important. At the very least, potential whistleblowers need to have access to reliable reporting channels that can credibly guarantee confidentiality, and whistleblowers must be protected against retaliation. Some countries, including the United States, have gone further, instituting schemes for paying whistleblowers whose tips lead to significant fines or other monetary recoveries. While systematic evidence on the effectiveness of such reward programs is not yet available, the anecdotal evidence thus far is encouraging.

    In addition to protecting and rewarding whistleblowers, governments can help to reduce corruption by promoting transparency more generally. For instance, appropriately designed freedom of information laws, though by themselves insufficient, can facilitate monitoring by outside groups, such as the press and civil society organizations. Speaking of which, while the evidence that democracy reduces corruption is mixed, there does seem to be fairly strong evidence that within democracies, a freer and denser media environment, with more newspapers and radio and TV stations, tends to be associated with lower levels of corruption, and higher probabilities that corrupt incumbent politicians will be punished at the polls. (Perhaps unsurprisingly, there has been a fair bit of optimism in some quarters that modern information technology — especially the internet and social media — will have even greater corruption-reducing effects, but the impact so far seems relatively modest.) Transparency may be especially important with respect to government management of valuable resources, such as oil and mineral wealth.

    These, then, are some of the ingredients for an effective anticorruption strategy: strong laws enforced by effective and impartial prosecutors and courts; a professional Weberian bureaucracy subject to regular oversight but insulated from partisan politicians; and measures that promote transparency and facilitate monitoring and accountability. This list is incomplete, of course, and there is still a lot that we do not understand, but the basic components sketched above provide the appropriate foundation for an effective anticorruption framework.

    III

    If we know, at least in broad terms, a fair amount about the kinds of tools and techniques that are effective in reducing corruption, why does corruption remain such a systemic and pervasive scourge in so much of the world?

    Though part of the problem may be a lack of capacity, the fundamental problem is political rather than technical. On the one hand, those with the greatest ability to change a corrupt system — those in positions of political and economic power — have the weakest incentives to do so. After all, these political and economic elites are, almost by definition, the winners under the prevailing system. On the other hand, those with the strongest incentive to change a corrupt system have the least ability to do so, because — again, almost by definition — these are the people who have been denied access to political and economic power. This is a familiar problem, hardly unique to the fight against corruption. There is an inherent small-c conservative bias built into most political systems, insofar as those who achieve power typically benefit more from the status quo than those who do not. But the problem may be especially acute with respect to endemic corruption, not only because many of the “winners” owe their power and wealth to the corrupt system that they have mastered, but also because they (and their supporters and associates) might be held personally accountable for their misdeeds if that system changes. Convincing the elites in a corrupt system to support genuine and effective anticorruption reform is a bit like trying to convince turkeys to support Thanksgiving.

    We know far less about how to overcome this fundamental political challenge than we do about the tools and techniques that, if fully and faithfully implemented, can substantially mitigate corruption. That said, the political hurdles to anticorruption reform are not insurmountable — after all, some countries have surmounted them, at least in part. A look at those (partial) success stories suggests at least three models for overcoming the inherent political resistance to genuine anticorruption reform.

    One possibility, which we might call the “wise king” approach, is to centralize power in a strong and far-sighted leader who can push through dramatic and comprehensive anticorruption reforms without much resistance. If this leader is both a person of high integrity (or at least someone who wants to have that reputation) and sufficiently secure in his or her power, then the leader may be willing to take drastic action to transform a corrupt system. Such a leader would gain much more — in reputation, authority, and legacy — from fighting corruption than from letting corruption persist. Former Singaporean Prime Minister Lee Kuan Yew, who for all his faults deserves credit for the clean-up of Singapore in the 1950s and 1960s, is the most well-known modern example of this model in action. President Xi Jinping of China, who has made anticorruption a central theme of his presidency, seems to be trying to follow in Lee’s footsteps, though his methods are unacceptable to supporters of a liberal order. Yet this approach is not limited to autocratic countries. Some democratic systems concentrate substantial power in the chief executive, giving that person the ability to mandate sweeping anticorruption reforms that might be difficult or impossible in a system with more dispersed power and greater checks and balances. (Mikheil Saakashvili, who became president of the Republic of Georgia in 2004 after the Rose Revolution, is one of the most well-known examples.) And many a populist politician has campaigned on the pledge that, if the voters just give him power — and allow him to sideline or co-opt other institutions that might obstruct his initiatives — then he will clean up the system from top to bottom.

    The “wise king” model is understandably attractive to those who are frustrated with the ability of corrupt elites to work the system to block anticorruption reforms. But it is an awfully high-risk strategy. Monarchy under a wise king may be among the best systems of government (at least if one cares only about outcomes rather than process), but monarchy under a bad king, or a mad king, is catastrophic. And the track record of powerful chief executives who have pledged to clean up corruption is not great. Many of the populists who emphasized the fight against corruption both as a campaign theme and as a justification for eroding institutional checks on their authority have proven not only ineffective in fighting corruption, but also just as corrupt as their predecessors, or worse. (Jair Bolsonaro in Brazil and Viktor Orbán in Hungary are particularly prominent examples, though there are plenty of others.) And when anticorruption policy is driven solely by the chief executive and his or her inner circle, there is a substantially greater risk that the fight against corruption will be weaponized, disproportionately targeting political opponents of the regime. So while it is possible that the political impediments to meaningful anticorruption reform can be overcome by a sufficiently far-sighted and skillful leader, the risks of reliance on a powerful leader likely outweigh the benefits.

    A second way in which it may be possible to overcome the political headwinds that make it so hard, under ordinary conditions, to sustain effective action against entrenched corruption is to take advantage of moments of crisis that give outsiders, or previously overmatched internal reformers, the opportunity to effect genuine change. We have a number of historical and modern examples of this model in action. The catalyst for Sweden’s significant efforts to promote clean government over the course of the nineteenth century was its military defeat by Russia in 1809, and the resulting fear that Sweden’s very existence was in peril. The widespread view that the country’s disastrous military performance was due in large part to the corruption and dysfunction of the Swedish state (especially the role that nepotism and sale of offices played in placing incompetent people in key positions) inspired Sweden’s liberal reformers, gave them more influence with the monarchy, and produced opportunities for reforms that threatened the interests of the entrenched nobility. A more recent example comes from Indonesia, where the Asian financial crisis of 1997 finally brought down Suharto, the corrupt autocrat who had ruled Indonesia for three decades. Suharto’s fall ushered in the so-called reformasi period, which saw not only the democratization of Indonesia’s government, but also numerous significant anticorruption reforms, including the passage of new laws and the creation of an unusually powerful and independent anticorruption commission, as well as a specialized anticorruption court. When a popular movement manages to topple the government, this may also be a moment when good governance reforms that were previously politically unthinkable become feasible.

    Of course, for reformers desperate to take on the corruption that is strangling their economies and undermining their governments, it is not so helpful to be told to wait for a major disruptive event like a disastrous military defeat or a financial crisis or a popular uprising. Still, there is an important lesson here, one that recalls the adage that fortune favors the prepared. Pressing for anticorruption reforms in a politically inhospitable environment can be frustrating, and developing carefully crafted legal or institutional reform proposals can seem pointless when the powers that be have little appetite for doing anything that could threaten the sources of their wealth and privilege. But one never knows when a moment of crisis — and opportunity — will emerge. What may have seemed like fruitless efforts to design and to promote politically infeasible changes can pay enormous dividends when the window of opportunity suddenly opens.

    The third model for achieving a transition from endemic corruption to manageable corruption might be termed the Long Slow Slog. Rather than a single transformative moment — driven by a wise and powerful leader or a disruptive crisis — a movement away from endemic corruption can result from the slow accumulation of smaller victories and incremental reforms that gradually squeeze pervasive corruption out of the system and alter the norms of politics in a healthier direction. This process typically involves a combination of top-down and bottom-up efforts, with shifting coalitions of activists, journalists, professional elites, and business interests making common cause with reformist politicians who — out of genuine interest, strategic calculation, or some combination — make cleaning up corruption, or improving government more generally, a high priority. The process can be frustratingly uneven and slow, with periods of progress followed by periods of stagnation or backsliding. And it is not really one political struggle, but rather a series of struggles involving various reforms, not all of which are explicitly or primarily about corruption. Yet the cumulative effect of these reforms, if they are sustained and expanded over a sufficiently long period, can be extraordinary.

    There are those who are skeptical that the Long Slow Slog model for fighting systemic corruption can possibly work, given how corrupt systems tend to be self-perpetuating. But in fact this model may be the one that holds the most long-term promise. For what it’s worth, the Long Slow Slog is the model that best captures the transformation that occurred in the United States starting around the end of the Civil War in 1865. As noted above, the United States in the mid-nineteenth century was, in many respects, a developing country, with levels and forms of political corruption not too different from modern-day developing democracies such as India or Brazil. By the start of World War II, corruption in the United States — though still very much a problem — was much less pervasive. And in the post-war decades, though corruption scandals continued to make headlines, matters continued to improve. This was not a quick or easy process — there was certainly no “big bang” moment. True, there were a few periods of particularly intense reform activity, especially the Progressive Era at the dawn of the twentieth century; there were also a handful of especially influential reformist leaders, including Theodore Roosevelt, Woodrow Wilson, and Franklin Roosevelt at the national level and Governors Charles Evans Hughes of New York and Robert La Follette of Wisconsin at the state level. But there was no dramatic moment of radical change comparable to Indonesia’s reformasi period in the late 1990s, nor a single transformative leader comparable to Singapore’s Lee Kuan Yew. Rather, the reform process in the United States was a struggle on many fronts, one that was spread out over at least three generations.

    Take, as one example, the struggle for civil service reform. Civil service reform is of course not only about fighting corruption, but it is an important aspect of the anticorruption agenda. The so-called “spoils system” — a form of what political scientists sometimes call “clientelism” — is arguably corrupt in itself, and indisputably facilitates and encourages other forms of corruption. In the United States, the first serious civil service reform bill was introduced in Congress in December 1865, by Representative Thomas Jenckes of Rhode Island. It went nowhere. President Grant did implement some modest internal reforms to the executive branch during his first term, and he formed a commission to recommend broader changes to the civil service, but the commission’s ambitious recommendations provoked vigorous resistance, and by 1875 the push for federal civil service reform looked dead. But it rebounded, thanks in large part to a revitalized coalition of reformist activists and sympathetic politicians. The reformers’ efforts received an unexpected boost in 1881 from the assassination of President Garfield by a disappointed campaign worker bitter over the fact that he had not received a government job in return for his political services. The political momentum for reform — boosted by the rallying cry that President Garfield had been, in effect, murdered by the spoils system — finally led to the passage of the landmark Pendleton Civil Service Reform Act in January 1883, just over seventeen years after Representative Jenckes had introduced his first bill.

    Yet the Pendleton Act, while hugely significant in its creation of a “merit system” for a portion of the federal civil service, was quite limited in its coverage, applying to only about ten percent of federal employees — and even that limited reform provoked a great deal of hostility from politicians and party operatives. Notwithstanding this resistance, the merit system gradually expanded, albeit in fits and starts, over the next several decades, thanks to a combination of ongoing pressure from reformers and the political calculations of successive administrations. By the time Theodore Roosevelt took office in 1901, the merit system covered roughly 46 percent of the federal civilian workforce; by the time he left office in 1909, that figure had risen to roughly 66 percent. And by the time the United States entered World War II in December 1941 — almost exactly seventy-six years after Representative Jenckes had introduced his first civil service reform bill — the merit system covered approximately 90 percent of federal civilian employees, and almost every state had adopted and implemented a comparable system that covered most state employees. The U.S. civil service merit system was certainly not perfect, but if one compares the shameless and corrupt clientelism of the 1840s and 1850s to the much more professional administration that was established a century later, even the most jaded cynic would be hard-pressed to deny the extent or the significance of the progress that had taken place.

    This is but one example of something we see in many countries and in many historical periods. Measures to fight corruption, and to improve the integrity and performance of government more broadly, may take a long time. But not only is incremental progress often possible, these incremental changes can also add up to a meaningful transition away from a system that runs on endemic corruption to one in which corruption, though always a serious problem, is at least manageable. That observation — that the Long Slow Slog approach to fighting systemic corruption can be effective — may seem obvious, even banal. But this conclusion contradicts both the widespread fatalism about the impossibility of making significant progress against entrenched corruption and the view that making progress against entrenched corruption requires a “big bang.” And though the conclusion that the fight against entrenched corruption may often require a Long Slow Slog might seem disheartening, the fact that such a strategy can ultimately bear fruit should be cause for optimism. Those who are engaged in the battle against systemic corruption — including both those who are fighting for change within their own countries and communities, and also those who are seeking to support change from outside — can take heart from the fact that although the problem may often seem intractable, a series of small victories, which may not seem by themselves to do much to change the fundamentals of a corrupt system, can add up to something bigger and more transformational.

    This is not to say that corruption can ever be fully defeated. The temptations to use one’s entrusted power for private advantage, or to use one’s wealth and influence to sway public decisions improperly, are just too strong. And even as the “core” forms of corruption, such as bribery and embezzlement, become harder and riskier, those who want to use their power to acquire wealth, or to use their wealth to exercise power, will find other ways to do so — a point emphasized by those who criticize the political finance and lobbying systems in the United States and elsewhere as forms of “legal corruption.” Yet it would be a mistake to be fatalistic or cynical. The battle against corruption may not be a battle in which we will ever declare final victory — this cancer is not one that can be cured — but progress against this chronic disease of the body politic is possible, so long as those engaged in the fight do not lose heart.

    The Enigmatical Beauty of Each Beautiful Enigma

    Above the forest of the parakeets,
    A parakeet of parakeets prevails,
    A pip of life amid a mort of tails.

    (The rudiments of tropics are around,
    Aloe of ivory, pear of rusty rind.)
    His lids are white because his eyes are blind.

    He is not paradise of parakeets,
    Of his gold ether, golden alguazil,
    Except because he broods there and is still.

    Panache upon panache, his tails deploy
    Upward and outward, in green-vented forms,
    His tip a drop of water full of storms.

    But though the turbulent tinges undulate
    As his pure intellect applies its laws,
    He moves not on his coppery, keen claws.

    He munches a dry shell while he exerts
    His will, yet never ceases, perfect cock,
    To flare, in the sun-pallor of his rock.

    THE BIRD WITH THE COPPERY, KEEN CLAWS
    WALLACE STEVENS

    When I was a girl in my twenties, I had no idea what to make of Wallace Stevens’ mid-life poem “The Bird with the Coppery, Keen Claws.” I had come to feel indebted to Stevens’ work; I knew there was always a valuable presence inside every poem. But I postponed thinking about “The Bird” because it seemed too surreal, too unrelated to life as I understood it. The birds I knew in verse, from Shakespeare’s lark to Keats’ swallows, were mostly “real” birds, easily metaphorical birds, flying and singing. Stevens’ enigmatic bird, by contrast, was not recognizably drawn from the real thing. The bird is offered as a parakeet, but resembles no real parakeet, if only because he is the “parakeet of parakeets,” a Hebrew form of title for a supreme ruler (“King of Kings, Lord of Lords”) and because he is characterized, Platonically, as “perfect.” I couldn’t make sense of the described qualities of the “bird” because they were wholly inconsistent with those of real birds, with those of any imaginable “perfect” bird, and with each other. Stevens’ bird (a “he,” not an “it”) is especially disturbing, because he possesses the powers of intellect and will, powers thought to distinguish human beings from the “lower animals”: his “pure intellect” applies its complement of “laws” and he consciously “exerts / His will.” And it is only late in the poem that we learn that the bird has intellect and will. What strikes us more immediately is that the bird lacks almost everything we expect in birds: he cannot fly, or see, or mate, or form part of a flock; he remains blind, perched immobile “above the forest” on “his rock.” Put to such puzzlement, I fled, at first, the enigma.

    And there was also the problem of the peculiar stanza-form: three five-beat lines per stanza, rhyming in no form I had ever seen before — an unrhymed line followed by two lines that rhymed (abb). I had seen tercets in Stevens and other poets, but never this kind. In those other tercets, sometimes all three lines would rhyme (aaa), or sometimes they would interweave to form Dante’s terza rima (aba, bcb, etc.). There were reasons behind the rhymes — aaa becomes emblematic in George Herbert’s “Trinity Sunday,” and terza rima was chosen to point to Dante in Shelley’s “The Triumph of Life.” But what could be the reason for this strange abb? There were abba poems, there were aabb poems, but there were no abb poems. It was an emblem of a lack of something, but of what? I was left guessing about content and form alike. And the stanzas were peculiar in another way: each of the six stopped dead at a terminal period. The reader is instructed, by the insistent conclusive period closing each stanza, to take a full breath between stanzas. Stiffly isolated, stopped after each venture, they did not seem to belong together, nor was there any ongoing narrative to connect them. Most stanzaic poems are more fluid than these stanzas representing the bird. Here, one encounters obstruction after obstruction.

    “The Bird with the Coppery, Keen Claws” made me ask why a poet would write a poem that seemed unintelligible even to a habitual reader of poetry. Why, I wondered with some resentment, would a poet offer me a poem that presented such obstacles? Only later did I learn that Stevens had said that “the poem must resist the intelligence almost successfully,” with the “almost” saving the day by its compliment to the persistent reader. There was, then, work to be done by the reader before the linear string of stanzas could be wound up into a perfect sphere. I knew Blake’s promise from Jerusalem:

    I give you the end of a golden string
    Only wind it into a ball
    It will lead you in at Heaven’s gate
    Built in Jerusalem’s wall

    No memorable poem is devoid of art — and the art in the artless is often as difficult to find as the solution of the enigmatic. The “work” of the reader is normally a joyous one; but I was recalcitrant before “The Bird” because I did not yet know how to do the work Stevens expected of me. To be at ease in the poem seemed impossible.

    What was the work the poet was demanding of me? It was to inhabit the poem, to live willingly in its world. To do that, one must believe that every word in a poem is, within the poem, literally true, and the first step must be to collect the literal facts from the words. From the title, we know that there is a bird, and the bird has claws. Facts of absence are as important as facts of presence. It is clear that the bird — because he does not sing in the poem — cannot sing, that the bird — because he does not fly in the poem — cannot fly. Although some catastrophe has massacred all the other parakeets of the forest, the parakeet of parakeets has escaped that collective death. Once I understood that I had to take the bird literally, peculiar as that seemed, I became his ornithologist, recording the bird’s traits, his present and past habitats, his powers, his hindrances, and his actions.

    Stevens’ bird strangely possesses an “intellect” and a “will,” powers traditionally ascribed solely to human beings precisely in order to differentiate them from “the lower animals.” But in spite of his possession of these formidable powers, the bird is strikingly deprived of the actions we most expect in birds: singing and flying. He remains mute, fixed “above the forest” on “his rock.” Horribly, he also lacks a bird’s keen sight; in a diagnostic logic inferring an inner disease from a bodily deformity, the poet declares dispassionately that “His lids are white because his eyes are blind.” “Real” birds, like all organic beings, seek sustenance; they peck for food like the sparrow in Keats’ letters, or sip water like George Herbert’s birds which “drink, and straight lift up their head.” But Stevens’ bird is starving, and for lack of anything else feeds on a nutritionless “dry shell” (making it last as long as possible by “munching” it in slow motion). And for Stevens’ parakeet there is no mate in this dreadful landscape of parakeet carcasses, this “mort of tails.” (The dictionary reveals that the infrequent word “mort” means “a large quantity. . .usually with of,” but it also hints, via “mortal,” at the French mort, “death.”) The bird is, in fact, the only “pip” of life remaining in his “paradise.” (Keats, in a letter, wrote that “I am sorted to a pip,” where “pip” means an ordinary numerical playing card, not a court card.) Although the bird’s southern atmosphere, his “gold ether,” is indeed paradisal, he, though he is its “alguazil” (a minor Latin-American official), cannot be its “golden alguazil,” a fit inhabitant of his gold air.

    The only thing paradisal about him is that “he broods there and is still,” like Milton’s Holy Spirit at the opening of Paradise Lost, who broods “o’er the vast abyss.” Although the bird is the only living presence (with no rivals as well as no mate or progeny), this exotic creature, despite the gold ether, lives in radically imperfect surroundings. Of possible tropics-yet-to-be, there exist for him only a few unpromising “rudiments”: an ivory species of aloe (a succulent that grows in arid soil) and an unappetizing pear with a rind made “rusty” by lethal pear mites. It is doubtful that these “rudiments” can ever again blossom into golden fruit and flowers.

    Yet this flightless, blind, and starving bird is — as one continues to encounter his qualities — surprisingly active. The verbs describing his internal motions render them perhaps even superior to flight. He broods in his golden atmosphere, he applies laws with his “pure intellect,” he spreads his tails, he munches (even if fruitlessly) on his shell, he exerts his will, and he brilliantly and unceasingly flares (“to display oneself conspicuously,” says the dictionary). His flaring outdoes in radiance the sun itself, making the “real” sunlight on his rock seem merely a “sun-pallor.” The bird’s glory lies in his capacity to “deploy,” to fan out, his splendid tail-feathers, which undulate in hue — by command of his psychedelic will — in “turbulent tinges.” His obscure “tip” is an omen of the future: it is now merely a drop of water but is potentially “full of storms.”

    Although the bird is externally so immobile, mute, blind, and starved as to seem almost dead, he has begun to experience the feathery stirring of a new creation, in which a generative green turbulence will expand “upward and outward,” populating the desert of carcasses with resurrected golden companions and a regenerated golden self. The powerful golden parakeet-to-be will be able to command, to sing, to see, to fly, to mate, as he did in the paradisal past before all his earlier parakeet-companions were reduced, by some as yet unspecified agent, to a heap of corpses.

    Stevens writes such a resistant poem in order, for once, to speak in his “native tongue,” to offer not so much an intended communication as a private display. In general, writers want, in at least one work, to express in an unfettered way what it is like to possess a unique mind and speak a unique idiolect (think of Finnegans Wake or Raymond Roussel). An unforgettable account of the difficult gestation of such a “resistant” poem can be found in the Romanian-Jewish poet Paul Celan’s “Etwas,” or “Something,” which narrates the undertaking and completion of a poem and speculates on its future in posterity. The poet, writing in German, painfully senses within himself an invisible and chaotic residue of excruciating feelings, splintered thoughts, piercing memories, and memorable words — shards of a lost whole broken into pieces by a catastrophe. Celan names the past catastrophe “Wahn,” or “madness.” Amid the shattered fragments of his former state, the poet rises to the task of creation, of bringing his past whole to life. And his hand, intent on conveying through words the almost unintelligible contour of his broken internal state, brings into a destined proximity the multiple “crazed” fragments of past wholeness, thereby creating on the flat page a hitherto absent unity, the archetypical perfect geometrical form, a circle, symbol of an indisputable completed whole. A circle cannot have parts; it is indivisible, without beginning or end:

    Aus dem zerscherbten         Out of shattered
    Wahn                         madness
    steh ich auf                 I raise myself
    und seh meiner Hand zu,      and watch my hand
    wie sie den einen            as it draws the one
    einzigen                     single
    Kreis zieht                  circle

    That is the poet’s account of the silent period during which he watches his scribal hand — intent on retrievement — as it goes about its work of selection, consolidation, and abstract shaping, through which it finds its perfection. The hand magisterially draws the fragments together into a new shape — a shape that does not mimetically resemble the lost past; rather, it reflects the arduous work of bringing the past into intelligible form. In that moment of rapt success, of sequestered achievement, there is no one else present, no audience — no thought, even, of audience.

    But there will be posterity, and Celan prophesies what the perfect silent drawn circle will become when, later, by the alchemy of a reader’s thirst, it mutates from its two-dimensional visual form into an unprecedented “something” (“etwas”) miraculously aural. From its flat two dimensions that singing “something” will in posterity lift itself into three dimensions, like a fountain, toward a thirsting mouth that will, by a unique reversal of the original silent writing, speak the dead poet’s own words aloud in the reader’s mouth:

    Es wird etwas sein, später,     Something shall be, later,
    das füllt sich mit dir          that fills itself with you
    und hebt sich                   and lifts itself
    an einen Mund                   to a mouth

    To slake our thirst, the circle on the page reforms itself into the mysterious future fountain transmitting the lines on the page into nourishment for us. A life, shattered into fragments, has been re-constituted by the poet’s drive to make not a reminiscence of the past but a work of art, a pure geometrical abstraction (Celan’s “something”), powerfully satisfying a human reader’s insatiable thirst for aesthetic and emotional accuracy.

    I must confess that I have presented Celan’s stripped poem in narrative order: first the creative assembling under the scribal hand, then (in posterity) a formed refreshment as the reader speaks its sounds. But my narrative order violated Celan’s own chosen order: he puts first the poem’s astonishing anonymous survival into futurity, and then looks back, now inserting the first person “I,” into his own ecstatic work in creating the impregnable unshattered and unshatterable circle:

    Es wird etwas sein, später,     Something shall be, later,
    das füllt sich mit dir          that fills itself with you
    und hebt sich                   and lifts itself
    an einen Mund                   to a mouth
    Aus dem zerscherbten            Out of shattered
    Wahn                            madness
    steh ich auf                    I raise myself
    und seh meiner Hand zu,         and watch my hand
    wie sie den einen               as it draws the one
    einzigen                        single
    Kreis zieht                     circle

    Celan asks his reader, implicitly, Has my poem not been for you a relief of an unapprehended thirst? Wordsworth conveys a comparable relief in a poignant passage from Book IV of “The Prelude”:

    Strength came where weakness was not known to be
    At least not felt; and restoration came
    Like an intruder knocking at the door
    Of unacknowledged weariness.

    Stevens’ native language in “The Bird with the Coppery, Keen Claws” is one of diction archaic and modern, of unsettling images, of strange assertions, resulting in a startling idiom. Like the language of any poetic style, it can be learned by “foreigners” such as ourselves, and, relieving our demanding thirst, can sound out aloud from our lips. The symbol will recreate the shattered. A poet composing a hermetic poem believes — as Celan here intimates — that posterity, helped by time, will make its sounds “come alive” again.

    We can infer, from Stevens’ self-portrait as a bird, the crux animating his creation: the shock of having to regard himself in his forties as the survivor of a “madness” of his own. Appalled, he sees that he had been, as a youth, desperately mistaken about himself, his judgment, his marriage, and his aesthetic ideals. His biography — when we look to it — confirms the psychological story of the youth-become-bird. Stevens had to leave Harvard without a degree because his lawyer-father would pay for only three years of schooling — the equivalent of the law school program he himself had followed. The young poet tried ill-paying apprentice journalism, but wanting to marry, he eventually conceded (like both of his brothers) to his father’s wishes and went to law school. He encountered no ready success in his first jobs as a lawyer, but nonetheless married (after a five-year courtship conducted mostly in letters) a beautiful girl, Elsie Kachel, to whom he had been introduced in his native Reading.

    She had left school at thirteen and, barely educated, was employed to play new pieces on the piano in a music store so that customers would buy the sheet music: Stevens had idealistic dreams of educating her to his own tastes for Emerson and Beethoven. Elsie’s parents had married only shortly before her birth, and although her mother remarried after her first husband’s death, Elsie was never adopted by her stepfather and retained (as her grave shows) her birth surname Kachel. Stevens’ father disapproved of his son’s choice of wife, and unforgivably neither parent attended the wedding. Stevens never again spoke to his father or visited the family home until after his father’s death; at thirty, he was left fundamentally alone with Elsie. The marriage was an unhappy one, and Elsie, according to their daughter Holly, declined into mental illness. She did not permit visitors to the house, not even children to play with her daughter. Stevens did his entertaining at the Hartford Canoe Club; Elsie gardened and cooked at home. She did not visit her husband during his ten-day dying of cancer in a local hospital.

    Several of Stevens’ poems reflect both anger and sadness at the failure of the marriage: “Your yes her no, her no your yes” (“Red Loves Kit”); “She can corrode your world, if never you” (“Good Man, Bad Woman”). The poet suppressed many of those lyrics; they did not appear in his Collected Poems in 1955. But he left, in “Le Monocle de Mon Oncle,” one transparent account of a marriage in which sex has occurred but there has been no meeting of minds or hearts:

    If sex were all, then every trembling hand
    Could make us squeak, like dolls, the wished-for words. . . .
    [Love] comes, it blooms, it bears its fruit and dies. . . .
    The laughing sky will see the two of us
    Washed into rinds by rotting winter rains.

    Humiliating realizations seeped in over the years of the erroneous marriage, eroding Stevens’ youthful belief that his thinking was reliable, his personal judgment trustworthy, his aesthetic confidence well-founded, his religious faith solid, and marital happiness attainable. Such a crushing extinction of youthful selves left the poet immobilized in his marriage (he never complained publicly of Elsie, nor contemplated divorce). Starved of sexual or emotional satisfaction at home, working hard at the law, without the company of fellow-artists, unable to sing or soar, brooding in an arid world in which a lost paradise seemed to preclude any domestic hope, Stevens stopped writing poetry (publishing only a few minor pieces) for six years. Although he resumed writing, he did not publish his first book, Harmonium, until he was forty-four.

    As time went on, Stevens’ bitterness became occasionally ungovernable: even the Muse had become deformed and mad. In “Outside of Wedlock,” when Stevens is sixty-six, the Muse is an unrecognizable Fate:

    The old woman that knocks at the door
    Is not our grandiose destiny.
    It is an old bitch, an old drunk,
    That has been yelling in the dark.

    And in 1944, in “This as Including That,” a poem of self-address, he lives on a rock and is attended by “The priest of nothingness”: “It is true that you live on this rock / And in it. It is wholly you.” When at length he exchanged profitless bitterness for stoic resignation, he could, he discovered, still exert intellect and will in a single remaining channel — a hampered but energetic aesthetic expression. Against the “flaring” of beautiful tumultuous undulations Stevens sets the cruel portrait of himself as a bird living on a rock, isolating in his title — of all possible aspects of the bird — only the harsh successive sounds conveying its grating predatory talons, its “Coppery, Keen Claws.”

    Eventually I became at home in Stevens’ poem, and could ask why it took the strange shape I had found so off-putting. The first half of Stevens’ self-portrait reproduces a bitterness and hopelessness untranscribable in ordinary language, as he had discovered in trying to write it down literally, jeering (in one suppressed poem) at his youthful romantic mistake with the graffito-title “Red Loves Kit.” None of the specific facts of Stevens’ life can be deduced from his poetic lines: his discretion and his taste required a departure from any transcriptive candor. Yet this allegorical leaf from a modern bestiary dryly transfuses into the reader the living state of its author — a blind starving bird in a charnel-house of former selves who nonetheless has not lost his brooding spirit.

    The reader concludes that the massacre of the former forest-parakeets was carried out (since no other agent is mentioned) by their own ruler, the “perfect” parakeet of parakeets, his claws demanding their predatory use. In 1947, almost a quarter-century after “The Bird with the Coppery, Keen Claws,” the sixty-seven-year-old Stevens, in the sequence “Credences of Summer,” bids farewell to the “slaughtered” selves of past infatuations and the raging misleading forces of his springtime. By this self-slaughter of memories and past actions he can even imagine a new fertile Indian summer, created by resuscitated generative flares:

    Now in midsummer come and all fools slaughtered
    And spring’s infuriations over and a long way
    To the first autumnal inhalations, young broods
    Are in the grass, the roses are heavy with a weight
    Of fragrance and the mind lays by its trouble.

    The parental and marital relationships that had, as they occurred, seemed so disastrous, causing that “trouble” in the mind, are now seen to be “false disasters,” as, in the eternal return of the seasonal cycle, new energies promise to resurrect the lost parents and the lost lovers:

    There is nothing more inscribed nor thought nor felt
    And this must comfort the heart’s core against
    Its false disasters — these fathers standing round,
    These mothers touching, speaking, being near,
    These lovers waiting in the soft dry grass.

    Stevens could not always muster the laying aside of trouble. Three years later, in “World Without Peculiarity,” he rediscovers the very troubles he thought he had banished:

    The day is great and strong —
    But his father was strong, that lies now
    In the poverty of dirt.

    Nothing could be more hushed than the way
    The moon moves toward the night.
    But what his mother was returns and cries on his breast.

    The red ripeness of round leaves is thick
    With the spices of red summer,
    But she that he loved turns cold at his light touch.

    A few lines later, he gathers together those troubles: they become “the poverty of dirt, the thing upon his breast, / The hating woman, the meaningless place.” At seventy, the poet, speaking in ordinary language, can permit himself the literal truths that were so impossible to reveal in 1923. The outspoken words — “the hating woman, the meaningless place” — have become natural only because he has abandoned the old disasters as false ones: he sees they are in fact only what always happens in the everyday world, disasters not peculiar to oneself but held in common with all mortals.

    The tumultuous green undulations of the bird never cease, but they ceaselessly modify their angle of motion. They are produced by the inevitable and necessary fluctuation of the mind in time, “that which cannot be fixed” (the subtitle of “Two Versions of the Same Poem”). Stevens’ bird is so unhappy because he is fixed miserably everywhere in his life except in his plumes. He is a modern and depleted and clawed descendant of Marvell’s beautiful bird in “The Garden”:

    Casting the body’s vest aside,
    My soul into the boughs does glide;
    There like a bird it sits and sings,
    Then whets, and combs its silver wings;
    And, till prepared for longer flight,
    Waves in its plumes the various light.

    Upward and outward (one could say) Stevens’ mythological self-bird waves in its vivid plumes the various light of its pallid sun.

    What Stevens had before his eyes at forty-three, as, in his loneliness, he inspected his middle-aged marital and landlocked destiny, was a person immobilized in a life he would never be able to abandon, isolated from his birth-family, unable to see any rewarding emotional future, starved of erotic nourishment and companionship with others, brooding in the spectral company of his past foolish or infuriated selves, looking down at the inert heap of corpses over which he presides, yet still living in the desolate hope of a possible renewed paradise arising from those pitiful rudiments of aloe and pear. That person still possesses intellect and will, knowledge and memory, but is capable, trapped as he is, of interior actions only. Those internal actions awaken sensory, emotional, and intellectual desire: the bird’s imagination is still stormy and turbulent, ever-capable of infinite creative variations in energy and hue, ever-flaring, obedient to his will; in its realm, he exercises his ultimate function, to “flare.”

    How carefully, searching for their symbolic counterpart, Stevens tallied each diagnosed deprivation, finding a convincing equivalent of each! One can only imagine the inventive rapture as each of his personal throng of deprivations found its chimerical name, one after another. The whole abjection of a past and present existence lies on the page transformed into words of sharp-featured literalness and self-lacerating implication.

    Most poems that touch a reader originate in a pang. (As Stevens said, “One reads poetry with one’s nerves.”) The pang is the nucleus generating the poet’s literal bird. The pang is not “hidden.” It is usually — as it is here — in plain view. Inhabit the literal world of this bird who is now you, as you recognize your emotions and write them down: you are immobile and alone, your companions are gone, you lack a mate, you cannot see at all, and you cannot sing. Yet this silent but tumultuous poet is witty in his correspondences: in the world of symbol, metaphor is true; the world is everything that is the case. There is no “hidden meaning”: the poem is its own expression of a state of affairs, embodying actuality as its words come alive in our mouths. The poet longs for a depiction of reality as he has known it, and finds that he must resort to representing himself as an enigmatic figure in his own imagined forest, the supreme ruler of nobody. It is the unconcealed chill of the bird, transmitted by its cruelly sonic claws, that convinces us that this parakeet of parakeets slaughtered the fellow-parakeets of his youth when they proved delusory; yet the authorial distance and the cartoon-assemblage of the bird, in its “antic comedy,” prevent Stevens’ self-portrait from a transcriptive self-pity.

    The enigmatical beauty of each beautiful enigma — says Stevens in “An Ordinary Evening in New Haven” — replaces (like Celan’s perfect circle) openly revelatory autobiography. In the art of creating and displaying symbolic selves, men willingly lose “that power to conceal they had as men”:

                         It is as if
    Men turning into things, as comedy,
    Stood, dressed in antic symbols, to display
    The truth about themselves, having lost, as things,
    That power to conceal they had as men.

    As Stevens assembles the shattered fragments of his youthful delusions, he invents the mimetic geometrical form of an “incomplete” three-line stanza, one that can neither make its three lines rhyme nor find a fourth to make the stanza whole. If seeing the bird’s plight makes us claim the misery — and self-reproach — in Stevens’ words as they become our own, and feel the ever-available cruelty of our own keen claws of intellect and will, and ratify the necessity of our slaughtering the fallacies of youth for authenticity in later life, then we know we have thirsted for the chilly truth welling up as the beautiful enigma unveils its enigmatical beauty. In suppressing his own domestic history, Stevens avoids the misogyny of his complaints in earlier poems: by suppressing Elsie, he assumes sole responsibility for his own condition. When Desdemona is asked who killed her, she says “Nobody: myself.”  The once paradisal, now ugly world of the bird contains no company.

    Illusions of Immunity

    In an already classic episode of Black Mirror, called “Arkangel” and directed by Jodie Foster, a single mother, Marie, has her daughter, Sarah, grafted with a cerebral implant connected to a screen. The system, known as Arkangel, allows Marie to monitor Sarah’s every action, and also to suppress stimuli that might cause her daughter distress. The system is equipped with a filter that can blur any troubling vision or sound in order to make her perfectly “safe.” In this way Sarah grows up absolutely unaware of all the dangers that lurk along her way — starting with the barking of the neighborhood dog, which Arkangel prevents her from hearing.

    When Sarah turns ten, a classmate entices her to watch graphic violence and porn. With the system still operating in her mind, she is unable to experience the attendant mental pain, and decides to draw blood from her finger in order to figure out what the fascinating fluid really is. At this point, realizing the harm that her own extreme worry about her daughter’s vulnerability has caused, Marie disposes of Arkangel. (A psychologist tells her that it is anyway soon to be banned.) But it is too late. The implant cannot be removed from Sarah’s brain. There is no way back. Years later — Sarah is now fifteen — Marie suspects that she has been lied to about a party that her daughter was supposed to attend. Crushed with anxiety, she turns on Arkangel, on the pretext of checking that her daughter is safe — only to witness Sarah’s first sexual experience, peppered with the clichéd vocabulary gathered from the porn movies that she has been free to watch since she was left to her own devices.

    This horrific tale — a parable, really — addresses helicopter parenting as the symptom of a broader and more formidable malaise. The real subject of “Arkangel” is the ideology of safety. To live a longer but narrower life; to see risk and its inherent poetry as a secularized version of sin; to renounce death and danger, hence renouncing life as well — those are its objectives, its prescriptions for happiness. “Everything in the modern world functions as if death did not exist,” Octavio Paz wrote in The Labyrinth of Solitude, but the problem is that “a civilization that denies death ends by denying life.” As a result of our rejection of death, Paz observed, it has slipped in through the interstices of the walls that we have built against its power. “Death enters everything we undertake… The century of health, hygiene, and contraceptives, miracle drugs and synthetic foods, is also the century of the concentration camp and the police state, Hiroshima and the murder story.”

    Our dream of perfect immunity does not strengthen us; it leaves us weaker. On the assumption that, by scientific and technological and political means, we can make endurance moot, there is more and more that we find harder and harder to endure. Safety, remember, is the opposite of resilience. We believe we can lock up life, but death resurfaces in the most brutal manner — in campus shootings or on Capitol Hill. A certain sense of hygiene, a kind of demiurgic pretension to complete control, has made us forget that we are animals, not robots. It is the animal within men that snarls and finally kills. We have chosen plastic over blood; but blood will strike back, and in such a ferocious way that we will wish we had found the balance between perfect safety and utter barbarity for which we were too lazy to search, too enamored of our fantasy of total protection.

    To Roosevelt’s Four Freedoms, we have added a fifth: the freedom from risk. In practice, it is the negation of freedom. Thus we have rid ourselves of privacy, as well as of humor and free-thinking, “wokeness” being in part a political avatar of the same phenomenon. In my view, there is nothing specifically liberal or progressive about it, and, to paraphrase the French title of Michel Houellebecq’s first novel, it is merely a society-wide and culture-wide extension of the domain of safety. While many African-American or gay students may embrace it in order to foreclose freedom of speech on campus, it is certainly white in its provenance — if we are to understand this word as referring to a certain ethos rather than an actual incidence of melanin. That white ethos is, in a word, Puritanism, or more generally the long ferocious tradition of forbidding knowledge and protecting people from sin and contamination, from any idea or experience that is dogmatically defined as an evil or a peril. In this regard, frantic suburban parents are no different than outraged BIPOC undergraduates. African-American or Native American activists show just how well they are now able to speak the language of any Long Island Sunday-lawn-mowing family man. (Cultural appropriation!) As in Ishmael Reed’s Mumbo Jumbo, the vital — swinging and dissonant — force of the minority has surreptitiously given way to the majority’s (or the noisier minority’s) stiffened morality and exemplary nature. Puritan and white is the obsession with hygiene and toxicity (as in “toxic masculinity”), with stainlessness and spotlessness replacing salvation from sin. Puritan and white is the obsession with safety, with unbreachable havens, with “safe spaces” on campuses that are the spiritual successors of the believers’ equally ardent craving for “being safe and secure in Christ.”

    But being safe is not enough. Even more urgently, we need to feel safe. America, after all, is the world capital of emotionalism. This desire is so widespread and well accepted that “feeling safe” has become a magical expression in everyday America. Your upstairs neighbor wakes you at four every morning? The landlord will not lift a finger to remedy the inconvenience unless you notify him that you feel unsafe, which will prompt him to act swiftly on your behalf. An Amazon package that you were expecting has not yet arrived? Write them that you “don’t feel safe,” and they will make every possible effort to assist you. People feel safe or unsafe being invited here or strolling there, entering a place that is or is not nut-free, discussing this subject or that. Our infamous “trigger warnings” stem from the idea that students may not only be offended, but actually imperiled, or at least feel so.

    Some of these feelings, while not exactly rigorous calculations of risks and probabilities, have a basis in reality. Fear is not always an illusion. There are indeed threats which we need to recognize and to measure objectively. When we speak of the security provided to its citizens — or denied to them — by the state, we mean such things as objective “threat assessment” and the empirically grounded procedures developed to counter the elementary exposure of people to harm. But “insecurity” is also, and in our language more often, a reference to an inner fragility, a subjective inability to master fear, an incapacity for self-defense. Those who beg for “safe spaces” crave not the objective stability that one might legitimately expect from the state, but an inner insulation, a demarcation from challenge and disturbance, a locked-in and publicly guaranteed peace of mind. And because they want to be free from fear and contradiction, they mean to ensure it by a rigid regulation of speech. One of the instruments of safety is silence.

    Conservatives limit their scorn for the contemporary crusade for safety to the political instances that suit their own purposes — “safe spaces” or campaigns for “cancellation” — while ignoring their own obsession with safety. Liberals, to be sure, are not consistent when they ultimately call for a new social approach to sex and love, which includes the suggestion of such authoritarian measures as getting rid of the statute of limitations on sexual harassment or assault; some of them advocate a quasi-totalitarian vision of safety while criticizing the Patriot Act as a fascist infringement on their freedom. In this sphere, ironies and hypocrisies abound. Yet conservatives are no more consistent, since many of them endorse every surveillance measure across the board, refusing to see that these are a significant part of the general expansion of the domain of safety that they otherwise deplore.

    Those “petites règles compliquées, minutieuses, et uniformes” (those complicated, minute, and uniform little rules) which, Tocqueville holds, are instrumental to the social despotism that may deform a democratic order, now come down mainly to our obsession with safety. Modern states issue laws and regulations focusing on the most minute details of our private and public lives, from seat belts to smoking. We are living in a renaissance of technocratic paternalism, abetted by new developments in economics and social science. It is not entirely malign, of course. The twentieth century was a golden age of insecurity, with its world wars and wars of decolonization, its mass terrorism and migrations, its cultural uprootings and dislocations, not to mention the breakdown of the family unit, the spread of drugs and AIDS, and a general loss of stability. The fetishization of safety in the twenty-first century seems almost like a response to the decades of fear and dread that preceded it. This is a society in permanent need of reassurance. Some of our recent Covid-19 policies had a performative quality that seemed to address a need for mental security rather than the virus itself. Even the anti-vaccine movement relies on a version of the precautionary principle — a baseless and paranoid version, a truly perverse instance of “safetyism.”

    Georges Bernanos saw the danger in the modern refusal of danger. In La France contre les robots, which appeared in 1947, he addressed the rise of generalized surveillance, observing that, although it had been initially promoted by democratic states, it resulted in the crushing of civil freedom by the fascists, the Nazis, and the Bolsheviks — and the destruction of millions of lives. To a nostalgic monarchist such as Bernanos, the fact that totalitarianism came to power in the West as the dark face of democracy was no paradox: it promised the protection that “the people” desperately wanted. Bernanos made a striking observation about something we all take for granted — passports and fingerprints. Twenty years earlier, he wrote, no respectable Frenchman would have been fingerprinted, such a formality being reserved for labor camp prisoners. One could even travel with a simple business card, and someone else’s name could well be printed on it. The ideal of safety could never justify the sacrifice of a person’s right not to be spied on.

    The quasi-anarchism that Bernanos admired was a matter of dignity and trust. Today his grievance sounds utterly archaic. Every citizen is, for his or her own safety, treated as if he or she were a soldier or a convict. It is awful — and in the case of the tech companies, it is nothing more than corporate avarice — but most of us accept it. If, on the one hand, we are more of a global village, with borders seemingly obsolete (in Europe, for example), those borders are on the other hand more hermetic than ever — as if a robotic kind of control were actually replacing traditional humane borders, just as an infantilizing and intrusive pedagogy has replaced more rigid yet also more empowering forms of education. Our panic about immigration is not unrelated to this terror of a loss of perfect control.

    Our society has chosen surveillance over overt violence. This has deeper ramifications than the Patriot Act or biometrics. Your fingerprints are not only checked at the airport, where a certain degree of anxiety may be justified. As they are now able to monitor and to control — in a painless and almost invisible manner — their children’s comings and goings, modern parents often prove to be exceptionally intrusive. The surveillance state is made up of surveillance families. Parents who would gladly accept seeing their nine-year-old children change sex do not hesitate to regulate strictly, through sweet and safe suggestions, the way they interact with other children and learn how to live. Not in centuries have parents — and educators — so tightly swaddled their little ones’ minds and wills: smooth-voiced policing is more manipulative and effective than the rod. Go to any playground. Back in the day — as many films, photographs, and books attest — parents would encourage their kids to go off and play with their peers, to brawl, scratch themselves, kiss one another, graciously lend toys and clothes to their little friends. Children were encouraged to experience life at their own immature and delightful level. Nowadays the perfect mom never loses track of her child, and harasses him or her with pointless words of caution or congratulation: “Good job, Cassidy!” Because, you know, Cassidy needs to feel empowered. Empowerment used to prepare one for risk. Now it prepares one for the avoidance of risk.

    From their earliest age, children are deprived of time to daydream as well as of the opportunity to graze their elbow. Curiosity is regarded as just an invitation to trouble. “It’s not polite to stare at people!” “Don’t touch this: it’s full of germs!” “Don’t go there, you’ll hurt yourself!” And once the children become teenagers, their parents will gently try to geo-locate them: after all, didn’t they hide a camera in their nursery? They will be safe and healthy — healthy kids, then “healthy sexual citizens.” “Healthy,” meaning that you do not know what disease is, and “safe,” meaning that you have never experienced freedom. From the vantage point of any philosopher, writer, or artist who lived during the past fifty centuries, the children will grow into half-dead human beings. They will be as dull as a plastic lid.

    When you teach these grown-up children at university, it is no mean feat to make them realize that risk is not always the same thing as danger, nor is being challenged in one’s instincts or beliefs the same as being threatened. After having been constantly praised for their prowess in the playground, they will have a hard time accepting that their research is unoriginal or flawed, or that they have shortcomings in art history, literature, or even pop culture — which is often more their parents’ or instructors’ fault than their own. Judith Shulevitz memorably reported in 2015 that students at Brown claimed they had been “triggered” by the presence on campus of a right-wing anarchist pundit named Wendy McElroy. In order to recover from their “trauma,” they set up a “safe space” equipped with cookies, coloring books, pillows, and videos of frolicking dogs. In other words, they recreated a nursery.

    What one has to understand is that the problem with “safe spaces” is not so much an ideological as an educational one. An entire generation — there are exceptions, of course — seems to have grown up in a paradise where God’s universal mercy has the wondrous ability to make everyone the elect, and their every whim just so many strokes of genius. Parents of all political persuasions have sacrificed everything on the altar of safety, raising their children as Marie raises Sarah, sheltered from barking dogs. In an ethos in which failures and painful experiences are deemed harmful in themselves and lacking in any benefit or reward, high intellectual standards are fatally perceived as cruelty. Interestingly enough, elite parents are probably those least likely to see that our mind is like our immune system, or like any political entity — that it must be challenged to keep from ossifying. As a result, universities are teeming with demi-habiles (the half-clever) who discover on arriving that a few seductive ideas could spare them the challenges of a more thorough education. Affirm them, protect them, promote them, congratulate them.

    But is there any more encompassing definition of privilege than this walled world of phony positivity? When one thinks of the many millions of children growing up in places that are not “unsafe” but unsafe, the present-day cult of protectiveness looks very unattractive indeed.

    This puritanical safety is sought everywhere now, although it remains, by definition, out of reach. Once deemed wrong, sex is now deemed unsafe. From the rise of AIDS to the MeToo movement, we have been incessantly told that there is no such thing as safe sex — safer sex, maybe. In fact, the idea that the sexual revolution drew to a close with AIDS can only be understood against the backdrop of previous attempts at demonizing the sexual revolution. A few years earlier the curse was the spread of herpes, described as “the dark underside” of the fleshpots of the 1960s and 1970s. For many commentators, AIDS was a new opportunity to stoke a sex panic and to impose their prudish agenda. Venereal disease — or STD, as we less poetically call it — had a moral meaning. Sexual guilt was part of the previous generation’s frenzy. There was a sense that it had all gone too far, that the end would therefore come, that people might even get killed, like Sharon Tate, for the havoc that their hedonism had wrought. Disease, rape, and murder were now numbered among the results of freedom. The AIDS-related obsession with safe sex made a kind of sense in the general atmosphere of extreme anxiety.

    Obviously, it was not only Patrick Buchanan who believed that “the sexual revolution has begun to devour its children.” More recently, this was made unexpectedly clear by the wide acceptance, in enlightened circles, of the reactionary interpretation of the sexual revolution as being “sexual predation,” and, as such, the key to understanding Harvey Weinstein’s and other crimes. As if no Sardanapalus had existed in previous centuries, freedom has been described as the true culprit. Desire is on trial. Lust is in the dock. As a response to the naïve Aquarian belief that eros could redeem mankind, there emerged the equally naïve but more depressing dogma that caution would save us. Abandon, wantonness, spontaneity, brazenness, audacity: in sexual relations these are now the slippery slopes to violation and injury. Our ongoing prudish moment is the Augustinian response to the sola fide cheerfulness of the 1960s. The ubiquitous insecurity of our era, the insistence upon omnipresent sexual jeopardy, is a backlash against that candid optimism. In both cases, the risks inherent to life are either ignored or bluntly delegitimated, with a new kind of temperance being now more or less officially advocated. Unlike in the Victorian age, this abstinence is espoused on behalf of safety rather than virtue.

    None of these observations are meant to deny that there are risks that are real, and not worth taking; and that many men have treated many women, or many other men, odiously; and that rape is hateful and rapists are monsters; and that power has hitherto not been distributed to the advantage, or the parity, of women; and that offices should be more characterized by respect. But these alarms should not be confused with other concerns, which may seem similar but are emphatically distinct, in kind and in degree. We have carried our proper horror of abuse into regions of human ambiguity and excitement where it does not belong. It is not only rape, it is also lust and its sort of love, that scares us. What has always made sex an occasion for awe is precisely its disruptive and destabilizing power, its inalienable riskiness. (Observers such as Norman Mailer in America and Romain Gary in France noted that this sense of awe was missing in the sexual revolution of the 1960s.) Sex is ringed with the dangers — of pregnancy, of death, or, more simply, of frenzy and fury and the loss of self. Spinoza distrusted titillatio because it shattered the soul-body equilibrium. The moral strictness of the Victorian age, the public rigidity of the 1950s (the private reality was more complicated), and our own cult of safety only point to a deep fear of wildness and confusion, of the instinctual life, of the explosive confrontation with finitude — of ourselves.

    Moreover, we love to be scared and we love to be repulsed. “J’aime avoir peur,” Belle suggestively tells her monstrous lover at the end of Jean Cocteau’s adaptation of Beauty and the Beast. Fear and disgust are part of the erotic experience, of life itself. When asked by the Beast if she does not find it repulsive to have him drink from her cupped hands, Belle replies: “Non, cela me plaît.” It delights me! Those transgressive words bring to my mind another aesthetic memory. In Octave Mirbeau’s Diary of a Chambermaid, Célestine, the eponymous heroine, sleeps with a young bourgeois, a certain Monsieur Georges, who is dying from tuberculosis. The passage in which she begs him to spit his bloody saliva into her mouth is probably one of the most ghastly in French literature — and also one of the most sublime. I have taught this text to American and other non-French students. Many expressed an understandable revulsion, but I have been struck by the fact that others, especially younger ones, seemed to enjoy the gorgeous peril depicted, the disturbing representation of a fanatical desire. As if a need for extremity were resurfacing in a tepid world.

    In the 1980s, only Republicans and priggish WASP ladies advocated just saying no. A few years ago, I brought to the attention of my students a survey on Generation Z’s sexual mores. It highlighted the fact that young adults — or teens from the age of fifteen on — were less sexually engaged than previous generations, did not take their driving tests, and were unable to understand facial expressions less simplistic than emojis. Many in the classroom laughed at the ingenuousness depicted in the article, but I remember that some self-described progressives did not see any problem with being abstinent until a certain age. Isn’t it safer? they mused. In my literature class, a handful described the lascivious poetry I assigned to them as “objectifying” and therefore “wrong.”

    Many students actually enjoy going against the grain of the new orthodoxy, but what bothers me is that it is graduate students in the humanities who tend to think in the conformist way, and that most of those who share such puritanical qualms are not buttoned-down Episcopalian missionaries but radicals and progressives. The very people who will write a thesis on the question of transgenderism in French Polynesia are disturbed by sex when it appears in a novel. They feel threatened by it, they distrust the flesh, they want representations of sexuality that are sanitized and consistent with their apprehensions and their beliefs. Foucault even taught them that sex did not exist after all (though he certainly lived as if it did). As a consequence, they treat it as they do literature or art — as a social construct, a personal policy.

    The war on passion and its indeterminate consequences extends also elsewhere. In the matter of transgenderism, it is interesting to see that what once undermined the safe borders of gender and identity — Myra Breckinridge-like — through sometimes aggressive and Dionysian rhetoric and art has now become just another identity, as stable and cliché-ridden as all identities. In utilizing the acronym LGBTQ, we not only defend the rights of homosexuals and transgender people, we also disenchant them — inhibiting, in my view, a more genuine fluidity. Androgyny has been the most creative force in art and literature because it subverts the rigid definitions that we have inherited and damages the fences that morals and politics try to put around the sexual abyss. The pseudo-gender-fluidity currently promoted on campuses is a new type of fence. For the beauty of masculinity and femininity — which includes flamboyant androgynous men who are actually like women, and garçonnes as well — is indeed a threat to safety. There is something at once divine and demonic in beauty. Our puritans do not want our campuses to look like Plato’s Academy, or our art history classes to be redolent of Eleusinian mysteries. Things have to be strict and neat. They do not want the Stonewall rebellion to happen again between the gates of our universities. The promotion of shapeless — and really asexual rather than transgender — body positivity falls within the scope of those institutions’ increasingly intrusive control of their students’ sex life. It is meant to circumvent the dangers of erotic experience.

    Consent once pertained to the realm of the gift. The medieval belle dame who rewarded her devoted knight with her don de merci would thereby dominate him while also being somehow alienated by his power. It had to do with initiation and ritual, with magic. But the consent that latter-day feminists brandish at every turn resembles a notarial certificate more than this sacred gift of flesh. To be sure, most people still enjoy sex as they see fit, but many of them have been persuaded into believing that every step of a sexual encounter needs to be clearly, mutually, prudently established. The Antioch College Sexual Offense Prevention Policy seemed awkward many years ago, when it was released. These days you are notified on entering college or graduate school that you cannot have a sexual interaction with anyone unless he or she has unequivocally consented. Ambiguity has been banished, as if it were an alibi for misogyny.

    Safe sex, a postmodern Carrie Nation once expounded to me and hundreds of other graduate students, means that you have to ask permission before every step, from touching hands to French-kissing and so on — and I recall the words “enthusiastic yes” flitting through the PowerPoint on the podium. Whereas “no” certainly means no, “yes” is always suspected of not meaning yes. According to the current code, staring at someone in the library (I no longer recall for how many seconds) is inappropriate and threatening. Speaking flirtatiously to someone without having been encouraged to do so might also be an unsafe practice. You would think that Title IX enforcers have shares in dating apps. And this legalistic madness also demands complete prior transparency about oneself: in this climate the bill recently backed by New York Democrats, which would amend New York State’s penal code to define consent as “freely given knowledgeable and informed agreement,” makes much sense. Lying to a prospective sex partner about one’s wealth or one’s intention or even one’s name would be considered an infringement upon consent. Mischief and play and deception will no longer be charitably interpreted.

    One should be careful about strangers, to be sure — but strangers are also exciting, and every social and sexual relationship, beautiful or ugly, can only begin with a first meeting between strangers. There is certainly never a justification for coercion. Yet the unpleasant fact remains that consent, valuable and even sacred as it is, is seldom easy to delineate. Attraction happens so fast. We must be wary of the expansion of the notion of consent, as of the expansion of the notion of safety. Efforts to construe it as more objective and palpable than it may reasonably be are potentially totalitarian, and certainly puritan, and anyway they will not preempt mistakes and misunderstandings. Such an unremitting quest for individual perfection, and for the perfect management of circumstances, is as futile in a secular context as it was in a religious one.

    Germaphobia, the pathological fear of contamination, is a bipartisan disorder. Trump’s wall against Mexican immigration was an expression of right-wing “safetyism,” and the former President’s description of himself as a germaphobe was consistent with this overemphasis on safety and purity — the alt-right’s version of identity paranoia. If we remember that a great many parts of the American South are originally Mexican (as the names of its cities, mountains, and rivers attest), then Trump’s stigmatization of Latino migrants must be read as an overall distortion, typically puritan, of America’s roots — of the Latin substratum of a country desperately described as white. His contention that Mexican migrants are “rapists” only makes it more obvious.

    Danger, contamination, and sex are interconnected in this worldview of fear and contempt, and they all relate to a certain part of the society that it wishes to hide or to erase. In Trumpian rhetoric, the Mexican plays the same role as the blacks and the Native Americans did one or two centuries ago (and in some quarters still do). He is the Schatten, the Jungian shadow of the “civilized” white, and he is frightening for that very reason. (Jung remarked in Symbols of Transformation that white Americans often imagine the lower part of their own personalities in the guise of a “Negro” or an “Indian”.) It is never safe to stare into one’s own inner abyss. Southern whites championed family values and were obsessed with black rape, but they were merely projecting their own guilt onto the former slaves. Accusations against black and foreign sex maniacs, dropouts, or drug addicts only mirrored the immense and all-too real chaos that they, the whites, had themselves sparked. Had not slavery massively disrupted black families, while also allowing rape and consequently miscegenation?

    To see the excesses of MeToo as, among other things, a revival of the puritan mentality amounts to realizing that the roots of this movement also lie in certain elements of the white American tradition — to recognize the MeToo aspect of those “honest” Southern ladies of previous centuries, shivering with fear (and probably fascination) on seeing a black male, and of Kenneth Starr and his lawyerly prurience, and of those who call President Biden “Creepy Joe” because he is more touchy-feely than their new propriety can tolerate. Trump’s rallies were sometimes described as “safe spaces for Trumpists,” and this was more than just an irony or an analogy: the Trump cult, too, seeks such spaces. For it, too, the world outside is a scary place, teeming with secret murders, pedophilia, conspiracies, and organized evil. State assistance, even in the form of public health measures against a pandemic, amounts for them to a new kind of idol-worship — the Mark of the Beast. Demons are everywhere, preying on our souls and our bodily fluids. A similar obsession with safety, then, with being saved from confusion and pollution and sin, pervades American mentalities across the political spectrum.

    Ideological utterances stand out against the background of deeper and apolitical sensibilities. There are those who see nature as fallen, though originally good, and in seeking safety they are simply looking for a means to overcome their fallen state. They want the assurance that, being among the elect, the correct, the children of light, they will ultimately be redeemed from the realm of darkness. For others, nature is a benevolent mother, the source of goodness and beauty, and evil springs from social causes, from particular social causes, against which goodness and purity must be secured. Everybody seems to believe that they can have the kernel without the husk, that the shell of evil and ugliness can be somehow surgically removed, leaving good wholly unattached to evil, as if the attachment of the kernel to the husk, of light to darkness, is without any intrinsic merit, as if reality can be distilled and clarified, no doubt by radical means, to an unalloyed marvelous essence.

    Yet a different view of the compound nature of human existence is possible. The biblical and kabbalistic mind, for example, envisions things quite differently. The God of Genesis creates light out of darkness, but He never erases the primeval forces of tohu and bohu, “welter and waste,” in Robert Alter’s fine translation. Life, whatever else it is, is also horror and chaos. Tohu and bohu still live within us — “in the bowels of these elements,” as Marlowe’s Mephistopheles suggests. That is why to experience life fully amounts to being often unsafe, or, rather, to renouncing any definitive sense of security. It is, in John the Savage’s words, “the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to be lousy; the right to live in constant apprehension of what may happen tomorrow; the right to catch typhoid; the right to be tortured by unspeakable pains of every kind.” A dangerous struggle, a mortal right, for sure, but a vital one, inseparable from poetry, freedom, and even virtue.

    In ancient Judaism, the Israelites had to bring, once a year on Yom Kippur, a sacrifice to Azazel, a desert demon or deity. They tried to placate this evil spirit with the ritual bribe known as the scapegoat. That is, redemption was reached through sin itself, rather than by its negation. The Brazen Serpent likewise cured Israelites who had been bitten by snakes. The waters of the red heifer purified the impure, but also rendered the pure impure. Mixture and the interpenetration of opposites was a kind of ontological structure. Impurity could never be eliminated once and for all. This is a way of thinking in which life extends into death and death into life — in which good and evil are like twins, holding each other’s heel in their mother’s womb.

    This “primitive” mentality entails some measure of dirt-avoidance, but it was the modern age, paradoxically, that saw a broadening of the idea of dirt, even though it relies on more objective and sophisticated means and standards: hygiene and a knowledge of pathogenic organisms. It was moderns who created the illusion of perfect cleanliness. The ancient rituals recognized the potency of disorder even as they endeavored to create order. Danger lay in marginal states, and therefore it was both “respected” and combatted, and certainly not ignored. The ancient Israelites knew of Azazel, and they knew how to deal with him. They strived to avoid him, but they did not consider destroying him. Admittedly, their world was in a high sense safe and orderly, since it was supernaturally protected against impurity and uncleanliness. It was not, however, as secure as ours. In a way, they needed the element of insecurity which the neighboring wilderness, Azazel’s realm, embodied.

    In The Birth of Doubt, his recent book about uncertainty in early rabbinical literature, Moshe Halbertal demonstrates that the ancient rabbis responded to the sectarian dread of impurity by granting more room for uncertainty. The Dead Sea sect, in the cloister of the desert, had utterly refused to engage with uncertainty regarding matters of purity — in the case of carcasses, for instance, which in the canonical priestly doctrine put people at risk of becoming impure and contaminating others. The rabbis refused to entertain such a fanatical — and utterly unrealistic — standard of clarity, and instead they expanded the realm of uncertainty, the legitimacy of doubt, in matters of law. This suggests that “safe spaces,” conceived as secluded areas where no one may be endangered by the outside and the unfamiliar, are in some measure akin to structures such as the Qumran communities, or modern-day rigorist yeshivas, or Catholic convents — more or less isolated communities geared toward avoiding all stumbling blocks and precluding all contact with polluting influences. For ideas, like carcasses, can defile us.

    Such rites as the Brazen Serpent worship were performed as late as the eighteenth century in Western Europe — and until today among those Hopi tribes whose beliefs and customs Aby Warburg famously brought to light. Many French and European cities — Bordeaux, Metz, Provins, Tarascon, Barcelona — organized processions to honor a saint who had supposedly vanquished a throng of snakes or a dragon in the remote past. The grateful citizens would parade a dragon made of wood or cloth, as if to show that the former enemy had become a friend and a tutelary power.

    In some cases, the procession was the reenactment of the dragon’s killing at the hands of a revered saint, or rather the ceremony through which evil was transformed into good. The apotropaic symbol of the dragon became ubiquitous in the twelfth century as a protector of cities. Likewise, some prestigious families put themselves under the patronage of a fantastic creature — in the era of the Crusades, for example, a wyvern (a mythical winged dragon) in the case of the renowned Lusignan clan, who claimed that they were descendants of Mélusine, half-woman, half-serpent, a benevolent figure of French folklore. After she left their castle, the venerable dragon-fairy would still watch over them, as an evocative illumination from the Très Riches Heures du Duc de Berry makes clear. In this way the dragon, although a symbol of Satan and therefore of ultimate discomfort and danger, could become, in premodern mentalities, an image of protection. Some aspects of the modern age still operate according to this logic. There is something apotropaic about vaccination, for instance, since it exposes us to the very threats it is meant to fight — the germs themselves — rather than merely abolishing them. (I am not suggesting that modern medicine is no different from medieval magic.)

    As the ethos of caution and safety gains ground, one has to visit places where it has not yet triumphed to understand how strange it is, how alienated from our mortal realities, how afraid of flesh and death it is. Here is a familiar example. There is always a moment when, for all their open-mindedness, Western tourists in, say, Mexico City, or Mumbai, have to choose anxiously between their petty germaphobia and a more generous hedonistic urge. When they are tempted by the fragrant and colorful appeal of the street food, it goes beyond a gastronomic encounter: the decision to taste or not to taste involves their very tolerance for hazard. Now consider some old-fashioned European place such as the Bar San Calisto in Rome’s Trastevere, where one may drink caffè corretto and smoke the sigaretto while peacefully listening to the first chirping and mewing of the pigeons and seagulls, or to the old customers cursing each other in Roman dialect: do not urban pleasures, and even the sense of memory, go hand in hand with such “unsafe” practices? Does not Bohumil Hrabal’s soul seem to rise to the ceiling of Zlateho Tygra, the great café in Prague, entangled in heavy smoke rings and the crisp and even rancid scent of beer? In most cafes and restaurants in Paris and New York, by contrast, our intolerance for anything “unsafe” is starkly visible in the absence of smoke and the decreasing presence of alcohol, in the belles salades, even in the spacing of the tables. And their nice clean walls, on which all traces of previous meals and earlier merriment have been scrubbed away, make them look like places without a past. These places attest to our longing for the impeccable, which amounts to an overall rebuke of history. Has anything human ever been impeccable, which of course means sinless? And so we pursue a safer but more desiccated life, a life of suspicion and surveillance, a life organized more around costs than around benefits, in a present secured in the comfort of its increasingly polished horizon, relentlessly riveted by itself.

    Sahara Dust

    The air is sharp with dust: it’s hard to breathe.
    The sky’s scraped white with it, the light turns gold
    And ominous. I cough and cough and cough.
    It blows each year from Africa, a seethe
    That Pollocks the parked cars with ochre, rust,
    The powdered pigments for the nimbus on
    The icon of a not-quite-sainted saint,
    An “osios” perhaps: holy enough.

     

    The air is edged with dust. The black-bird sings
    The riffed cadenzas of all other springs.
    One magpie clatters, black and white as rain,
    Clearing the grey matter of the brain.
    The trees pull thin air out of light. It’s quaint,

     

    We used to think, Sahara dust. On cue,
    Each year, but more and more. Oxymoron:
    The desert empties but it doesn’t shrink.
    Worn carpet shaken out against the sky!
    If sometimes there are clouds half-lined in zinc,
    And tinted with a putto-bottom pink,
    So are the sickly seasons furbelowed,
    Too-late romantic as Rachmaninoff.

     

    Our children know this evening of dust,
    The chalky sky, my iterative cough.
    It’s like the future to them, tired hit song
    They never have not known, so hum along,
    But still I can’t get used to it. Can you?
    Is being newly old what makes it new?—
    The past so fresh—like wet paint—no—like dew.

    Time, Signature

    When I was small, my grandmother, who taught piano, told
    me someday I would learn to “read” music; I was astonished!
    What ogres, what emperors, what gingerbread, what coffins
    of glass? Perched on five telephone wires, birds noted their
    gibberish, like an unspooled Phaistos disk. When grown-ups
    crescendo-ed overhead, when discords tensed for the felted
    hammer, I went pianissimo, and hid, dampened, beneath the
    black-lacquered forte, my fort. Silence was a box of rest, like
    stale licorice. If only I had the clef to the cryptic grammar!
    Every girl, bored, dreams fairytales. Arching its monobrow,
    the supercilious fermata suspended disbelief. When, later,
    I learned that the only thing written in music was music,
    imagine my disappointment, my relief.

    The Caryatid

    Even though she has set down
    The unwieldy entablature
    And walked back into her life,

     

    Her posture,
    Her disheveled intricate coiffure,
    Betray preoccupation.

     

    Preparing dinner, she slices
    The fluted celery stalk into drums,
    The mushrooms into ionic capitals.

     

    She is too old to be young anymore,
    The moonlight petrifies.
    She has left beauty behind, a ruined porch.

     

    It leaves her lightheaded,
    Being freed from history, that long mis-
    Understanding.

    What Brings Bad Luck

    Hat on the bed,
    A peacock feather
    Dragged indoors
    From the blue-eyed weather,

     

    Reflection smashed,
    A baker’s dozen,
    Chain letter from
    An estranged cousin,

     

    The bumbershoot
    Bloomed in the hall,
    The ladder’s lintel,
    The owl’s call,

     

    The horseshoe’s frown,
    The salt knocked over
    & not tossed across
    The left shoulder,

     

    A lone magpie,
    A pre-glimpsed bride,
    Friday next,
    The bed’s wrong side,

     

    The sable cat
    Bisecting your way,
    A crack underfoot,
    The Scottish play,

     

    A gleaming penny
    Not picked up,
    New shoes set
    On a tabletop,

     

    Scissors left open
    From sheer habit:
    A month begun
    Without “rabbit, rabbit,”

     

    The grave (whose?) trodden,
    The wish (hush!) spoken,
    A run of good luck
    Still unbroken.

    Jump Rope Song

    (with a nod to X.J. Kennedy)

     

    The rope that makes of air a sphere,
    Or else a grin from ear to ear,
    Is something earth-bound feet must clear

     

    When the parabola swings round.
    Right before the snapping sound,
    You have to float above the ground.

     

    The trick is tempo, neither slow
    Nor fast, and rhyming as you go,
    And then forgetting what you know

     

    (The end of every glad endeavor).
    You count, until the numbers sever,
    Since nobody can skip forever.

    Our Literature

    On the gloomy days, when the American catastrophes are too much to bear, I turn to my bookcases for solace and even something like friendship, and the shelves throw a welcoming arm over me. The bookcases are organized on the principle of no principle, and nowhere among them is there a section dedicated strictly to the traumas and treasures of life in our unhappy country. Still, scattered here and about are a number of squat volumes in sumptuous black dust jackets, all of the same height, devoted to flights of the American imagination — a sufficient number of those books, such that, if I ever gathered them together, a proper bookcase devoted to them alone would stand before me. These are volumes in the publishing series called the Library of America, which brings out the classics of American literature, Emerson and Mark Twain and little-known names like William Bartram, the ornithologist, and onward to Saul Bellow and beyond, perhaps too much beyond, in a uniform edition.

    One of those volumes plops into my hand. The cover is maybe a little too sumptuous. A glossy ribbon of red, white, and blue traverses the middle, as if it were a military sash on a colonel’s dress uniform. The publisher’s logo at the bottom of the spine is the Stars and Stripes, configured to suggest an open book. When I open the actual book, still more stars and more stripes blink upwards from the endpapers and the flyleaves. It is the Fourth of July. A tiny bugle says hello every time a page turns. Patriotism at the Library of America does not suffer from timidity. But I do like the feel of those books on my fingers, their heft, the cloth binding, the texture of the paper, and the buttery black dust jackets.

    It was Edmund Wilson who proposed the idea for those books, and his way of doing so figures in the lore of the series. He came up with the idea during the Second World War, which was perhaps not an ideal moment for founding a new literary institution. Nor was he in command of major finances, or any finances at all, though significant financing was going to be needed. Still, he talked up his idea, and he persisted in doing so into the 1950s. And only in the 1960s, at a vexing moment in his life, did signs of progress come his way. He was in trouble for non-payment of taxes. The IRS was threatening to jail him. And Arthur M. Schlesinger, Jr., who was a White House assistant to President John F. Kennedy, put in a word on his behalf.

    Kennedy turned out to be an Edmund Wilson admirer. This was not an obvious thing to be, from a White House standpoint. Wilson had lately published a book about the Civil War in which he described Lincoln as a tyrant and the United States as a sea slug. And just then, in 1963, he published a pamphlet, The Cold War and the Income Tax, in which he explained that, given how abhorrent was America’s foreign policy, he was right not to have paid his income tax. But Kennedy was a lucid thinker. Apart from encouraging the tax authorities to show mercy, he decided to award Wilson a Medal of Freedom. The IRS objected. The president responded: “This is not an award for good conduct but for literary merit.” In one of the sections of The Cold War and the Income Tax, Wilson sketched his concept for a publishing series of the American classics. And Kennedy wrote a letter endorsing the idea.

    Wilson had competitors and enemies beyond the IRS, though. These included the academic literary scholars and their professional organization, the Modern Language Association, or MLA. The scholars and the MLA were promoting their own editions of American literary classics at the university presses. The MLA and the universities commanded a degree of political influence, and they stymied Wilson’s project at every turn, even if he did have a letter from the president. Wilson needed to enlist the support of major foundations, and the MLA made sure to get in his way. Finally he wrote up his frustrations in an essay called “The Fruits of the MLA,” which ran in two parts in the New York Review of Books. By then it was 1968, a year of insurrectionary riots, and Wilson, nothing loath, went about staging his own riot in the pages of the essay. He explained that some of the supreme classics of the American literary past had fallen out of print, which was a disgrace. He explained that still other classics had remained in print, but only in editions published by the university presses in conformity with the MLA’s very exacting scholarly principles.

    Wilson was endowed with a regal self-confidence, and he had come to feel that, if anyone could speak on behalf of America’s literature across the centuries, he was that person. And, on behalf of American literature, he had no patience for the MLA and its scholarly principles. He observed that, under the MLA’s supervision, teams of academic scholars had gotten hold of various writings by William Dean Howells, or Melville, or Mark Twain, and had pickled them in footnotes and pointless annotations, which rendered the books unreadable. The labor that had gone into amassing those scholarly annotations was, in Wilson’s word, a “boondoggle.” The book prices of the unreadable university editions, thanks to the boondoggle, rendered the books unaffordable, except to institutions.

    And then, having paused (as his readers may imagine) to pour himself another martini, he threw in a few remarks on the nature of academic life more generally in America. He noted a preposterous quality: “the absurdity of our oppressive PhD system of which we would have been well rid if, at the time of the First World War, when we were renaming our hamburgers Salisbury Steak and our sauerkraut Liberty Cabbage, we had decided to scrap it as a German atrocity.” He noted the consequences of this academic system for American literature: “The indiscriminate greed for this literary garbage on the part of the universities is a sign of the academic pedantry on which American Lit. has been stranded.”

    In other countries, he observed, publishers had figured out how to preserve and present their own national literatures in attractive formats. The model was the Bibliothèque de la Pléiade in France, which published a “library” or series of the leading French writers from across the centuries. The Pléiade volumes were leather-bound and handsomely designed, and, to judge from the displays in the French bookstores, they were a popular success, too. Perhaps if Wilson had been in less of a rush to show the United States at a disadvantage, he might have noted that, in spite of every achievement, the Pléiade had a lore of its own, some of which might bring us to reflect, by way of contrast, on America’s virtues. The Pléiade was not founded by one of the old-time French publishers, but, instead, by Jacques Schiffrin, an immigrant to France from Russia. Schiffrin got the project started in 1931 with a little help from André Gide, plus some investments from Peggy Guggenheim. Schiffrin was a crafty publisher, resourceful in his frugality. I know this because one day at the Strand secondhand bookstore in New York I bought an ancient copy of the original volume in his series, an edition of Baudelaire, only to discover that, when I went to read it, the decaying leather cover split apart and revealed a padding of old newspapers. This filled me with respect, once I had gotten over my indignation. Luxury leathers and gilt lettering (another feature of the series) were meant to be alluring. But the series was also meant to be accessible to a large public.

    And, to be sure, with the moderate prices that penny-pinching made possible, the Pléiade right away became too big for Schiffrin to handle on his own. With Gide’s help, he merged it into the much larger publishing house that was Gaston Gallimard’s. Only, a difficulty arose. In 1940, under German rule, the Aryan code went into effect, and, because Schiffrin was not only Russian but Jewish, Gallimard dismissed him from the house and kept the Bibliothèque de la Pléiade for himself. Schiffrin and his family just barely succeeded in escaping to Morocco, and then, once again with Gide’s help, to New York — a story that was told some years ago by Schiffrin’s son, the late André Schiffrin, an eminent American publisher, in his memoir A Political Education. But the German occupation came to an end, and Gaston Gallimard turned out to be content with how things had worked out. Jacques Schiffrin’s keenest desire was to return to France and resume his old responsibilities and prestige at the Gallimard house as the publisher of the Bibliothèque de la Pléiade. But Gaston Gallimard was not interested in sharing what by then he regarded as his, and he went on publishing the series by himself.

    About the books themselves, though, Wilson was right. The Bibliothèque de la Pléiade was a beautiful series, all in all. A reader today may have to imagine some of the beauty, given that in our own era France’s version of the MLA has achieved an influence over the Pléiade, and various editions in the series have come to resemble the American university editions that Wilson so vigorously detested. Svelte volumes from the early years have swollen into fat little things, bulging with hundreds of pages of arcane scholarly annotations, printed in type fonts so tiny as to resemble dust motes. Endnote numbers creep across the pages like insects. And the book prices have swollen in tandem with the annotations. In Wilson’s time, however, the Pléiade volumes were designed to fit into a man’s coat pocket or a lady’s purse. They were books that a reader could carry to the locations where reading actually takes place, which is not typically the university library, but may be, instead, the garden bench, or the café table, or the subway, or, for people such as Wilson, who used to read while walking the streets, the busy sidewalk. And the texts were uncluttered with notes and footnotes. You could read those books without feeling that a miniature professor was hectoring you from the bottom of the page.

    So he proposed an American series on the model of the early Pléiade. It was a project to bring into print and keep in print first-rate editions of the principal and well-remembered American writers of long ago. And the project was to bring into print the not-so-well-remembered writers, too, if their works were of permanent value — the historical writings, for instance, of Francis Parkman, the author in the late nineteenth century of France and England in North America, about the European colonizers and the Indians. Parkman, in Wilson’s estimation, was one of the American masters of historical literature, whose works had disappeared almost entirely from the bookstores. By presenting his argument for this sort of thing in the New York Review, and by launching his insults at the universities and their indiscriminate greed for academic garbage, and by pointing out academic boondoggles, he succeeded finally in attracting sympathetic attention from people with institutional power.

    His co-conspirator in this enterprise was the publisher Jason Epstein, a generation younger than himself, who was a pioneer of paperback publishing in the United States, not to mention one of the founders of the New York Review. And Epstein began, at last, to make progress. He did this at the Ford Foundation, where President Kennedy’s National Security Advisor, McGeorge Bundy, one of the colossally failed architects of the Vietnam War, had ended up in charge; and again at the National Endowment for the Humanities, during the period when Joseph Duffey (who died not long ago) was President Jimmy Carter’s appointee. It was a tale of bureaucratic machinations and big guns of the Democratic Party, as recounted a few years ago by David Skinner in the NEH magazine, Humanities. The progress was less than rapid.

    By the time the initial volumes made their way into the bookstores, it was 1982, and Wilson was ten years gone. And yet those first volumes turned out to be faithful to Wilson’s original concept in every way, except perhaps in their chunky size, which was not nearly as compact as the Pléiade editions. Also, it is hard to imagine how Wilson might have reacted to the Fourth of July bric-à-brac. But the basics were correct. The volumes were free of disfiguring footnotes and other scholarly intrusions. A small amount of useful information was tucked, instead, discreetly into the back pages, where it belongs. The initial set of books were by authors whose names would define the project for the general public: Whitman, Hawthorne, Melville, Harriet Beecher Stowe. One of the Hawthorne volumes was not up to standard. The editors chopped up Hawthorne’s short-story collections and, in a professorial sneak-attack, rearranged the stories in a pedantic order of their own. The other volumes were superb. And, in homage to Wilson, the initial set was followed right away by two volumes of Parkman’s history of colonial North America, even if Parkman’s volumes were never going to be a commercial success.

    The Library of America adhered to one more concept of Wilson’s, which was fundamental to him, though it was not something that he went on about in The Cold War and the Income Tax or in the New York Review of Books. This was still another inspiration that he drew from the French — in this instance from Hippolyte Taine, the literary critic, who was a great passion of Wilson’s. Taine, back in the 1860s, was the founder of modern literary theory — the founder, that is, of the idea that literature ought to have a theory, and science ought to be its basis. In Taine’s view, the moral sciences and the physical sciences were ultimately the same, and the science of literature ought to be comparable in some way to physiology or chemistry. He explained it in the introduction to his monumental History of English Literature: “No matter if the facts be physical or moral, they all have their causes; there is a cause for ambition, for courage, for truth, as there is for digestion, for muscular movement, for animal heat. Vice and virtue are products, like vitriol and sugar; and every complex phenomenon arises from other, more simple phenomena on which it hangs.”

    The simple phenomena that bore on the science of literature added up, in Taine’s formula, to race, place, and time. This meant, respectively, the influence of ethnic or national traits on literature; the influence of geographical location; and the influence of historical events. It is true that, in elaborating these ideas, Taine arrived sometimes at dubious judgments. The Aryan races, he figured, were beautifully poetic, metaphysically gifted, and scientifically innovative, whereas the Semitic races were faultily poetic, metaphysically stunted, non-scientific, and fanatical — which, then again, perhaps proves Taine’s thesis by showing that he, too, was a man of his race, place, and time.

    The belief that he had come up with something scientific filled him, in any case, with confident energy. Taine was superbly well-read in one language after another, vivid in his descriptions, skilled in his summaries. And the science itself, such as it was, allowed him to conjure a living quality in the history of literature. He pictured a world populated by writers across the generations, neighbors and kinsmen of one another, engaged in the easy conversation of people who share a background and inhabit the same terrain, even if they do not inhabit the same century — writers who produce their individual works, and, at the same time, by entering into colloquy with one another across the ages, produce the cumulative thing that is a national literature.

    Taine saw a triumphal quality in all this. It was the triumph of individual temperament, one writer after another, set against the landscape of national traits and the past; and, then again, a triumph of the national temperament. And he saw a heroic quality in his own enterprise, as the scientific historian of literature. He believed that, by revealing the material factors that enter into literature, and by revealing the literary traits that result, he had uncovered the mechanisms that would permit writers and critics in the future to confer a conscious shape on the evolution of their own national literatures. We could become, in his phrase from an essay in 1866, “masters of our destiny.”

    Wilson adored Taine even as a child. His father kept an English translation of the History of English Literature on his shelves at home in New Jersey, and those volumes appear to have been young Wilson’s introduction to criticism and the history of literature. It was Taine whom Wilson read in the street, as a young man in New York. He wrote at length about Taine in his study of Marxism, To the Finland Station, which came out in 1940. But his principal effort to bring Taine to bear on questions of his own time was in a book that he published three years later, even if, on that next occasion, he did not invoke Taine overtly. This was The Shock of Recognition: The Development of Literature in the United States Recorded by the Men Who Made It — one of Wilson’s most brilliant books, though one of his least known, and long and egregiously out of print.

    The Shock of Recognition is a compendium of commentaries by leading American writers (plus one or two Europeans) in the nineteenth and early twentieth centuries on the work of other leading American writers, perhaps from their own epoch or from an earlier one. These are commentaries by writers who have felt a shock of recognition upon reading one another — commentaries that amount to an ardent and even intimate conversation between one writer and the next. Wilson drew his title from Melville’s essay, “Hawthorne and His Mosses,” which expresses how excited Melville was to discover the profundities of Hawthorne. But there is also a commentary by Henry James on Hawthorne, and by T.S. Eliot on Henry James and Hawthorne both, and so forth, with each entry presented by an omniscient Wilson, who, like Taine, appears to be at home across the entire reach of literature everywhere.

    The immediate purpose of the book was to make accessible a series of lively discussions, some of them not well-known. But the larger purpose was to demonstrate that an American literature does exist — a national literature in Taine’s sense, consisting of conversation across the generations among writers whose social backgrounds and language and dwelling places and shared historical experiences lend coherence to their discussion. The notion that an American literature exists was a nationalist conceit in the decades immediately before the Civil War. Sometimes it was a battle-cry for worldwide democratic revolution, no less, as in Melville’s essay on Hawthorne, or in Whitman’s poetry. But even in those years the idea of a national literature seemed to be more of a slogan or an aspiration than a description of reality.

    For wasn’t American literature mostly a stunted and provincial branch of the fertile literature that was England’s — even if a few blowhards were claiming otherwise? Wasn’t the soil of American civilization too thin to produce a genuine literature — even if now and then a lonely genius wandered out of the forest and published a book? Wasn’t America’s truest talent a gift for inventing industrial gadgets, and not for expanding the literary imagination? Those were Henry James’s beliefs. But Wilson wanted to demonstrate that American writers did recognize one another, and they did so from one generation to the next, and they did not merely look to England and Europe. The American writers understood their project to be in some respect cumulative. And their conversation was itself a literary achievement — an imaginative reflection on the American imagination over the course of time. Here, in sum, was a consciousness that was also a self-consciousness — “the developing self-consciousness of the American genius,” in Wilson’s phrase.

    He himself was a classic representative of the long American tradition, in its original and narrowly tribal version. Wilson was a Mather on his mother’s side, descended collaterally from the Puritan divines, which meant that he came from the same Calvinist background that, half a century before his birth in 1895, had produced all but a few of the writers from those earlier times whom we still read today. It was natural for him to think of the literature in its long history as an extended conversation within a relatively cozy world of families and schoolmates and neighbors, which is how things did use to be in nineteenth-century New England — with an additional branch in a New York that descended from the Dutch, and another branch in the white South. About the African-American branch he was willfully ignorant, which I suppose is likewise the tradition, though in Wilson’s case the willfulness is hard to understand. He was a contributor to V. F. Calverton’s Modern Monthly in the 1930s and friendly with Calverton himself, and he had to go out of his way not to draw something for The Shock of Recognition from Calverton’s much-admired Modern Library compendium from 1929, An Anthology of American Negro Literature, a pioneering work.

    And yet he did understand that, even in his own intimate circles, the old American coziness was giving way to something more capacious. The most talented of his own closest literary companions when he was young — his Princeton schoolmate F. Scott Fitzgerald and their Vassar contemporary Edna St. Vincent Millay — were Irish Catholics, which, ethno-religiously speaking, represented a bit of a novelty in American literature. The novel development suggested still more novelties to come. And they came. These were developments that might have given pause to Hippolyte Taine. Then again, Taine, too, understood that populations change, and national cultures adapt and advance. The great forward step for English literature, in Taine’s interpretation, was achieved, after all, as a result of the Norman conquest of 1066, which allowed the French to civilize the Saxon barbarians. The non-WASP conquest of a previously WASP literature in the United States, then, the arrival not just of Irish Catholics but of everyone else as well, on top of the contributors to Calverton’s anthology, who had been here all along — this did not have to be a bad thing.

    But the old American tradition, the central trunk, did have to be rendered accessible to people who might not come by it in the simple way that Wilson did, amid the marvels of his father’s bookshelves, or bestowed upon him by a beloved English teacher at prep school. Here was an obvious purpose of The Shock of Recognition. It was to show writers and readers that a self-aware American tradition did exist, and to display its nature, and to allow everybody to enter into it. Or the purpose, more grandly, was to show Americans what was American civilization, from the standpoint of the writers whom he took to be the central personalities. He had given up Marxism only recently, but he retained something of the deep idea that had animated him during his Marxist years, which was the ambition, Taine’s and Marx’s alike, to put knowledge to practical use and become “masters of our destiny,” which in this instance meant becoming conscious of our own culture and capable of adapting it as we like.

    Wilson came up with his proposal for an American Pléiade in the same period, and it should be obvious that in doing so he was extending what he had done in The Shock of Recognition. His compendium of American commentaries was 1,300 pages and came to an end in 1938. But the proposed new book series of American classics was going to be, in principle, endlessly elastic. In this respect, too, it was going to resemble the Bibliothèque de la Pléiade, whose own series has by now expanded to some eight hundred volumes, which is perhaps too many — a concession to the need for annual commercial successes of some kind, even if the Pléiade editors can reasonably explain that the French literary tradition is, in fact, enormous. (And the editors can explain that, because the French tradition has sometimes claimed to be a universal tradition, they have reason to publish writers from other languages, too, in French translation.) And so it has been with the Library of America, which has likewise expanded.

    There are some three hundred volumes in the Library of America by now, which is more than sufficient in some ways, and less than sufficient in other ways. Among those hundreds of books are boxed sets of science fiction and sports writing, not to mention a volume of commentaries on Peanuts, the Charlie Brown cartoon, which, to be sure, is imaginative. And snobbism is yesterday’s yesterday. And a good essay is a good essay. Still, an extra couple of volumes of Whitman are crying out to be included. There is an original sin to overcome in Whitman’s case, which was that, because he was such an outsider, nobody in his own time ever brought out the shelf-long Complete Works that were standard for the other great writers. But he is, after all, the national poet, and shouldn’t his complete poetry, and not just Leaves of Grass, be easily available, and his literary criticism, and his political essays? The Library of America has brought out two volumes of Wilson himself, as is appropriate. And yet another four or five Wilson volumes are likewise crying out for inclusion: his study of the American Indians, his labor reporting, his essays on Canadian literature, his study of Marxism, his novels, his poetry, and his book about the literature of the Civil War, not to mention his private journals, which are marvelous. The original Pléiade ambition was, after all, to publish full editions of the major writers, and not just a couple of representative works, which meant Gide’s journals and not just his novels and essays. But I do not mean to complain. The Library of America has just now brought out an unpublished novel of Richard Wright’s, The Man Who Lived Underground, thereby rescuing Wright, not for the first time, from the unappreciative and censorious publishers of his own era, which is precisely the mission: to preserve and protect the American literature.

    I wonder if something like the Library of America could ever be founded in an era like our own. Wilson’s proposal ran into obstacles that were daunting enough, such that it was forty years before his project came to anything. But at least in those years America enjoyed a degree of self-confidence, culturally speaking, and the Democratic Party was not averse to occasional feats of policy innovation. It should be remembered that, in the Kennedy and Johnson era, the New Frontier and the Great Society were not just programs for social reform at home and democracy promotion abroad, but were also, in a small degree, intended to shore up the national culture. Kennedy made a point of turning the White House into a center for the arts, and Johnson took the impresario instinct to another level by establishing the National Endowment for the Humanities and its sister, the National Endowment for the Arts, quite as if arts and letters were matters equal in dignity, if not in budget, to agriculture and transportation. The Library of America was one more of those new institutions, on a miniature scale. It was different only in that, after the NEH funding had gotten it launched, the winds of independent financing filled its modest sails, and onward it floated, propelled now and then by further gusts of NEH beneficence, and guided not at all by Washington, but, instead, by Wilson’s colleague Epstein, and his own colleagues and successors. Those were cultural achievements of the Democratic Party, directly or indirectly.

    Only, that was long ago. Today everybody understands that anyone who stood up to speak on behalf of the higher zones of the national imagination would get shouted down at once in the name of one or the other of populism’s contemporary demands, which are the leftwing demand for identity representation and the rightwing demand for a dismantling of higher zones in general. And the Democratic Party could hardly be expected to rise above those debates. I wonder even if anybody in the upper ranks of the party has given thought to these matters. If the Democrats did stop to reflect, however, they might observe that America’s political troubles right now appear to descend in some degree from a collapse in cultural understandings, which might suggest that, back in the day, John Kennedy and the Johnson administration were on to something with their cultural promotions, and some kind of policy to shore up the culture and its literature would fit rather well with policies meant to shore up the highways and the climate.

    But I never meant to go on about politics. How did I even get started? I began this reflection standing in front of my bookcases, fondling one book or another as they slip into my hands — here is Emerson: Selected Journals 1820-1842, and here is Emerson: Selected Journals 1841-1877 — precisely in the hope of leaving politics and its miseries behind. I wanted a vacation from the politicians. I wanted to read about the Over-Soul! But there are no vacations. The times are to blame, of course. Or maybe the volumes are to blame, with their Fourth of July covers and the spangled logos. Or the Biden administration is to blame, with its repeated and all-too-thrilling echoings of Democratic Party grandeurs of long ago.

    Where Have You Gone, Baby Face? 

    I watched too much Turner Classic Movies at an early age. It can be a burden: all my celebrity crushes have been dead for at least twenty years, and to this day I think that marcelled hair looks normal. But my obsession with films of the 1930s and 1940s can also instill another bias in a contemporary movie nut: I have no doubt that the depiction of women in Hollywood films has never been as good – that is, as rich, as varied, as focused, as human – as it was during the height of the studio system. The fortunes of women in real life may have gone up, but the fortunes of women in movies have gone down. 

    This is, admittedly, non-obvious: it has been a banner year for movies by, for, and about women. There are more female writers and directors working now than at any point in the past century (more on this later). Modern movie women can pursue careers instead of relationships; they are allowed to talk about racism or raise kids out of wedlock. So what’s missing? Hollywood, for one thing. Women’s pictures in the new ‘10s and ‘20s are mostly produced outside the major studios, and their audience is limited to the kinds of people with a high tolerance for indie movies. At best, this can produce sharp, sincere character studies which would have been impossible a hundred years ago; at worst, there is a self-conscious, artsy unfunness to the whole endeavor: entertainment is for children and comics fans, but we’re here to watch a miscarriage or a rape. A quick look at any recent top-ten grossing list shows a market dominated by big-budget action movies and pre-existing intellectual property. Almost all of them have male leads, excepting the occasional action heroine or Disney princess. The blockbuster model is hardly only a woman’s issue – I have it from reliable sources that men also like interiority and nuance in their motion pictures – but the marginalization of female roles is one of its more noticeable side effects.

    The present situation makes it easy to write off the scarcity of women on the big screen as a necessary consequence of commercial, profit-driven Hollywood-style cinema. But this is not so. Molly Haskell first described the problem in 1975, in her classic study From Reverence to Rape: “Women, in the early and middle ages of film, dominated. It is only recently that men have come to monopolize the popularity polls, the credits, and the romantic spotlight … back in the twenties and thirties, and to a lesser extent the forties, women were at the center.” Haskell was writing in response to the New Hollywood movement — a brief, brilliant blip in movie history when major studios let young European-inspired directors make movies about anything they wanted. Most often, they wanted to make movies about themselves, or their alter egos — at any rate, about men. I’m not knocking it. The artist-driven approach produced many great films, which also happened to be overwhelmingly male. Hollywood has changed since then, but the balance has not changed with it. 

    The ironic truth is that it was at the height of the studio system — the great American movie factory — that women ruled the screen. The really important thing to understand about the cinema landscape of the 1930s is how much bigger it was: major studios such as Paramount and MGM would start and finish a new movie every week. Before television and the internet and $15 tickets, people watched more movies. Not only that, but the movies they watched were more varied — and, frankly, much girlier. If modern Hollywood is selling a certain kind of escapism, rooted in the heroic fantasy that a single exceptional individual can save the world, depression-era studios specialized in aspirational elegance and glamour. The super-powered heroes and global stakes that are so universal in today’s blockbusters were unheard of in the 1930s. Instead, the most popular films of the decade include weepy female-led melodramas, musicals (whether the Astaire/Rogers vehicles at RKO, the scrappy, down-to-earth stage door movies at Warner Brothers, or Lubitsch’s pseudo-European operettas at Paramount), romantic comedies, and literary adaptations from Little Women to The Wizard of Oz and Gone With The Wind. Even the most swashbuckling boys’ adventure films had major roles for their female co-stars – what would Errol Flynn have been without Olivia de Havilland?

    Since the studios of the 1930s produced many times more movies than their modern equivalents, the range of problems they managed to depict was correspondingly broader. While navigating a stricter set of taboos than their modern counterparts, studio-era filmmakers evinced a much wider curiosity about women’s lives. A list of major dramas would touch on, among other subjects, mothers forced to relinquish their children to give them a better life (Stella Dallas, Lady For A Day, That Certain Woman), elderly couples separated by financial necessity (Make Way For Tomorrow), prostitutes with tuberculosis (Camille), women’s prisons (Ladies They Talk About, Condemned Women, Ladies Of The Big House), religious chicanery (The Miracle Woman), violent gangster ex-boyfriends (Midnight Mary, The Purchase Price), and romances between upper-class men and working-class women, whether as Cinderella story (Alice Adams) or tragedy (Three Wise Girls, Forbidden). By placing women at their center, these movies were able to analyze an entire society — they were what the Indian cinema of the same era called “social films.”

    These varied films share some common characteristics. More often than not, economic issues are front and center. This was the great depression: poverty was the background radiation of everyday life. Hard times produced a smart, scrappy, unsentimental breed of heroine. It is impossible to imagine a bimbo in a ‘30s movie; the type simply does not exist. If a woman uses her sex appeal, it is always with intention and purpose. The overwhelming emphasis of these films is what their leading ladies want and what they will do to get it. Consider Gone With The Wind, which is really much more about the 1930s than the 1860s. Scarlett O’Hara is the quintessential ‘30s heroine when she makes a dress out of old curtains to seduce Rhett, and when she brutally claws her way back into a life of luxury, and most especially when she swears that she will never go hungry again. Of course she is completely awful, but she wants to survive, and Vivien Leigh’s performance keeps us magnetized for almost four hours through the sheer force of that desire.

    Even the most sensitive and brilliant of the women’s pictures of our day can start to feel a bit anemic in comparison. Our heroines are more realistic, more enlightened, and even more profane than their ‘30s counterparts, but also weaker and less ambitious. We like movies to be authentic and personal — sometimes the word “raw” gets tossed around — and in our culture the easiest way to show authenticity is through vulnerability. Pain feels more real to us than grit or desire. As a result, there is a distinct prestige bias for female suffering. In the last year alone we have had The Assistant (sexual harassment), Promising Young Woman (rape), Never Rarely Sometimes Always (abortion), Pieces Of A Woman (stillbirth), and Beanpole (all of the above and then some). Interestingly, most of the critically acclaimed films that do not revolve around some formative sexual or obstetric trauma are period pieces. The closer these movies get to describing our real present-day experiences, the more likely they are to depict women whose bodily autonomy has been intimately violated.

    The movies of the 1930s are not at all shy about female suffering, or even sexual violence; at least they were not before the infamous Motion Picture Production Code was implemented in 1934. The difference is in how this suffering is depicted: in the old films the suffering is the start of a story rather than its subject. Studio-era filmmakers were less interested in the inner mechanics of trauma than in actions that their heroines took in response — victimization as an engine for agency. One of my favorite examples, Blondie Johnson, begins when the titular protagonist quits her job after being sexually harassed by her boss. After she spends one night on the street, her ailing mother promptly dies, because this is 1933 and the concept of laying it on too thick had not yet been invented. Blondie — played by the remarkable and somewhat eponymous Joan Blondell — decides to get rich, whatever it takes. She starts running confidence games, dates a gangster, and gradually moves up the ranks. Soon enough, she has bumped off the boss and taken over his lucrative bootlegging operation. In the end, she gets to have her cake and eat it too: as a sop to the censors, she is arrested and convicted, but the boyfriend survives, and they tearfully promise to build a new life together — on the level. The whole thing is great fun. Like Gone With The Wind, it is a clear empowerment fantasy: Blondie gets her own back against a system that failed to protect her, and she does it without relying on men. Blondell specialized in this type of no-nonsense working-class dame, and she does a brilliant job of channeling Blondie’s anger and cynicism as they gradually harden into steely self-sufficiency. We want to see her be ruthless, and we are happy for her when she no longer has to be.

    Yet Blondie is still a good girl at heart. Things take a darker turn in the infamous Baby Face, also from 1933, starring Barbara Stanwyck. Our heroine, Lily Powers, is the daughter of a slimy speakeasy owner in Pittsburgh. She is regularly groped by her father’s customers, and it is heavily implied that he is pimping her out to them, too. The movie really starts when Lily responds to an everyday instance of sexual harassment by pouring scalding hot coffee on the offender’s lap. What’s really remarkable is the casual way she does it — the same natural motion she would use to pour a drink, with just a hint of a sneer. “Oh, excuse me,” she drawls. “My hand shakes so when I’m around you.” Lily has only two friends in the world: a black waitress, Chico, and an elderly German cobbler. He tells her to read Nietzsche (really — the camera zooms in on the book), from whom she takes away a new credo: “all life, no matter how we idealize it, is nothing more nor less than exploitation.” (Before the censorship board got to it, the rest of the cobbler’s advice read as follows: “That’s what I’m telling you. Exploit yourself. Go to some big city where you will find opportunities! Use men! Be strong! Defiant! Use men to get the things you want!”)

    When her father dies in a gruesome explosion, Lily takes the lesson to heart. She heads to the big city, and soon enough is sleeping her way to the top of a major New York bank. There is a clever bit where the camera slowly and salaciously pans up the building as she rises from the office boy to the manager to the bank’s president. Stanwyck plays Lily with a combination of concentrated contempt and barely suppressed rage — at herself, at men, at the world. When an altercation between two of her lovers leads to a murder-suicide, she barely reacts. In the end, she settles down with the bank president’s successor, a smart, rich playboy who has her number and loves her anyway. As in Blondie Johnson, there is a last-minute and somewhat dutiful gesture at redemption: about thirty seconds before the credits roll, she is given the chance to trade in her million-dollar jewel collection for a chance to start a new life with him. We can believe they go on to live a happy life together, but I don’t think it matters either way. Lily Powers can save her husband or she can keep her loot. It is her choice. 

    Every time I watch Baby Face, I find myself thinking about how impossible it would be to make it today. It is not that Lily is a bad girl — we have those. It is how thoroughly she gets away with it. The film offers no judgement. It betrays no guilt or embarrassment about its ambitious and amoral heroine. A more modern film might try to build up our sympathy with Lily by giving her a moment of softness, hesitation, or weakness — that de rigueur vulnerability. It might emphasize her trauma or the injustice of the society that enabled it. Baby Face does not bother. This is one of Stanwyck’s most closed-off performances, and it does nothing to weaken her electric and sympathetic bond with the viewer. That’s another thing about ‘30s movies: sympathy is not earned by weakness, but by strength.

    The same principle is at play in Stella Dallas, a very different Stanwyck tour de force. Here, in 1937, she plays a poor mill-worker’s daughter, Stella Martin, who marries Stephen Dallas, a gentleman currently down on his luck. They have a daughter, Laurel, but gradually they drift apart as Stella’s vulgarity offends her husband’s more refined sensibilities. Stella is loud and garish and associates with race-track gamblers, but she does love her daughter, and she is horrified when her reputation gets in the way of a teenage Laurel making her own high-society match. In the end, she decides to protect Laurel by giving her up to be raised by Stephen and his equally well-bred second wife. The final shot of the movie is of Stella standing alone in the rain, smiling wrenchingly as she looks through the window at Laurel’s wedding. It is a blatantly manipulative maternal tear-jerker, but it works. Stella’s pain has made her strong. Stanwyck’s performance is so powerful precisely because it is so contained. Filmmakers of the 1930s (and 1940s and 1950s) understood that stoicism is more brutally effective at wringing tears out of an audience than any raw emotional breakdown or drawn-out screaming childbirth. We weep for Stella because she refuses to weep for herself.

    The same principle — drawing audiences in through strength and self-sufficiency — is equally alive in ‘30s comedies. The sunnier counterparts to films such as Blondie Johnson and Baby Face are the glorious pre-code Warner Brothers musicals — movies such as 42nd Street, Footlight Parade, and Gold Diggers of 1933. They explore many of the same themes — exploitation, sex, capital, Joan Blondell’s legs — but with happy endings and tap dancing instead of gang violence. I can briefly summarize all of them: there is a stage musical production featuring a wide-eyed ingenue (Ruby Keeler) who pursues a romance with the juvenile lead (Dick Powell) while surrounded by a rotating cast of wisecracking chorus girls (Joan Blondell, Una Merkel, Ginger Rogers, Aline MacMahon). These were some of my favorite movies as a child. I was enthralled by the huge, trippy Busby Berkeley musical numbers; everyone was so snappy and smart and fast on their feet. I did not realize until much later that all of these women were fairly explicitly trading sex for money, although the title of Gold Diggers should really have been a tip-off.

    The plot of that movie centers on four dancers thrown out of work when their show shuts down. They want the same thing Lily Powers wants: to marry rich, and as quickly as possible. But what she sets out to get with cruelty and grim determination, they pursue with ingenuity and charm. Since this is the softer side of Depression-era cinema, they even get to fall in love. Again, there is no judgement here: only a shared understanding that one does what one must do to get by. If you want to understand what the movie is really about, I recommend pulling up the opening number on YouTube. It involves Ginger Rogers and a line of backup dancers in bikinis made out of coins singing about how we’re all so prosperous now that the Depression is over. At the end of the song, the men from the bank show up to repossess the sets. In an early scene, Blondell’s character reminisces about her old Park Avenue penthouse while MacMahon casually leans out the window to steal the neighbor’s milk. If they do manage to marry into a life of wealth and security, it is really no more than they deserve. The whole thing is adult fun, and not in the R-rated sense. Rather, it’s that these women are grown-ups: capable, self-possessed, resilient, and realistic, but not without a good sense of humor.

    This kind of sexual license was impossible after 1934, when the Motion Picture Production Code introduced a strict set of standards for on-screen conduct — there was to be no profanity, no miscegenation, no glorification of crime, no ridicule of the clergy. Licentious nudity was forbidden, as was the implication of cohabitation before marriage, and filmmakers were urged to be cautious about married couples sleeping in the same bed. The really remarkable thing, however, is that the people who came up with this list of rules were less afraid of sex than we are now. On screen today, the lingering threat of sexual violence is more common than sex as a fulfilling, mutually pleasurable activity. Is there an on-screen couple today who are obviously enjoying themselves half as much as Nick and Nora Charles? After the Code, all that sexual energy had to go somewhere, and so it was channeled into a new kind of comedy: the screwball.

    The enormous success of Frank Capra’s It Happened One Night almost single-handedly launched Columbia Pictures into the ranks of the major studios in 1934 while setting the tone for every screwball comedy that followed. More than any other genre, I think, screwball exemplifies the fluid, lunatic equality of the cinema of the 1930s. The man and the woman in a screwball comedy (and if we can write off a cultural phenomenon as significant as the Marx Brothers as an exception, there is always a man and a woman) live in their own private world. They start out embedded in some kind of a civilization — the upper classes, a house full of elderly encyclopedia writers, the mob, not that it really matters — whose rules and norms fall away as the couple is pushed together. Too much is happening to them too quickly for everyone else to keep up.

    It Happened One Night opens with sheltered heiress Ellie Andrews (Claudette Colbert) jumping off her father’s yacht in Miami, swimming to shore, and catching a Greyhound bus to New York City; on the way she falls in with the unemployed newspaper reporter Peter Warne (Clark Gable). High society and newspapers are two of the most popular screwball settings — both of them come with a rigid hierarchy and a specific, impenetrable constellation of tacit and explicit values that practically invites a benign sort of blasphemy. Those codes of conduct matter because we can leave them behind to watch Peter and Ellie, alone, in the middle of nowhere, gleefully turning all of them upside-down.

    One of the funniest scenes in the movie comes when a gaggle of private detectives hired by Ellie’s father shows up at a motel where the couple are staying under an assumed name. Peter and Ellie are having a very real fight when the detectives burst in — but she instantly puts on an injured Southern housewife bit and he effortlessly plays along. This is the first time we see how intimately they understand each other, how comfortably they fall back into playing a game with the two of them against the world. This is how screwball works: all the anarchy and nonsense and plot contrivances are a scaffold for two people to develop a deep sympathetic attachment to one another. Think about the scene in Holiday with Katharine Hepburn and Cary Grant doing back-flips in the playroom of her family’s mansion while the grown-ups congregate downstairs, or the bit in Trouble in Paradise where Miriam Hopkins steals Herbert Marshall’s pocket-watch while he makes off with her garter, or when Ellie leaves her aviator fiancé at the altar to hit the road with Peter and spend their honeymoon in another cheap motel. In the process of falling in love, they recognize each other and begin to recognize themselves, without the impositions of family or class or sex. Screwball love is never simple or unspoken. At its best it is a perpetual give-and-take, a partnership, the continual process of making each other a better, freer person.

    In sum: women on the screen in the 1930s project strength. They are independent and smart and stoic and, yes, very sexy. But is this so much better than what we have now — and what does “better” mean? Is it more realistic? More feminist? I could not say. Trauma does not reliably make people stronger. Weakness is a central part of being human. Everyone experiences it, and the stories we tell should reflect it. On the other hand, sometimes we all like to see ourselves in a position of strength, with irreducible inner resources, and utterly without self-pity. And comedy is an admirable response to hardship — a sign of mastery. I miss the scrappy Depression-era way of looking at the world. It is not the only way to go through life, but we could use a little more of it.

    The question remains — what changed? There are a lot of things that Hollywood studios took more seriously than contemporary indie producers do (starting and ending, of course, with profit). But female audiences were high on the list. Until the 1940s, when studios started conducting empirical research on audience composition, it was widely assumed that most people who went to the movies were women. Even then, the attitude took a long time to die. In Picture, her 1952 account of the making of John Huston’s The Red Badge of Courage, Lillian Ross describes an early meeting with Dore Schary, the head of production at MGM: “There was resistance, great resistance, to making ‘The Red Badge of Courage.’ In terms of cost and in other terms. The picture has no women. This picture has no love story. This picture has no single incident … these are the elements that are considered important in determining success or failure at the box office.” Imagine an era when not having any women was the first and most obvious predictor of a box office flop!

    The industry focus on female audiences is even starker in fan magazines such as Photoplay and Modern Screen. A quick glance at the advertisements makes it clear who their readers were — the most popular products were cosmetics, laundry soaps, and outfits worn by the stars — “faithful copies of these smartly styled and moderately priced garments on display in the stores of representative merchants whose firm names are conveniently listed on page 115.” These fan magazines helped to create another key element of classic Hollywood filmmaking: the star persona. Working closely with the studios’ own publicity departments, fan magazines gave stars a continuous identity that they carried with them from picture to picture. (Today we would call it their “brand.”) Greta Garbo was invariably alluring and otherworldly. Norma Shearer was poised and ladylike. Marlene Dietrich was sexy, exotic, and independent (though, as the magazines occasionally reminded their readers, she still loved nothing more than visiting her little daughter and husband back home in Germany). A spate of articles around Katharine Hepburn’s screen debut in 1932 crafted the narrative that would, with remarkable precision, define her career for the next six decades: a sophisticated thespian with patrician New England roots, but also something of a tomboy. One glowing review noted that she made her stage debut at age eight in a production of Beauty and the Beast, and she played the beast.

    The magazines’ female readers were encouraged to view female stars as aspirational figures. They identified with the character that the star created for herself much more than with her role in any particular film. In one fascinating letter published in the January 1934 issue of Photoplay, a Miss A. M. Johnson describes her own cinematic education as an awkward young girl: “Watching the incomparable Shearer, she learned to have poise and self-assurance. Watching the breathtaking beauty of Marlene, the ethereal loveliness of Garbo, the lady-like Harding and the sweet sincerity of Hayes, she kept on learning. She isn’t timid any longer, or lonely. She is popular now. She had, for the asking, the greatest teachers in the world.” One of the reasons ‘30s stars always projected strength is that audiences in that era did not go to the movies to see reflections of their own weakness.

    The stars were supposed to be like them, but better: more glamorous, more poised, more capable. The most successful of these actresses — Katharine Hepburn, Bette Davis, Barbara Stanwyck, Joan Crawford, Greta Garbo — had the power to shape the material they were given. Indeed, they shaped it simply by bringing their enormous individuality to bear. It is impossible to imagine any other human being making some of the acting choices that, say, Bette Davis does in Jezebel. The part is a rebellious Southern belle, but she plays it with a fragile, manic intensity, floridly and tragically, her eyes bugging out and her hands contorting into claws. (The rumor is that the role was her consolation prize from Warner’s for not getting cast as Scarlett O’Hara, and we should all take a moment to imagine what that movie would have been like.) It is great and she is great and there is absolutely nobody like her — there has never been anybody like any of them. This is the genius of the star system.

    Male stars did not attract nearly the same level of identification from their audience. As the film critic Dan Callahan has observed, “the limitation of so many of the screen men of the classic Hollywood period is their sense that men should not be acting at all.” There were plenty of fine actors in classic Hollywood, but with a very few exceptions — James Cagney, Cary Grant — none of them had the all-consuming personalities of their female counterparts. The stoicism of ‘30s women becomes a kind of repression in the men, most of whom struggled with the belief that projecting someone else’s emotions was just a little bit unmanly. All of that changed, of course, with Marlon Brando and Montgomery Clift and the Method. It created a new license for male vulnerability on screen — and it brought an end to the old kind of female star. 

    A lot of things happened at the same time in the late 1940s and early 1950s. In 1948, after a long legal battle, the studios’ corporate owners were forced to sell off their theater chains, leaving their films without guaranteed buyers. Television was cutting into their market. Fewer movies were being made, and more of these were spectacles designed to compete with the small screen. The apparatus that had produced larger-than-life stars was collapsing, and audiences were gravitating towards a different kind of actor. For whatever reason, the women coming out of the Actors Studio never caught on with film audiences. Method actresses such as Kim Stanley and Geraldine Page had thriving stage careers but could not make the transition to the silver screen. Clift’s breakout performance in Red River and Brando’s in On The Waterfront are open, sensual, and sensitive — the kinds of parts that would have been reserved for women just a few years earlier. It is almost as if there was only so much emotion to go around: when the creative space available to men expanded, the room for women shrank.

    Another popular explanation for Hollywood’s shift in focus during these years is a decline in the number of female creators, but I have never given this theory much weight. It is true that there was a cadre of influential female writers in Hollywood in the 1930s — women such as Anita Loos, Lillian Hellman, Frances Marion, Mary McCall Jr., and Ruth Gordon. Mary Pickford went from being a major star to a major producer, first of her own films and later as one of the founders of United Artists. But women were never more than about fifteen to twenty percent of screenwriters over the course of the decade (exact numbers are hard to find because writers were so often uncredited) — in fact, the proportion was about the same as it is today. That is still an improvement on subsequent decades, but a significant drop from the 1920s, when almost half of all screenwriters were women. The most powerful women in ‘30s Hollywood were those who started their careers in the silent era. Only rare exceptions, such as Dorothy Parker or Adela Rogers St. Johns, broke into the business after it switched to sound (it helped that, like many sound-era screenwriters, they came from established literary careers). There was only one working female director — Dorothy Arzner — and female producers were always a small minority.

    One of the great ironies of the studio era is that the rise of the talkies liberated women on screen while drastically limiting their creative role behind the scenes. On-screen women of the 1930s were more intelligent, more sophisticated, and generally more grown-up than they were in the 1920s, simply because they could speak. The greatest silent film actors, men as well as women, excelled in their ability to play broad types. There is only so much one can do with just a face, even when one has a face as infinitely expressive as Lillian Gish. Behind the camera, meanwhile, it was common in the ‘20s for studios to buy scenarios from freelancers. There were some highly paid staff scenarists (with Frances Marion leading the pack), but far fewer than sound-era studios would employ in their writers’ stables. Without dialogue, writing for the screen was less important and less prestigious, and thus, unsurprisingly, more female. The talkies brought a new crop of established writers to Los Angeles: journalists and playwrights, both largely male professions.

    The surprising thing, then, is not that the women of ‘30s cinema were mostly written by men, but how little difference it seems to have made. I have never been able to tell whether a given movie from the 1930s was written by a man or a woman. They are equally likely to feature female protagonists and focus on women’s issues. Arzner is a fine director, but I have not been able to identify a uniquely female subjectivity in her work — because female subjectivity was so much the norm in the Hollywood of her day. 

    Loos, one of the most important and prolific screenwriters of any gender, said this about the movies in a late-in-life interview: “None of us took them the least bit seriously.” Hollywood in its earliest decades was full of people who treated their work with a mixture of perfectionism and contempt. They worked fourteen-hour days, spent years polishing scripts, and invented new lenses just to get a particular effect, but they knew that the studios they worked for were glorified factories churning out a movie a week. Nobody involved in this system saw themselves as an artist — and if they did, they saw themselves as slumming it. Instead, they were brilliant craftsmen. And the truth is that it doesn’t take lived experience or any kind of special or arcane knowledge to write formidable female characters, just close observation and dedicated craftsmanship.

    Here is an unexpected hope: perhaps the rise of the streaming giants will restore some of that factory mentality. Netflix has fifty-one movies on its slate for this year. There are more people making more movies in mainstream Hollywood than at any point in the past eighty years. And that means more stories — about strength and weakness, about men and women, about a million other things. It might not be the old studio system come again, but surely it is no less capable of exploring all the aspects of human personality, of delivering to the screen the fullness of human life. 

    American Inquisitions 

    Fyodor Dostoevsky published the first installment of The Brothers Karamazov in February 1879. The novel was the culmination of a decade of ideological strife, during which Dostoevsky had noted a steady slide toward populism. Socialism, the passion of Dostoevsky’s youth, was an enthusiasm still on the march. The author of The Brothers Karamazov was a devout Orthodox Christian and a conservative, a reactionary perhaps. He poured the expansive politics of his era into The Brothers Karamazov and especially into a phantasmagoric chapter — often read on its own — titled “The Grand Inquisitor.” For the past one hundred and forty years, this text has been mined for clues to modern politics. In the verdict of Lionel Trilling, “it can be said almost categorically that no other work of literature has made so strong an impression on the modern consciousness.” Modern consciousness was never more receptive to “The Grand Inquisitor” than in the 1930s and 1940s, when Dostoevsky was typically read as a prophet of totalitarianism.

    Dostoevsky had foreseen this interpretation. In a letter written while he was completing The Brothers Karamazov, he worried about a regime that would provide “one’s daily bread, the Tower of Babel (i.e. the future reign of Socialism), and complete enslavement of freedom of conscience.” Enter the Grand Inquisitor. In the novel he directs the Inquisition in sixteenth-century Spain, having co-opted Christian mercy and replaced it with a cynical recipe for social control. According to Dostoevsky’s great biographer Joseph Frank, the Grand Inquisitor “has debased the authentic forms of miracles, mystery, and authority into magic, mystification, and tyranny.” His church enjoys absolute power. It traffics in mystification and magic. It caters cunningly to people’s spiritual needs and efficiently to their physical needs. For those who rebel against this domination masquerading as religion, there is the Inquisition. Thus did the Grand Inquisitor anticipate the techniques of the KGB, the Gestapo, and the political systems that those organs of oppression were meant to protect. 

    Dostoevsky’s Grand Inquisitor is plausibly a proto-Bolshevik or a proto-fascist. Yet such readings can reduce this text to a kind of prophetic political journalism, to commentary avant la lettre on the cataclysmic 1930s. Read it again now, in the middle of our own tribulations. Detached from those long-ago debates, “The Grand Inquisitor” emerges as a timeless meditation on authoritarian attractions and on freedom’s vulnerabilities. As the Grand Inquisitor declares, “man prefers peace, and even death, to freedom of choice or the knowledge of good and evil… Nothing is more seductive for man than his freedom of conscience, but nothing is a greater cause of suffering.” The dilemma of politics in this text is that freedom is a dilemma. Freedom of conscience causes pain, which is precisely the emotional resource on which the Grand Inquisitor draws: freedom’s impositions are the secret of his power. The avowedly illiberal Dostoevsky was dramatizing a spiritual reality — freedom of conscience as a burden, a task, an impossible ideal — that is no less pervasive in free societies than in those regimes designated by the totalitarian label. 

    Perhaps “The Grand Inquisitor” is not just a discourse about them, the unfortunate totalitarians overseas. Perhaps it is also about us, citizens empowered by the Bill of Rights and basking in liberty. So considered, “The Grand Inquisitor” could be as illuminating about the United States, past and present, as it was supposed to be about Stalin’s Soviet Union or Hitler’s Germany. Without ever going to the extremes of sixteenth-century Spain, without an established church or a dictatorial government, the United States has never lacked for inquisitions. They have appeared, disappeared, and reappeared throughout American history, exquisite barometers of the national mood and the objective correlatives of political and cultural power. They are as often creations of society as of government. They cast their strange, lurid light on the seductions and the suppressions that emanate from freedom of conscience. And they are once again abroad in the land.

    The birthplace of the American inquisition is Salem, Massachusetts. In 1692, several members of this provincial New England community were put to death after a public investigation into witchcraft. Many outbreaks of alleged witchcraft had occurred in early modern Europe, and Salem was not at all unusual in the way it combined civil and theological authority, the supernatural and the legal, the magical and the scientific in order to save itself from witches. Nor were the guilty in the “Salem witch trials” quietly executed. They were given the role of exemplary perpetrators. The trial, the public exposure of their guilt, was the point. Only by seeing the forest into which the few had strayed could the many find their way to righteousness.

    Historians disagree about the roots of what happened in Salem. It may have been material, a redistribution of property through the convenient discovery of witches. It may have been psychosexual, a letting go of forbidden energy under the pretense of witchcraft or amid a sprawling investigation into witchcraft. It may have been the anxiety of a religious elite convinced that the Massachusetts Bay Colony was not as homogeneous or as devout as it was supposed to be. The full inventory of motives for those tribunals may be lost to the vagaries and the hyperbole of the historical record. Whatever the actualities of the event, Salem’s legacy in American history is not institutional or legal or political. It is symbolic. 

    With the creation of the United States, memories of Salem in 1692 began to acquire a gothic frisson. Witchcraft became the stuff of Hawthorne stories and then Halloween kitsch. The Salem witch trials were a ready metaphor for obscurantism: for the zealotry of theocrats, of those who believed in witches, of those who wished to destroy a human being accused of witchcraft. It was this bequest of the seventeenth century that was rejected in the language of the American eighteenth century, in the Declaration of Independence and the Constitution. Salem’s murk was left behind in the clean lines of Thomas Jefferson’s University of Virginia, proof positive of a new republican order. Universities would deliver what they were meant to deliver — enlightenment. The United States was born in the age of reason, though reason was often more a proposition than a reality in the new country. Reason had at least been put on a pedestal.

    Instead of medieval inquisitions, then, the United States would concentrate on delivering justice. Few societies are as enamored of law as is the United States: the approach to justice through law speaks to the raison d’être of the American republic. The nineteenth-century figure who best captured this infatuation was Abraham Lincoln, a lawyer by trade. Moved as so many other northerners were by the passage of the Fugitive Slave Act in 1850 and by the moral upheavals of the subsequent decade, Lincoln had the genius to fight the Civil War with an eye to its legal outcome, which would be the guarantor of its moral outcome. His enduring achievements were the Emancipation Proclamation and the constitutional amendments that widened the scope of American citizenship. His literary-political style, like Jefferson’s, was precise, careful, analytical. His mysticism never clouded his mind. “With malice toward none, with charity for all” — such phrases banish the inquisitorial spirit.

    But beware the romance of Lincoln’s oratory. In Lincoln’s time and after, there was often malice toward many and charity for few. Lincoln and Jefferson did not get the last word, and the inquisitorial spirit turned out to be a regular feature of American modernity. In the twentieth century it bubbled up from the extremes of the political spectrum. The Left went first. In the hothouse world of radical politics, the Dostoevskian dramas playing themselves out in the Soviet Union had an American resonance. It began with left-wing factionalism and culminated in the career of Joseph Stalin. (The Bolsheviks owed their name to factionalism, to their false claim to majority status and their mockery of their opponents as Mensheviks, the minority.) Stalin practiced the darkest arts of factionalism and, by logical extension, of inquisitions. Improvising rituals of investigation and confession, he had his enemies paraded before the Soviet public. Fewer than twenty people were put to death in Salem in 1692. Stalin had millions incarcerated and executed. The Massachusetts Bay Colony could not compete with the modern world in paranoia and dogmatic cruelty. 

    The American Communist Party slavishly followed Stalin and praised Stalinism. It justified the Moscow trials as it would the Molotov-Ribbentrop Pact of 1939. The American Communist Party also conducted a holy war against the Trotskyites, casting the demons out and subjecting those who wanted to stay on board to “party discipline.” The American Communist Party did not have the instruments of repression available to Stalin, who ruled over a police state that
    could conduct inquisitions of the sort portrayed in 1940 in Arthur Koestler’s epochal novel Darkness at Noon, a book that self-consciously retraces The Brothers Karamazov. Yet American communism endorsed the methods of the Soviet trials, which ensured that ideological and political enemies, having been publicly denounced, were airbrushed out of the historical photographs. 

    The reach of far-left sentiment — sometimes communist, sometimes not — was more cultural than political in the 1930s. It was an influence on educated people. The party and its fellow travelers were formidable enough to initiate a curious inquisition in the 1940s, just as an era of radical efflorescence was ending. In 1948, Whittaker Chambers, a former communist, accused Alger Hiss, a former State Department employee, of having spied for the Soviet Union. As if by magic, Chambers got put on trial. In the courtroom and in the press, he was accused of mendacity, of pathology, of homosexuality. A witness for the prosecution, he was not convicted, of course; but his reputation was irreparably damaged. An avid Dostoevsky reader, Chambers identified with Nikolai Rubashov, the persecuted protagonist of Koestler’s Darkness at Noon. At the same time, of course, Hiss was found guilty; he had been a spy, and Chambers, as would much later be shown definitively, had been telling the truth. For many of Hiss’s supporters, the trial itself had been a witch hunt. This tendency to see inquisitions in politically inconvenient investigations was an omen of political polarization to come.

    The archetypal twentieth-century inquisition arose from the ashes of the Hiss case. One politician who profited from the case was Richard Nixon, Chambers’ friend and defender. Eisenhower chose him as his Vice President in 1952 in part because of Nixon’s barbed anti-communism. Another politician observing the Hiss case closely was Senator Joseph McCarthy of Wisconsin, who possessed most of the Grand Inquisitor’s cynicism and only some of his cunning. McCarthy was less moved by the Cold War and by the facts of communism, American or otherwise, than by the political opportunities that he saw in anti-communism. Ideology as such was not really McCarthy’s forte — not to the degree that personal ambition was. He was skilled in the relentless application of power, updating the practice of inquisition for the age of mass media. He wrapped the techniques of character assassination in the garb of legal and senatorial procedure. McCarthy was and remains the virtuoso inquisitor of American history.

    McCarthy founded his harassment on fear and guilt and gullibility. The Cold War’s unclear borders and its reliance on espionage raised punishing questions of friend and foe. McCarthy presumed enmity and, having presumed it, he found enemies. Where there was no actual subversion, it was invented — by projecting the Cold War’s mood back onto the 1930s, when an affiliation with communism could mean so many things. McCarthy compounded the guilt of his victims by prodding them to “name names.” An inquisition inquires: it must ask or, better yet, interrogate. But interrogation can take place in different ways, in different spirits; not all questions that take the form of asking for the truth actually want the truth. Even when it inquires, an inquisition is not the same thing as an inquiry. A person publicly interrogated is made culpable by the sheer fact of having been interrogated. Though McCarthy’s fans enjoyed the television show that the Senator put before them, they also took his words at face value. McCarthy’s fictions left their mark on reality. 

    For McCarthy, generating fear was indispensable. It was not he who fired people. He was holed up in the Senate. He let the fear ripple out across the country, from the State Department to Hollywood. Guilt by association ran rampant. Treason committed by communists, which did occur in the 1930s and 1940s, was not the only crime about which investigations were launched. Certain books were declared treasonous; some were removed from the libraries of American embassies. Works of art by those implicated in the Red Decade could be treasonous too, and for McCarthy and his supporters there was no way a person, once accused, could be exonerated. Naming names did not expiate anybody. To the contrary: it transformed public into private guilt, while guilt by association made fear ubiquitous, a virus for which there was no vaccine. People got fired mostly because their employers were afraid. As often happens, the institutions buckled.

    The Salem witch trials and McCarthyism met in 1953 in Arthur Miller’s play, The Crucible. Miller’s parable challenged the modernity of McCarthyism. In the year of our Lord 1953, the salience of 1692 was supposed to be shocking. So too was the religiosity of the Puritan colony, which was not impossibly distant from Eisenhower’s America. The Puritans’ fanaticism foreshadowed McCarthyism. It was the all too familiar precedent, the resurrected template. The Crucible attributed a power-hungry, manipulative darkness to the divines of seventeenth-century Salem, whose purpose (in the play) was transparently McCarthy’s purpose — to deceive, to destroy, to terrify, to ruin, to control. Less grandly, Miller developed a political analogy in The Crucible. Like McCarthyism, Puritan justice was conservative in Miller’s eyes and as such it was dour, rigid, intolerant, and hierarchical.

    Miller was himself a political progressive. McCarthy’s war on progressives allowed Miller to adorn the play’s hero, John Proctor, with the attributes of a liberal — as Miller understood these attributes. Proctor is a handsome “farmer in his middle thirties,” one who “had a biting way with hypocrites… even-tempered, and not easily led.” Proctor’s Enlightenment courage, his Voltaire-inflected liberalism, inspires sympathy. A moderate churchgoer, he struggles with his having committed adultery. He cannot stop the trial from running its course. He will not name names; at the play’s end he rises up, condemns the hypocrisy that is closing in on him, and is put to death. He asserts his freedom of conscience, which is his honor — “Because it is my name!” Proctor’s freedom is to not go along with injustice and, by not going along, to speak the truth. He robs an illiberal hypocrisy of its ultimate victory.

    Like John Proctor in The Crucible, Bill Clinton felt persecuted in his second term as president. Clinton was yet another reader of Arthur Koestler, and he, too, likened his interrogation at the hands of Kenneth Starr to scenes from Darkness at Noon. There were severe limitations to the analogy, of course. In its obsessive, self-righteous cadences, in its prurient interest in private matters, the Starr report certainly belongs to the literature of inquisitions, but it is hard to see the “trial of Bill Clinton” as a bona fide inquisition. Inquisitions are group phenomena. They are rituals of strength and weakness. They intentionally cross the lines of guilt and innocence, since doing so enhances their scope and spreads their message. Clinton, a sitting president, was guilty of a sexual liaison with an intern and of lying about it. He was too compromised and unique for his troubles to reach the level of an inquisition. He also stayed in office and did not fall very far from grace among his supporters. When Hillary Clinton ran for president in 2016, memories of the Clinton-Lewinsky affair resurfaced without becoming a major stumbling block for Democratic voters.

    The contemporary cycle of American inquisitions began after Clinton’s presidency. It was something new in the annals of inquisition, and not because people had changed. What had
    changed in our time was technology, and with it the public sphere. Social media embodied a confluence of novelties. Opinions could be generated in new ways, and reputations and personal histories were publicly available in new ways. Whereas in the past a letter to the editor would have to be accepted for publication, making the press the key arbiter of the public domain, social media eliminated oversight. This gave opinions the speed of lightning, liberating disclosures and emotions that in the past were mostly confined to private conversation or to the proverbial grapevine. There was no time for introspection or a second look. The public sphere was accelerated and democratized as it had never been before.

    The public sphere’s democratization had its bright side. The capacity of the powerful to suppress awkward or awful information about themselves diminished abruptly. Individuals and groups acquired access to a public sphere — mediated though it was by corporations — that could not be controlled. This assisted in the exposure of crimes, sometimes in the form of information and testimony, sometimes through photographs, and sometimes, as the technology got better, through video. It was not until the second decade of our century that we began to understand just how manipulable the public sphere had become. In the twenty-first century’s first decade, social media could be the hoped-for vehicle of democratic progress, holding the high and mighty to account. What obtained for the United States obtained even more dramatically outside of the United States. Authoritarianism had met its nemesis in social media. Was that not the lesson of Tahrir Square in 2011 and of the Maidan in 2014?

    But democracy also met its nemesis in social media, and not only because authoritarian governments — in Russia, China, and elsewhere — had learned to use them for their own ends. The dark side of the public sphere’s democratization lay in its canonization of aggression. Expressing aggression was suddenly effortless and seemingly cost-free for the many who expressed it. Social-media aggression alternates between the random and the targeted, the light-hearted and the deadly, but it is anything but a trivial aspect of American political culture. In 2014, feminist critics of misogynist video games were not just argued against on social media. They were subjected to odious attacks made in the most brutal language, and to threats of violence that may or may not have been purely rhetorical. This aggression was a grotesque comedy for its advocates and a terrifying ordeal of intimidation for its victims — video-game posturing intended to silence voices and thus to dominate the public sphere. But ugly as they were, unprecedented as they were, the acts of aggression did not backfire. There was no mass revulsion against them. They went largely unchecked.

    “Gamergate” prefigured the miserable election of 2016, during which several long-term trends converged. One was the collision between politics and social media, without which Trump could not have become president. He channeled the natural bellicosity of social media, its aura of unreality, its malicious humor, its absence of restraint, its invitations to violence. Another development was a metastasizing factionalism, for which Trump and his critics were at fault, a zero-sum attitude toward politics that opened an abyss between the political parties, between Left and Right, between the signifiers of political friendship and the telltale attributes of a political foe. The election of 2016, its rough texture followed by its revolutionary outcome, ushered in a new inquisitorial age. Enmities encourage inquisitions.

    The record of recent inquisitions reveals an interesting asymmetry. Trump’s victory in 2016 gave him immense political power. He never accepted the separation of powers and constantly encroached on the independence of the judiciary. With some limits, he had the power of the state behind him. The Democratic Party did not disappear under Trump: it regained the House of Representatives in the midterm elections and impeached Trump twice. “The resistance” found compensation for four years of exile from political power in the awesome cultural power linked in one way or another to progressive sentiment. Under Trump, the consequential inquisitions on the Right relied on state power, while the consequential inquisitions on the Left relied on cultural power. It has been an era rich in progressive inquisitions.

    The story of Marie Yovanovitch is a useful example. As Ambassador to Ukraine, she found herself in the middle of Trump administration schemes to find or to fabricate damaging material on Hunter Biden, Joe Biden’s son, who has some business connections to Ukraine. Ambassador Yovanovitch did not comply. Phony accusations of her political bias circulated in the press. Social media piled on. The ambassador was recalled from Kyiv as a response to baseless accusations. The Trump administration never lifted a finger to clear her reputation for the obvious reason that it was the Trump administration that was besmirching her reputation. As inquisitions go, it could have been worse, and given the factionalism of American politics Ambassador Yovanovitch’s persecution by Trump made her an overnight hero on the other side of the political spectrum. Had Trump won a second term, government-led inquisitions would have come thick and fast.

    The practice of twenty-first-century inquisitions has found a congenial home on the Right. A perceived transgression, and at times a real transgression, is the starting point. Social media cacophonously cries out for justice. The pressure mounts, and an institution is asked to take action — usually by firing someone. Suffice it to say that these procedures have little to do with a courtroom trial. They have no prior rules; they do not require anything remotely resembling due process; the accused is not necessarily given his or her chance to mount a defense. An example here is Will Wilkinson, a vice-president at the Niskanen Center who sent out a crude tweet in January 2021 and was fired after a burst of right-wing anger on social media and in the press. The anger entailed a misreading of Wilkinson’s tweet. His poor taste was misconstrued as an incitement to violence. The misreading is a predictable, almost a necessary ingredient of such frenzies. The perplexities of evidence are irrelevant to the inquisitorial mind.

    The same inquisitorial technique is familiar on the Left as well. Its popularity owes something to the Left’s special agony in the Trump years, when a viciously coarse and anti-progressive man was in the White House while the majority of the country’s elite cultural institutions were becoming more overtly progressive. Those institutions found solace for the loss of political power in the exercise of social and cultural power. They went after those who deviated too egregiously from progressive etiquette. Whether the offense was real or perceived was often beside the point, as were in many cases the intentions of the offender. Motive is the concern of trials. Criminal law revolves around intent. Inquisitions, by contrast, presume motive and intent, and look only for confirmation and satisfaction. Again, due process, which used to be a pillar of the liberal worldview, was nowhere to be found. The effect of these inquisitions has been a climate of fear, a growing conformity of speech and behavior, a resigned acceptance of a bullying authority.

    The most extraordinary inquisition conducted by progressives in the Trump era was of Trump himself. Owing to the Trump campaign’s bizarre ties to the Russian government, and to Trump’s egregious comments about the excellence of Vladimir Putin, and of course to the fact that Trump was Trump and guilty by definition in the eyes of his enemies, he was found guilty of being a Russian asset. This indeed may have been the case, but the evidence — even after the exhaustive Mueller investigation, after countless books on the topic, after round after round of first-rate investigative journalism — is still not there. In the inversion of a courtroom trial on which inquisitions rely, Trump was guilty from the outset. A thousand data points were gathered into a narrative repeated for months by journalists and experts on CNN, MSNBC, and other platforms. Social media added to the ministrations of this exuberant jury. Trump’s nods to Salem and to his being enmeshed in a latter-day witch hunt were not altogether wild. His baroque hypocrisy was that he was energetically engaged in witch hunts of his own.

    For those convinced that Trump was a Russian spy, evidence could easily be subordinated to optics. Senator McCarthy had operated on a similar principle, exploiting a populist suspicion of the diplomats in striped pants, the fancy kind of people likely to join a treasonous conspiracy. Trump made matters worse by himself employing the inquisitor’s tactics — against government employees such as Ambassador Yovanovitch and against whichever celebrity or athlete or journalist or politician (Republican or Democrat) annoyed him. Much of his campaign against Hillary Clinton ran on guilt by association: she was a Clinton, a globalist, a feminist, a liberal, and therefore not to be trusted with the presidency. Or, better yet, she was all of these things and should therefore be locked up. Eager to scorch the political earth, Trump could be seen as deserving an inquisition. Live by inquisition, die by inquisition.

    While an inquisitorial spirit was proliferating on the Left and the Right, many of the people who fell within its sights were not household names. Beginning in earnest in 2016, progressive institutions proved receptive to inquisitions of lesser-known and low-profile personnel as well. These institutions were sensitive to the verdict of social-media campaigns, less interested in individual cases, and more alert to the perception of wrongdoing than to the complexities and obscurities of lived reality. They displayed little patience, critical thinking, fairness, or courage. It is typical of inquisitions that those who instigate them are at ease with the fear they unleash. Fear is their tool. The fear of a potential misstep that many now feel at progressive institutions preemptively internalizes the inquisitor’s tactics.

    A striking case study is Smith College in Northampton, Massachusetts, a hundred miles or so from Salem. In 2018, a black student at Smith, off on her own and eating lunch, was approached by a janitor and a campus police officer. She was asked what she was doing, as reported in a detailed New York Times article. The student later went on Facebook to describe her distress. News articles projected the student’s side of the story — that she had been discriminated against — though the college looked into the incident and found that the student was eating in an empty dorm. By approaching her, the unarmed officer and janitor had been following protocol. There was no evidence that they were motivated by prejudice. The incident was most likely a misunderstanding on the student’s part.

    Knowing what it knew, the college administration stayed silent when photographs, names, and email addresses of the Smith employees involved in the incident were made public. Accusations of guilt even touched an employee who was not at work at the time. Death threats, whispering campaigns, and public humiliation followed, while the college did little to defend its employees, offering no proof of their guilt but implying it because the student had perceived malign intent in their actions. Their punishment, of course, was not execution. It was not getting fired outright, although one employee was eventually furloughed and had trouble finding another job because the accusations of racism were so well known in the Northampton community; her reputation has been destroyed. The punishment was the repetition of untruths by those in positions of authority and the requirement that the employees undergo anti-racism and intersectionality training.

    The willful misreading and misinterpretation on display at Smith College bore the imprint of an inquisition. Beyond the fear shadowing the accused, beyond the social-media moralizing and the stigmatization at will, this story of inquisition speaks to the enormous power of these public abuses. The handful of poorly paid employees who fell into the inquisition’s net were not a threat to the college or the community. They sought out no controversy. The power and the suffering brought down on them served to control the narrative, which is that Smith College is a valiantly progressive institution awash in a sea of structural racism and doing its best to right the age-old wrongs of American society. The employees’ humiliation was necessary in the scheme of things. They had gotten in the way of a popular and righteous narrative, of a tempest of virtue.

    Freedom of conscience is not protected by the Constitution. It is not equivalent to religious freedom or to freedom of speech. Religious freedom intersects with freedom of conscience in the phrase “conscientious objector,” someone who for religious reasons cannot fight in a war. This status accords civic space and moral prestige to conscience. It is generous in its assumption that people are not uniform, that their conscience might indicate something rare about them and something inviolable. A majority enlists. A small minority conscientiously objects. Freedom of speech is similar. It is by definition the freedom to lie and to offend and to criticize, the freedom to explore life in all its heterodox constructions. No established church in religion, no government restrictions on speech (with some qualifications) — more than anything these constitutional rights are protections. American citizens cannot be forced to attend any one church. They cannot be told by the government what a newspaper can publish or what a scholar can write. These are their rights. 

    Though it can be violated, freedom of conscience cannot be legally protected. It is a loose set of principles or ideas. One is the freedom to choose, which is impossible without the freedom to err. When choice becomes crime — murder, theft, rape — freedom of conscience is abrogated. Short of crime, though, freedom of conscience deserves a wide latitude. Even when choice tips over into crime, freedom of conscience is not erased forever. It is implicit in the criminal’s recovery and in the possibility of moral transformation: one might still recognize a crime, feel remorse for it, and try to change one’s ways. Freedom of conscience is also individual. The state does not set its terms. No church can guarantee it. To survive, it must be respected as something that not everyone seeks or finds to the same degree. Freedom of conscience enables moral lives that are autonomous, that allow people — making mistakes — to choose the good as they see it and the good that is not imposed, not homogeneous, not obvious, the good that is idiosyncratic and non-conforming, the good that is always a work in progress.

    Dostoevsky’s Grand Inquisitor focuses with ferocity on freedom of conscience. He relieves the people over whom he rules of their freedom, knowing that freedom of conscience carries with it ordeals of reflection and responsibility: choosing the good comes naturally to no one, and is at best a fleeting triumph over one’s internal limitations. Having freed his subjects of freedom, the Grand Inquisitor rewards them with moral uniformity: his church apportions dogmas, happily adjudicating all aspects of behavior and winning for itself immense power along the way. In this the Grand Inquisitor sees a version of happiness: obedience, authority, a narrative handed down from above, so that people can calmly go about their business. They are safe now. His inquisitions deliver them from the temptation of choice and from the onerous inner life in which conscience either flourishes or fails. By burning people alive, his inquisitions regulate conscience — not the consciences of the dehumanized victims but the consciences of the weak and fascinated onlookers. 

    If freedom of conscience cannot be legislated, how can it be defended? Its importance must first be acknowledged. Inquisitions translate people into cardboard enemies. They script them, they reduce them and diminish them and fill in all the blanks, meeting the psychological needs of their audience. Hence the never-ending political utility of inquisitions: they satisfy large numbers of people. They are populist theater. As long as our current factionalism reigns, we will be stuck with inquisitions. When you have raised the cost of expressing an unpopular opinion, when you lead individuals to censor themselves, when dissent (either for or against power) can result in the loss of one’s livelihood and social acceptance, then you have violated the freedom of conscience. Conversely, honoring another person’s freedom of conscience constrains the extra-legal machinery of ascribing guilt, of shunning. It argues for a light touch, for not going all in and for being especially vigilant about the points at which public shaming and the will to power converge. Freedom of conscience elevates the ideal of a merciful public sphere, an uphill battle in our unforgiving public sphere, addled as it is by the social-media furies and by incentivized sensationalism. Morte tua vita mia (your death, my life) is the moral code of the internet.

    The recent travails of Smith College illustrate the menace of a moral landscape without freedom of conscience. To the college administration, the employees accused of harassing a student were so unfree in conscience as to be almost without agency. Since they were regarded as unfree, as pillars of structural racism until proven otherwise, their actions and their intentions had no bearing on their destiny. Guilty or not in practice, they deserved their public shaming in theory. The college administration also felt justified in tolerating a public inquisition because it was certain of its own enlightened attitudes. The Grand Inquisitor would have approved. He is not, in his own view, a man who causes harm. “We shall triumph,” the Grand Inquisitor says in The Brothers Karamazov, “and shall be Caesars, and then we shall plan the universal happiness of man.” His hold on power is so firm because his policing of public morality leads to happiness. And what is a bit of collateral damage in the vast enterprise of bettering humanity?

    Precisely because freedom of conscience is not a right and cannot be a right, it is precarious. (“They have brought their burden to us and laid it humbly at our feet,” the Grand Inquisitor proudly declares.) No ACLU can be endowed in the name of this pre-political virtue. Its preservation — or its evisceration — will come from the culture, the climate of opinion, the spirit of laws, and from the education that art and philosophy confer. Repressive societies have amassed the richest literature of conscience. They make heroes and martyrs: while they exact a cruel price, and sometimes the ultimate price, from those who say no, they transform Václav Havel and Nelson Mandela and Sophie Scholl and Aleksandr Solzhenitsyn and Liu Xiaobo into great historical figures, whose hands may have been tied but whose conscience was free. But less repressive societies, and even some open societies, too readily congratulate themselves in these matters. They may not make martyrs, but they sometimes forget the labyrinthine lessons of moral freedom, thus unlearning the balancing acts of a culture that can thwart inquisitions big and small, of a culture that refuses to speak the final word about the guilt or the misdeeds of other people, even of criminals. The scales of justice can easily be melted down and repurposed as the crucibles of endless recrimination. Let us fight inquisition with inquiry. Even if there will always be witch hunts, there will never be witches. 

    Staple Lady

    Next time her skull is sliced open,
    she must have a mind limber as rubber,
    bending to the pain. Under the bright lights
    of the icy theater she will melt, allowing the saw’s buzz
    to fade into the sound of the surgeon entering
    her interior, surveying the field of tumors for the bad one.

    When he finds it, there will be no escaping
    his blade. She will hear him hack through her anesthetic fog
    and scrape at her numb meninges wall, vanquishing
    the invader. Perhaps she is only dreaming,
    she thinks: when she awakens she will be
    at home, puzzled but refreshed

    from this deeply troubled sleep. But then
    she feels the bone door closed and stapled
    shut, cancelling her delusion. Later she is told
    that the enemy is gone but his colluders, claiming
    innocence, remain. A new vigil begins as she watches
    and waits for them to regroup, organize and grow.

    She confesses that she loves her staples. Furtively,
    she caresses them throughout the day. They are her secret
    cranial adornment. Under her hair, in all their metallic glamor,
    they are hers alone to enjoy. Daily, she attends to them holding
    closed her incision, tenderly washing and polishing
    them so they shine like a zipper, ready for the grab.

    Meanwhile, inside, the humors rise and fall like the tide, sway
    to the east with the wind, hold fast against
    the western torment brewing. Above, the wise sagittal
    sinus keeps things churning, as the heart pumps
    furiously below and the mind—her determined mind—keeps
    flexible but centered on the task of healing and staying healed.

    Gingerly, she peers through her two
    still functioning eyes, the skull’s port-hole access
    to the sea, and beyond, to the healing world
    for whom she will endure still more incursions, stay
    supple and ready, if only
    in the end she can stay a while.
    To the god who does not exist she prays—teach me to ride
    the storm safely, past the fatal depths, to the receding shore.