The Poverty of Catholic Intellectual Life

    1

    In the middle of August in 1818, some three thousand five hundred Methodists descended on a farm in Washington County, Maryland, for days of prayer and fellowship. Their lush surroundings seemed to quiver in the swelter of a mid-Atlantic summer, to which the believers added the fever of faith. Men and women, white and black, freedmen and slaves, they were united by gospel zeal. There was only one hiccup: the scheduled preacher was indisposed and nowhere to be found.

     

    The anxious crowd turned to the presiding elder, a convert to Methodism from Pennsylvania Dutch country named Jacob Gruber, who accepted the impromptu preaching gig as a matter of ecclesial duty. His sermon began, in the customary style, with a reading from Scripture: “Righteousness exalteth a nation, but sin is a reproach to any people” (Proverbs 14:34). After explaining this verse from a theological perspective, Gruber ventured to apply it to the moral conditions of the American republic at the dawn of the nineteenth century. How did the United States measure up against this biblical standard?

     

    Not very well at all. America, Gruber charged, was guilty of “intemperance” and “profaneness.” But worst of all was the “national sin” of “slavery and oppression.” Americans espoused “self-evident truths, that all men are created equal, and have unalienable rights,” even as they also kept men, women, and children in bondage. “Is it not a reproach to a man,” asked Gruber, “to hold articles of liberty and independence in one hand and a bloody whip in the other?” There were slaves as well as white opponents of slavery at the camp that day, and we may assume that they were fired up by Gruber’s jeremiad. 

     

    But there were also slaveholders among his hearers. This last group was not amused. Following their complaints, Gruber was charged with inciting rebellion and insurrection. Luckily for Gruber, he had the benefit of one of the ablest attorneys in Maryland, a forty-one-year-old former state lawmaker who also served as local counsel to an activist group that helped rescue Northern freedmen who were kidnapped and sold as slaves in the South. The case was tried before a jury in Frederick that included slaveholders and was presided over by judges who were all likewise slaveholders. Even so, Gruber’s lawyer offered a forceful defense of his client’s right to publicly voice revulsion at slavery. In his opening statement, the lawyer declared that 

     

    there is no law which forbids us to speak of slavery as we think of it. . . . Mr. Gruber did quote the language of our great act of national independence, and insisted on the principles contained in that venerated instrument. He did rebuke those masters who, in the exercise of power, are deaf to the calls of humanity; and he warned of the evils they might bring upon themselves. He did speak of those reptiles who live by trading in human flesh and enrich themselves by tearing the husband from the wife and the infant from the bosom of the mother.

    The lawyer went on to identify himself with the sentiments expressed in the sermon. “So far is [Gruber] from being an object of punishment,” that the lawyer himself would be “prepared to maintain the same principles and to use, if necessary, the same language here in the temple of justice.” The statement concluded with an unmistakable echo of Gruber’s sermon: that so long as slavery persisted in the United States, it remained a “blot on our national character.” Only if and when the detestable institution was abolished could Americans “point without a blush to the language held in the Declaration of Independence.” 

     

    Gruber was acquitted of all charges. His triumphant lawyer was none other than Roger Brooke Taney: radical Jacksonian, successor to John Marshall as chief justice of the Supreme Court, and author of the decisive opinion in Dred Scott v. Sandford. Alongside fellow Jacksonian Orestes Brownson, Taney was the most influential Catholic in American public life during the pre-Civil War period. In Dred Scott, he rendered an opinion defined by an unblinking legal originalism — the notion that the judge’s role is strictly limited to upholding the intentions of constitutional framers and lawmakers, heedless of larger moral concerns. Applying originalist methods, Taney discovered that Congress lacked the power to ban slavery under the Missouri Compromise and that African-Americans could not be recognized as citizens under the federal Constitution. His reasoning prompted his abolitionist critics to “go originalist” themselves, countering that the Constitution had to be decoded using the seeing stone of the Declaration. Put another way, Taney set in train a dynamic in American jurisprudence that persists to this day. 

     

    What do American Catholics make of Taney today? What does he represent to us? For most, Taney is occluded by the fog of historical amnesia that afflicts Americans of every creed. If he is remembered at all, it is as the notorious author of Dred Scott — one of those figures whose name and face are fast being removed from the public square amid our ongoing racial reckoning. Many of the chief justice’s contemporaries would have approved of this fate for “The Unjust Judge” (the title of an anonymous Republican pamphlet, published upon his death, that condemned Taney as a second Pilate). Taney ended his life and career attempting to foist slavery on the whole nation, prompting fears that markets for bondsmen would soon crop up in Northern cities. His evil decision sealed the inevitability of the Civil War and hastened the conflict’s arrival. “History,” concluded one abolitionist paper, “will expose him to eternal scorn in the pillory she has set up for infamous judges.” Speaking against a measure to install a bust of the late chief justice at the Supreme Court, Senator Charles Sumner of Massachusetts fumed that “the name of Taney is to be hooted down the page of history.” An abolitionist ally of Sumner’s, Benjamin Wade of Ohio, said he would sooner spend two thousand dollars to hang Taney’s effigy than appropriate half that amount from the public fisc for a bust of the man.

     

    Modern American institutions should be excused for declining to memorialize a figure like Taney. What is inexcusable is the contemptuous indifference and incuriosity of much of the orthodox Catholic intellectual class not just toward Taney and figures like him, but toward almost the entirety of the American tradition, in all its glories and all its flaws: the great struggle to preserve authentic human freedom and dignity under industrial conditions; to promote harmony in a culturally and religiously divided nation; to balance competing and sometimes conflicting regional, sectional, and class interests; and to uphold the common good — all, crucially, within a democratic frame.

     

    This profound alienation is, in part, an understandable reaction against the progressive extremism of recent years, which has left orthodox and traditionalist Catholics feeling like “strangers in a strange land.” But it is also a consequence of an anti-historical and deeply un-Catholic temptation to treat anything flowing from modernity as a priori suspect. The one has given rise to the other: a pitiless mode of progressivism, hellbent on marginalizing the public claims of all traditional religion and the Church’s especially, has triggered a sort of intellectual gag reflex. I have certainly felt it, and sometimes vented it. Anyone who knows his way around the traditionalist Catholic lifeworld knows the reflex: Who cares, really, what American political actors, Catholic or otherwise, have done through the ages? The whole order, the whole regime, is corrupt and broken.

     

    Whatever the sources, the results are tragic: highbrow Catholic periodicals in which you will not find a single reference to Lincoln, let alone Taney; boutique Catholic colleges that resemble nothing so much as Hindu Kush madrassas, where the students can mechanically regurgitate this or that section of the Summa but could not tell you the first thing about, say, the Catholic meaning of the New Deal; a saccharine aesthetic sensibility, part Tolkien and part Norman Rockwell, that yearns for the ephemeral forms of the past rather than grappling with the present in the light of the eternal; worst, a romantic politics that, owing to an obsession with purity, can neither meaningfully contribute to democratic reform nor help renew what Arthur Schlesinger Jr. felicitously called the “vital center” of American society: the quest for a decent order in a vast and continental nation, uniting diverse groups not in spite of, but because of, their differences. 

     

    All this, just when a dangerously polarized nation could desperately use that intellectual capaciousness and historical awareness, that spirit of universality, for which the Catholic tradition is justly renowned.

     

    The whole order, the whole regime, is corrupt. The Catholic critique of modern politics is formidable. It cannot be reduced, as Michael Walzer reduced it in these pages not too long ago, to a fanatical yearning for a repressive society bereft of “individual choice, legal and social equality, critical thinking, free speech, vigorous argument, [and] meaningful political engagement.” As if these ideals were not instantiated in various modes and circumstances under preliberal regimes; or as if actually existing liberal democracies have always and everywhere upheld them, heedless of other concerns, such as solidarity, social cohesion, or simple wartime necessity.

     

    The tension arises also at a much deeper level: namely, the metaphysical. The classical and Christian tradition holds that every agent acts for an end (or range of ends). It is the examination of the ends or final causes of things that renders the world truly legible to us. Most things in nature, from the lowliest shrub to astronomical objects, act for their respective ends unconsciously. But human beings’ final end — to live in happiness — is subject to our own choices. Those choices are, in turn, conditioned by the political communities we naturally form. It follows that a good government is one that uses its coercive authority to habituate good citizens, whose choices fulfill our social nature, rather than derail us, unhappily, toward antisocial ends. 

     

    Government, in this telling, is not a necessary evil, but an expression of our social and political nature. Government is what we naturally do for the sake of those goods that only the whole community can secure, and that are not diminished by being shared: common goods. Justice, peace, and a decent public order are among the bedrock common goods, though these days, protection of the natural environment — our common home — supplies a more concrete example, not to mention an urgent priority. And just as government is not a tragedy, the common good is not a “collectivist” imposition on the individual. Rather, it comprehends and transcends the good of each individual as an individual. We become more social, more fully human, the more we partake in and contribute to the common good of the whole. 

     

    The Church took up this classical account of politics as its own, giving birth to what might be called a Catholic political rationality. In doing so, sages such as Augustine and Aquinas made explicit its spiritual implications. If there be an unmoved mover or absolute perfection in which all others participate, as the unaided reason of the Greek metaphysicians had deduced, then true happiness lies in communion with this ultimate wellspring of reality — with the God who has definitively revealed himself, first at Sinai and then even more intimately in Jesus of Nazareth. 

     

    The eternal happiness of the immortal soul is thus the final common good of individuals and of the community. It is the summum bonum, the highest good. Men and women’s status as rational, ensouled creatures thus cannot be partitioned off from how we organize our politics. To even attempt to do so is itself to stake a spiritual claim, one with profound ramifications, since politics is architectonic with respect to all other human activities (to repeat the well-known Aristotelian formula), and since the law never ceases to teach, for good or ill.

     

    This final step in the argument is where things get hairy, since it turns on revealed truths to which Catholics give their assent and adherents of other belief systems do not. The ideal — of properly ordering, or “integrating” if you will, the temporal and spiritual spheres — has never been abrogated by the Church, not even by the Second Vatican Council in the 1960s. Yet what this proper ordering should look like as a matter of policy has clearly shifted in the mind of the Church since the council and in the teaching of recent popes. A reversion to the specific legal forms and practices of, say, King Louis IX or even Pope Pius IX is unimaginable. It would be unimaginably cruel. “This Vatican Council declares that the human person has a right to religious freedom.” It is among the most unequivocal statements ever to be etched in a Holy See document (Dignitatis Humanae). True, religious freedom, like all rights, must be circumscribed by the common good. And a “confessional state,” under the right circumstances and with due respect for religious freedom, is not ruled out. But if the Roman pontiff isn’t running around demanding confessional states in the twenty-first century, then a lay Catholic writer such as yours truly would be wise similarly to demur. 

     

    As vexing as disagreements over the scope of the Church’s public authority have been, the basic metaphysical rupture is of far greater practical import today. The revolt against final causes unsettled the whole classical picture of an orderly cosmos whose deepest moral structures are discernible to human reason; and whose elements, from the lowest ball of lint to the angels in heaven, are rationally linked together “like a rosary with its Paters,” to borrow an image from the French Thomist philosopher A.G. Sertillanges. The anti-metaphysical revolt lies at the root of orthodox Catholics’ alienation from modern polities, and American order especially, which has lately reached a crisis point.

     

    As Walzer correctly hints, the revolt against metaphysics was launched by Luther & Co. for reasons that had nothing to do with political liberalism. Rather, the Reformers accused Rome of having polluted the faith of the Bible by deploying pagan categories to explain it. (In this sense, the Reformation was a special instance of the fundamentalist “biblicism” that had already erupted as early as the thirteenth century in the violent reaction of some Latins to the Muslim recovery of Aristotle.) Still, it was Hobbes and his progeny who brought the revolt to a stark conclusion, ushering in the modern. “There is no finis ultimus,” Hobbes declared in Leviathan, “nor summum bonum as is spoken of in the books of the old moral philosophers.”

     

    Ditch the highest good, and you also sweep away the common good, classically understood. The whole analogical edifice crumbles. What are human beings? Little more than selfish brutes, thrown into a brutish world and naturally at war with their fellows. Why do we form communities? Because we fear each other to death. The best politics can achieve is to let everyone maximize his self-interest, and hope that the public good emerges “spontaneously” out of this ceaseless clash of human atoms. Self-interest comes to dominate the moral horizon of the modern community; selfishness, once a vice, now supplies what one thinker has called its “low but solid ground.” Yet practices such as commercial surrogacy and suicide-by-doctor, not to mention the more humdrum tyrannies of the neoliberal model, leave us wondering just how low the ground can go and how solid it really is. Invoking natural law in response, Catholics find that our philosophical premises, which, like all serious natural-law reasoning, appeal to reason and not to revelation, are treated as nothing more than an expression of subjectivity and a private “faith-based” bias. 

     

    Historic Christianity had taught that “order is heaven’s first law,” as Sertillanges put it, that even angels govern each other in harmonious fashion. The new politics insisted that order was a fragile imposition on brute nature; that if men were angels, they would have no need of government. Over the years, I have heard liberals of various stripes earnestly profess that the classical account of politics is nothing more or less than a recipe for authoritarianism, even “totalitarianism.” This is madness. As George Will wrote in a wonderful little book published in 1983, the classical account of politics formed for millennia the “core consensus” of Western civilization, and not only Western civilization.

     

    Aristotle, Cicero, Augustine, and Aquinas believed that governments exist to promote the common good, not least by habituating citizens to be naturally social rather than unnaturally selfish. The great Jewish and Muslim sages agreed, even as they differed with their medieval Christian counterparts on many details. Confucius grasped at the same ideas. To frame these all as dark theorists of repression is no less silly and ahistorical than when The New York Times claims that preserving slavery was the primary object of the American Revolution. 

     

    The reductio ad absurdum of all this is treating all of past history as a sort of dystopia: a benighted land populated exclusively by tyrannical Catholic kings, vicious inquisitors, corrupt feudal lords, and other proto-totalitarians. Under this dispensation, as progressives discover ever more repressed subjects to emancipate, history-as-dystopia swallows even the relatively recent past, and former progressive heroes are condemned for having failed to anticipate later developments in progressive doctrine. The dawn of enlightened time must be shifted forward, closer and closer to our own day.

     

    The whole order, the whole regime, is corrupt and broken. That about sums up the purist moral instinct, the phobia of contamination, at the heart of, say, The 1619 Project or the precepts of Ibram X. Kendi. It is the instinct of a young liberal academic with a big public profile who once told me with a straight face that he thinks Aristotle was a “totalitarian.” But it is also, I worry, the instinct that increasingly animates the champions of Catholic political rationality, driving them to flights of intellectual fancy and various forms of escapism, and away from the vital center of American life. 

     

    The temptation — faced by the orthodox Catholic lifeworld as a whole, not just this or that faction — is to ignore the concrete development of the common good within American democracy. We face, in other words, a mirror image of the ahistorical tendency to frame the past as a dystopia. Only here, it is modernity, and American modernity in particular, that is all benighted. Meanwhile, the very real shortcomings of the classical worldview — not least, its comfort with slavery and “natural hierarchies” that could only be overcome by the democratic revolutions of the eighteenth and nineteenth centuries — are gently glossed over, if not elided entirely.

     

    There is a better way. It begins with taking notice that American democracy has, at its best, offered a decent approximation of Catholic political rationality: the drive to make men and women more fully social; to reconcile conflicting class interests by countervailing elite power from below; and to subject private activity, especially economic activity, to the political imperatives of the whole community. To overcome our misplaced Catholic alienation, then, we need to recover American Catholicism’s tactile sense for the warp and weft of American history: to detect patterns of reform and the common good that lead us beyond the dead-end of the current culture war.

     

    Doing so would liberate us from the phantoms of romantic politics, be it the fantasy of a retreat to some twee artificial Hobbiton or the dream of a heroically virtuous aristocracy. (Today that latter dream could only legitimate the predations of Silicon Valley tycoons, private-equity and hedge-fund oligarchs, and the like.) It may even begin to shorten the distance between orthodox Catholicism and the American center, to the mutual enrichment of both.

     

    To be clear, I have no brief here for the theory, often called “Whig Thomism” in Catholic circles, according to which modern liberal democracy and the American founding represent a natural blossoming of the classical and Christian tradition as embodied by Aquinas, improbably recast by proponents as the first Whig. There clearly was a rupture, and the philosophy and theology of Federalist 51 cannot easily be reconciled with Catholic political rationality. Nor am I suggesting that we chant the optimistic mantra, first voiced by American bishops in the early nineteenth century, that the Founders “built better than they knew”: meaning that the framers of the Constitution somehow (perhaps providentially) transcended their late-eighteenth-century intellectual limitations to generate a sound government; or that American order ended up working much better in practice than its theoretical underpinnings might have suggested. “Better than they knew” is a backhanded compliment where patriotism and reason demand sincere reverence for the Founders’ genius for practical statesmanship and constitution-building. This, even as we can critique how their bourgeois and planter-class interests warped their conceptions of liberty and justice — a task progressive historiography has carried out admirably, and exhaustively, since the days of Charles and Mary Beard and the early Richard Hofstadter.

     

    Such debates, over how Catholic or un-Catholic the Founding was, are finally as stale and unproductive as Founders-ism itself. Even the extreme anti-Founding side is engaged in Founders-ism (albeit of a negative variety): the attempt to reduce the American experience to the republic’s first statesmen, who fiercely disagreed among themselves on all sorts of issues, making it difficult to distill a single, univocal “American Idea” out of their sayings and doings.

    So what am I proposing? Simply this: that American Catholics must not lose sight of their own first premise, inscribed right there in the opening of the Nicomachean Ethics, that people naturally seek after the good — after happiness — even if they sometimes misapprehend what the genuine article entails. Widened to a social scale, it means that the quest for the common good didn’t grind to a halt with the publication of this or that book by Hobbes or Locke, nor with the rise of the modern liberal state and the American republic. 

     

    Whether or not it was called the common good, the American democratic tradition — especially in its Jacksonian, Progressive, and New Deal strains — has set out to make the republic more social and solidaristic, and less subject to self-interested and private (in the word’s literal, root sense of “idiotic”) passions. The protagonists of this story have acted within the concrete limits of any given historical conjuncture, not to mention their own limits as fallen human beings. The whole project demands from the Catholic intellectual what used to be called “critical patriotism”: a fiercely critical love, but a love all the same.

     

    Such a Catholic inquiry must begin with a consideration of the concrete fact of American actors, Catholic and otherwise, striving for the common good via the practice of democracy, especially economic democracy. 

    2

    Consider Roger Brooke Taney. In addition to being a world-historic bad guy, he was an economic reformer. At crucial moments as a member of Jackson’s Cabinet and later as the nation’s chief judicial officer, he insisted that government is responsible for ensuring the flourishing of the whole community, as opposed to the maximal autonomy of private corporate actors. In this respect, his story is illustrative of how democratic contestation functions as the locus of the American common good, drawing the best energies of even figures whom we otherwise (rightly) condemn for their failings.

     

    He was born in 1777, in Calvert County, in southern Maryland. Six generations earlier, the Taneys had arrived in the region as indentured servants. They had won their freedom through seven years of hard toil. Freed of bondage, the first Taney in Calvert became prosperous, even getting himself appointed county high sheriff. In short order, his descendants joined the local gentry. To ease their way socially, they converted to Catholicism, the area’s predominant faith. Yet the colony was soon overrun by migrating Puritans, who barred Catholics from holding public office, establishing their own schools, and celebrating the Mass outside their homes. 

     

    The Taneys had supported the revolution in 1776, not least in the hope that it might bring them greater religious liberty. The birth of the new republic otherwise barely touched them, at least initially. They continued to occupy the commanding heights in the area, from a majestic estate that overlooked the Patuxent River to the west, while to the south flowed Battle Creek — named after the English birthplace of the wife of the first colonial “commander,” Robert Brooke, another convert to Catholicism. It was a measure of the Taneys’ social rise, from their humble origins to the Maryland planter semi-aristocracy, that Roger’s father, Michael, had married a Brooke. Having begun their journey in the New World as indentured servants, the Taneys now owned seven hundred and twenty-eight acres of land, ten slaves, twenty-six cattle, and ten horses. 

     

    Michael Taney, Roger’s father, belonged to the establishment party, the Federalists, but he broke with them on important questions. He opposed property qualifications for voting, while favoring monetary easing to rescue struggling debtors. These stances were more befitting a follower of Jefferson than a disciple of Hamilton. Michael Taney, then, was a slave-holding democrat — a contradictory posture that we find replicated, a few decades later, in Jacksonian democracy, of which his infamous son was both a leader and a supreme exemplar.

     

    Under the rules of inheritance that in those days governed the fate of male children, Taney’s older brother was to take over the estate, while a younger son was expected to find his own way in the world as a professional, if he was the book-learning type. This was Roger’s good fortune, for the business of the estate soon soured: the Napoleonic Wars wrecked the international tobacco trade, and Maryland’s climate was ill-suited to other crops, such as wheat, cotton, and indigo, that dominated slave economies further south and west. The revolution had been a great spur to commercial boom, and that meant urbanization. In Maryland, the center of gravity shifted to towns such as Baltimore, while places such as Calvert County fell behind. It was for those urban power centers that Taney was destined. He graduated valedictorian at Dickinson College in Pennsylvania and went on to study law in Annapolis under a judge of the state Court of Appeals, and soon rose to the top of the Maryland bar, eventually becoming attorney general, with a stint in the state legislature as a Federalist before the party imploded.

     

    The post-revolutionary commercial boom decisively gave the upper hand to what might be called “market people”: coastal merchants and financiers, technologically empowered manufacturers, large Southern planters, and enterprising urban mechanics in the North who mastered the division of labor to proletarianize their fellows. Their rise came at the expense of “land people”: the numerical majority, the relatively equal yeomanry that formed the nation’s sturdy republican stock, Jefferson’s “chosen people of God.” The disaffection of the latter set the stage for a ferocious democratic backlash and the birth of American class politics.

     

    As he won entrée to the urban professional bourgeoisie, Taney didn’t leave behind his emotional commitment to the “land people” from whose ranks he thought he hailed. I say thought, because in reality the Taneys belonged to the rarefied planter-capitalist class of the Upper South, even if their fortunes had begun to wane. Still, as a result of this subjective sense of class dislocation, the future chief justice would remain acutely aware of the topsy-turvy, and the misery, to which Americans could be exposed in market society. It was predictable that he would rally to Andrew Jackson’s battle cry against finance capital generally and particularly against the Second Bank of the United States. 

     

    Congressionally chartered, the Bank of the United States functioned simultaneously as a depository for federal funds, a pseudo-central bank, and a profiteering private actor in the money market. Its biggest beneficiaries were market people, those who held “the money power,” in the coinage of Jackson’s senatorial ally and one-time pub-brawl belligerent Thomas Hart Benton. In the 1820s, Taney became a Democrat, and declared himself for “Jacksonism” and nothing else. Soon he found his way into Jackson’s Cabinet as attorney general in the heat of the Bank War. 

     

    In 1832, Jackson issued the most famous presidential veto of all time, barring the Bank’s charter from being renewed on the grounds that it made “the rich richer,” while oppressing “farmers, mechanics and laborers.” Taney was a principal drafter of the message, alongside the Kentucky publicist and Kitchen Cabinet stalwart Amos Kendall. The BUS counterpunched by tightening credit in an attempt to bring the Jackson administration to its knees. In response, Old Hickory tapped Taney — “the one who is with me in all points” — to oversee the removal of American taxpayer funds from the Bank’s coffers. It was Taney who, at the behest of a Baltimore crony named Thomas Ellicott, improvised the so-called pet-banking “experiment,” in which the federal deposits were gradually placed with select state banks.

     

    The Bank War was a lousy reform at best, and emblematic of American populism’s enduring flaws. Jackson had not contemplated an alternative to the BUS before restructuring the nation’s credit system on the fly; state banking was the gimcrack alternative improvised as a result of this lack of planning. As the unimpeachably scrupulous Taney was later to learn, to his utter mortification, his Quaker friend Ellicott was a fraudster who had dreamed up state banking as a way to rescue his own bank, which had been engaged in reckless speculation, and to fund further gambling of the kind. 

     

    The local and the parochial, it turned out, were no more immune to corruption than the large Northeastern institution; indeed, local cronyism was in some ways worse, since its baleful effects could more easily remain hidden. What followed was a depression, an orgy of wildcat banking, and decades of banking crises that comparable developed nations with more centralized banking systems would be spared. As Hofstadter noted, what had been needed was better regulation of the Bank. Jackson, however, only knew how to wallop and smash.

     

    What matters for our purposes, however, are less the policy details than the overarching concept. Taney was a sincere reformer, keenly aware of what the insurgent democracy was all about. At stake in the struggle, he declared at one point, had been nothing less than the preservation of popular sovereignty against a “moneyed power” that had contended “openly for the possession of the government.” This was about as crisp a definition of the Bank War’s meaning as any offered by those who prosecuted it. 

     

    Jacksonian opponents of the Bank considered it an abomination for self-government that there should be a private market actor that not only circumvented the imperatives of public authorities, but also used its immense resources to shape political outcomes to its designs. With the Bank defeated, no longer could a profiteering institution wield “its corrupting influence . . . its patronage greater than that of the Government — its power to embarrass the operations of the Government — and to influence elections,” as Taney had written in the heat of the war. Whatever else might be said about the Jacksonians, they had scored a decisive victory for the primacy of politics over capitalism. Politics, democracy, the well-being of the whole had to circumscribe and discipline economic forces. The Bank War, in sum, had been all about the common good. 

     

    Taney believed that market exchanges, especially where markets were created by state action, should be subject to the political give-and-take that characterized Americans’ other public endeavors; subject, too, to the imperatives of the political community. It was a principle that Taney would champion even more explicitly in some of his best rulings as chief justice of the U.S. Supreme Court, especially in the Charles River Bridge case of 1837. As Hofstadter commented, although the outcome of the Bank War was on the whole “negative,” the “struggle against corporate privileges which it symbolized was waged on a much wider front,” most notably in that case.

     

    The facts of the case are too arcane for a brief summary, but the upshot was that a private corporation — the Harvard corporation, as it happens — had been granted a charter to operate a bridge across the Charles River dividing Boston and Charlestown. Could the state legislature later grant a license to another corporation to build and operate a second bridge, to relieve traffic congestion on the first? Harvard argued that this was a violation of the exclusive contractual concessions that the state had made to it. The second charter, Harvard insisted, ran afoul of the constitutional prohibition against state laws abridging pre-existing contractual rights — one of the Hamiltonian system’s main defenses against the danger of democracy interfering with the market. Chief Justice Taney, writing for a Jacksonian majority, disagreed with Harvard. Again, the principles that underlay his thinking are more important, for our purposes, than the ruling’s immediate practical import for corporation law. “Any ambiguity” in contractual terms, Taney wrote in a famous passage,

     

    must operate against the adventurers and in favour of the public. . . . [This is because] the object and end of all government is to promote the happiness and prosperity of the community by which it is established. . . . A state ought never to be presumed to surrender this power, because, like the taxing power, the whole community have an interest in preserving it undiminished. . . . While the rights of private property are sacredly guarded, we must not forget that the community also have rights, and the happiness and well-being of every citizen depends on their faithful preservation.

     

    That is, the preservation of the state’s prerogative to act for the common good, even if that at times means curtailing private-property rights. Those are genuinely marvelous sentences. There are scarcely any more crystalline expressions of the classical account of politics in the entire American tradition. It is notable, too, that Taney went on to reason that no lawmaking body could possibly bind its own power to act for the common good by granting a privilege to a private actor at some earlier point in history.

     

    In the shadow cast by Dred Scott, praising Taney’s economic jurisprudence can feel a little like the old joke about whether Mrs. Lincoln enjoyed the play. But in fact the tragedy of Dred Scott lay in Taney’s violation of the principle that he had himself articulated in the course of the Bank War and again in the Charles River Bridge case. Dred Scott represented a moral catastrophe of epic proportions. It was also a failure to uphold common-good democracy: indeed, Taney’s common-good reasoning in that case could have served as a dissenting opinion in Dred Scott.

     

    Taney — the lawyer who had once called slavery “a blot on our national character,” who had stated that the Declaration of Independence would be a source of shame until slavery was abolished, and who early in life had manumitted his own bondsmen — ruled that Congress had lacked the authority to ban slavery in the northern lands of the Louisiana Purchase under the Missouri Compromise. He reasoned that Congress’s power under the Property Clause of the Constitution to make rules for all territories applied only to lands the United States held when the Constitution was framed in 1787, and not to territories subsequently acquired, such as through war or the Louisiana Purchase. In the Charles River case, he had insisted that a lawmaking body cannot possibly shackle itself at a particular moment in history in such a way that subsequent generations of lawmakers would be unable to act for the common good of the whole. In declaring the Missouri Compromise unconstitutional, however, he imposed just such a limitation on Congress.

     

    But Taney went further than that. In also passing judgment on Dred Scott’s standing to bring suit as a citizen of the United States, he denied the primacy of morality and political rationality that had characterized his reasoning in the Bank War and his Jacksonian rulings on economics. Men at the time of ratification, he pointed out, did not believe that black people were endowed with any rights that whites were obliged to respect. To be sure, the abolitionist press seized on that one sentence to suggest that Taney was expressing his own opinion regarding the moral status of black Americans. Yet the full quotation and reasoning do little to exculpate Taney for attempting to write into the Constitution, in perpetuity, the racist biases of the eighteenth and nineteenth centuries — prejudice that even enlightened slaveholders in the eighteenth century acknowledged to be just that.

     

    Taney ended his life an authentic villain, fully deserving his ignominious reputation, a fact made all the more painful by his brilliance and doggedness in defense of the economically downtrodden in other contexts. “The Unjust Judge,” the pamphlet anonymously published to celebrate Taney’s death, noted that “in religion, the Chief Justice was Roman Catholic.” And his own pope, Leo X, had “declared that ‘not the Christian religion only, but nature herself, cries out against the state of slavery.’” (Never have the words of a Roman pontiff been deployed to such devastating effect against the moral legacy of an American Catholic jurist.)

     

    Removing the “blot on our national character” and correcting Taney’s hideous error in Dred Scott would require the shedding of democratic blood. And future generations of democrats, including the Progressives and especially the New Dealers, would enact far more effective reforms than Taney’s cohort achieved, even as they drew inspiration from the Jacksonian example. Those looking for a Catholic exemplar — for a figure who confidently advanced Catholic political rationality within the American democratic tradition — need look no further than the Reverend John A. Ryan. The moral theologian and activist came to be known as “Monsignor New Deal” for his advocacy for a “living wage” (the phrase was the title, in 1906, of the first of his many books), health insurance for the indigent, the abolition of child labor, and stronger consumer laws, among other causes. In 1936 he vehemently challenged the racist and anti-Semitic populist Father Charles Coughlin and endorsed Franklin Delano Roosevelt for president. In his book Distributive Justice, first published in 1916 and revised as late as 1942, he insisted that economic policies must not be detached from ethical values and championed the right of workers to a larger share of the national wealth. He was both an originator and a popularizer of New Deal ideas, prompting Roosevelt to salute him in 1939 for promoting “the cause of social justice and the right of the individual to happiness through economic security.” In 1945, he delivered the benediction at Roosevelt’s last inauguration.

     

    What distinguished Ryan’s public presence was a humane realism about modern life that is sorely lacking among many of today’s “trads.” As the historian Arthur S. Meyer has noted, for example, Ryan in theory gave preference to Catholicism’s patriarchal conception of the living wage. But recognizing that economic necessity forced many American women, especially poor and working-class women, to enter the labor force, he didn’t pine for a romantic restoration of the patriarchal ideal. Instead, he called for the extension of living-wage and labor-union protections to working women. At a more fundamental level, he understood that under modern industrial conditions, social justice and class reconciliation could be accomplished not by mere exhortations to virtue targeted at the elites, but by power exerted from below and bolstered by state action — in a word, by democracy.

     

    To advance the common good today, to act (in the words of Matthew) as salt and light, the American Catholic intellectual must enter this drama, wrestling with its contradictions, sincerely celebrating its achievements, and, yes, scrutinizing its shortcomings in the light of the moral absolutes that we believe are inscribed in the hearts of all men and women. This, as opposed to striking a dogmatic, ahistorical posture and rendering judgment from a position of extreme idealism, one that gives rise to an unhealthy and philosophically indefensible revulsion for the nation and its traditions. Critical patriotism and a return to the American center — the vital center redux — should be our watchwords, and this implies, first and foremost, a recognition that American democracy is itself a most precious common good.

    after St Francis of Assisi

    Here goes; and there it went. It might stay gone.

    What next? Play faster with the quick and dead, with the tightened fist play looser:

    amplify the beggar in the chooser.

     

    Cursed are we who lop the tops off trees to find heat’s name is written in the wood;

    cursed are we who know it’s hard to save the world from everyone 

    who wants to save the world. You do have to be good.

    after Margaret Cropper

    Genesis, behold your progeny: 

    inventor, behold your inventory:

     

    protagonist, behold your agony: 

    window, the wind is in your eye:

     

    Capuchin, here’s your cappuccino: 

    tragedy, I’ve got your goat:

     

    and here I come

    O deathless mortgage, O unmanageable manifesto.

    Ready or not.

    Job 42:10–17

    Yesterday P. asked: “Do you think the children from Job’s second chance could actually be happy?”

                                     – Anna Kamieńska, A Nest of Quiet: A Notebook, translated by Clare Cavanagh

    But then amid the helplessness of Lives and corrugated sewage, underneath the heavens’

                     cold and hatchbacked tabernacle, absolute, at night and then in the tubercular dawn, the Man who had

                     been locked in Place, shocked by his loss of Face and Family, was loosed: and then the World donated to

                     him twice what had been gone. 

    His Children (whom he’d seen the fired pyres stripping of their nakedness and every woolly

                     talisman) came back, came bringing groceries: and they said, this is what a bad trip feels like, we were

                     never dead, you only thought we were: and though he had mislaid his Face in tumuli of boils, had

                     dropped his Eyes in lozenge-bottles crouched behind the ziggurats of shipping boxes at the docks,

                     screamed at Life’s fair unfairness, they beatified him, decorated him with Reassurances that tugged like

                     ugly gold hoops at his ears. 

    So in the End he was more blessed (which in some Tongues translates as wounded) than in

                     the Beginning: but he cried I said, I said, I know you’re as dead as the oxen the asses the camels the

                     sheep that the Mesopotamians carried away, in that Book. 

    And he blessed the World in turn because he feared to curse it.
    Blessed the mad black flowers crackling hotly in the Planet’s gradients of heart, the bed-mud 

                     of the Mersey their grey, gradual becoming; blessed the Bodies fished like banknotes from the throats of

                     archways; blessed the Names that passed like pills or golem spells under the drawbridge of his Tongue,

                     and his roarings that poured like the waters; blessed his Eye trapped now inside another cranky orbit,

                     and the broken hug of him ungrasping child and fatherless. 

    And any liberal, and always liberally worded, Words they said were only words: he still

                     missed what they said had not gone missing. 

    After This is After That, he said, and if this were a bad trip I would know it. And did not

                     escape the Feeling, angry as a tennis racquet, of his being made to serve. 

    You’re dead, you’re dead, he said, watching his children reproduce; and soon they too grew

                     to believe it. 

     

    Job 3:11–26

    To me moans came for food, my roars poured forth like drink.

    – John Berryman, “Job”

    “So why did my umbilicus, umbrella of the belly, not asphyxiate and fix me at my birth

                     and make my due my expiration date? 

    Why was I lapped in aprons, and not limbo’s fair-welled, farewell wave; 

                     why was I milk-fed, milk-toothed, given weight? 

    Better end here unborn: then I’d shut the hell up; 

                     then I would snooze all my alarm 

    at all the hedge-funds who so priveted and so deprived the world,

                     who drilled a black yet golden heaven from the deadened graves, 

    and all the highnesses who built a pyramid of buried complexes 

                     on pyramids of schemes. 

    Or why was I not canned like laughter or an unexpected baby, 

                     my metaphysic offal cured in sewage? 

    There the stranded heartbeat of the world’s unquickened by desiring, 

                     the tired sleep in forever. 

    There the mountains range, the sundials of wild granite, 

                     and sun sets like a dodgy jelly. 

    And the thimble and the Brunelleschi dome alike are there, alike, 

                     and corners are the only cornered things. 

     

    Why is a light given to who is darkness,

                     life to whose long life seems lifeless, 

    who, meeting the business end of time, if it returned his holy texts, 

                     would see in definition things defined, or finitude; 

    who dances on his own life’s line, his own grave’s square, 

                     in a garden teething white with plastic chairs? 

    Why is enlightenment a thing, when we are walled up in this faceless space 

                     where blindness is a kindness? 

    My daily bread and I are toast,

                     and hormones pip from the eyes as tears drip-drip from ice caps. 

    My agoraphobia gores me, my claustrophobia closes in,

                     and when I’m being oh-so-careful, the piano drops from nowhere on my head. 

    I’m not a laugh, but nor am I the strong and silent type.

                     I take no painkillers; how can I — I, who make a living of my pain?” 

    Wessobrunn Prayer

    Once, there were neither bottled-up fields nor bluebottled breeze;

    nor trill of pollen, tree nor hill to die on was there there (there, there):

    not yet our unseated adjustment of dust; no striking star, nor stroke of sun;

    nor did the moon light, like the grey, scaled nodule nodding off the dead end of a cigarette; 

                 nor was sea seen.

    No, nothing: neither loose nor bitter ends.

    Yet there was something sizing up that endlessness, some agency

    which advertised the heavens’ opening: our ice floes’ flow, our black and smeary snow above

                 the alps of steel production plants, our rivers’ scalp of fat; 

    and which said, “Before you were, I am.”

     

    Like that last phrase, you run, like blinding colors through the eyeless world

    and when the mind forgets itself, you’re there — where what is left to know is left to live. 

    Fine, hold me in your Holocene: give me a kicking; and the goods, 

    the martyrs with their hopscotch blood and nails as fragrant in their palms as cloves —

    a coat of your arms to weather the flustering, clusterbomb wind, which changes,

    and the tide of time which draws us from ourself and — as it takes time to keep time; it takes

                 one to know one; it takes — and which draws itself out. 

     

    The War in Ukraine and the Fate of Liberal Nationalism

    1

    If nationalism sounds like a dirty word, then Ukrainian nationalism has sounded even worse. In the imaginations of many, it is associated with extreme xenophobic violence. Even those who sympathize with Ukraine are not free from this image. Michael Ignatieff, for example, an eminent Western liberal intellectual, wrote shortly after visiting independent Ukraine: “I have reasons to take Ukraine seriously indeed. But, to be honest, I’m having trouble. Ukrainian independence conjures up images of embroidered peasant shirts, the nasal whine of ethnic instruments, phony Cossacks in cloaks and boots, nasty anti-Semites.” This stereotype is not totally groundless, and it has various roots. Indeed, xenophobic overtones can be found in one of the earliest formulations of Ukrainian identity, in an early modern Ukrainian folk song:

    There is nothing greater,

    no nothing better,

    than life in Ukraine!

    No Poles, no Jews,

    No Uniates either.  

    The funny thing is that a few hundred years later Ukrainians and Poles have managed to reconcile, Ukraine ranks among the most pro-Israel countries in the world, and Uniates — present-day Greek Catholics living mostly in western Ukraine — display the highest level of Ukrainian patriotism.

     

    The song makes no mention of Russians. At the time, in the early modern centuries, Russians were a remote and unfamiliar presence to most Ukrainians. And even later, when they became familiar, for a long time Russians did not feature in the common inventory of Ukraine’s historical enemies. That list comprised Poles, Jews, and Crimean Tatars. Now former enemies have turned into allies, and Russians are the ones who have launched a full-scale war on Ukraine.

     

    This radical transformation in Ukrainian identity can also be illustrated by a video taken in Kyiv during the first days of the Russian-Ukrainian war. It depicts Volodymyr Zelensky and his men standing in the courtyard of the presidential office in Kyiv. They were delivering a message to Ukraine and to the world: “All of us here are protecting the independence of our country.” Of the five people there, only two — Dmytro Shmyhal, the Prime Minister, and Mykhailo Podoliak, an advisor to the president’s office — are ethnic Ukrainians. Two others, Zelensky and Andriy Yermak, the head of his office, are of Jewish origin, and the fifth, David Arakhamia, is Georgian. One person missing from the video is Defense Minister Oleksii Reznikov. Like Zelensky and Yermak, he is also of Jewish origin. In September 2023, he was replaced by Rustem Umerov, a Crimean Tatar. Regardless of their different ethnic origins, all of them identify as Ukrainian. In short, they represent what is known as civic nationalism.

     

    We are living in a golden age of illiberal nationalism. We see it in countries as historically and geographically diverse as Hungary, India, and Brazil. Ukraine, however, seems to run against this lamentable global trend. In this sense, the Ukrainian situation, for all its hardships, is a source of good news. Its rejection of tribal and exclusivist nationalism in favor of an ethnically inclusive kind, the civic nationalism for which it is now fighting, is a remarkable development in an increasingly anti-democratic world. But to what extent is the Ukrainian case unique? And does it convey any hope for the future?

     

    In and of itself, nationalism is neither good nor bad. It is just another “ism” that emerged in the nineteenth century. According to the twentieth-century philosopher Ernest Gellner, who thought long and hard about the nature of nationalism, “nationalism is primarily a political principle, which holds that the political and the national unit should be congruent.” Or, as the nineteenth-century Italian nationalist Giuseppe Mazzini declared, “Every nation a state, only one state for the entire nation.” In other words, nationalism claims that a national state should be considered a constitutive norm in modern politics. And indeed it is: the main international institution today is called the United Nations, not the United Empires.

     

    Nationalism, of course, can take a wide array of forms. One of the most frequently debated questions is whether nationalism is compatible with liberalism. Hans Kohn, a German-Jewish historian and philosopher who emigrated to America and became one of the founders of the scholarly study of nationalism, claimed that “liberal nationalism” is not at all an oxymoron, and with other historians he documented the early alliance of national feelings with liberal principles, notably in the case of Mazzini. But he located liberal nationalism only within the Western civic traditions. Eastern Europe, in his opinion, was a domain of illiberal ethnic nationalism.

     

    The study of nationalism has advanced since Kohn’s day, and nowadays there is a consensus among historians that the dichotomy of “civic” versus “ethnic” nations is analytically inadequate. With few exceptions, ethnic nations contain within themselves numerous minorities, and civic nations are built around an ethnic core. So the correct question to ask is not whether to be a civic nation or an ethnic nation, but rather this: what are the values around which a civic nation is built?

     

    Since the very beginning, Ukrainian nationalism combined both ethnic and civic elements. Ukrainian identity is based on the Ukrainian Cossack myth. The Ukrainian national anthem claims that Ukrainians “are of Cossack kin.” Initially, there was nothing “national” about Cossackdom. It was a typical military organization that emerged on the frontier between the settled agrarian territories and the Eurasian steppes. The transformation of Ukrainian Cossacks into a core symbol of Ukrainian identity occurred in the sixteenth and seventeenth centuries within the realm of the Polish-Lithuanian Commonwealth. Though we are accustomed to viewing Ukrainian history in the shadow of Russia, this framing is anachronistic: historically speaking, Poland’s impact on Ukraine started earlier and lasted longer. It began with the annexation of western Ukrainian lands by Polish kings in 1349, extended to almost all the Ukrainian settled territories after the emergence of the Polish-Lithuanian Commonwealth in 1569, and remained strong even after the partitions of that state in 1772–1795.

     

    On the map of early modern Europe, the Polish-Lithuanian Commonwealth looks like an anomaly. In the first place, it was known for its extreme religious diversity. The Polish-Lithuanian Commonwealth was the only state where Western Christians and Eastern Christians lived together as two large religious communities. It was as a consequence of their intense encounters that Ukrainian identity emerged. Many Jews, too, expelled from the Catholic countries of Europe, found refuge in the Polish-Lithuanian Commonwealth. They were under the protection of the Polish king, who engaged them in the colonization of the rich Ukrainian lands on the southeastern frontier known as the Wild Fields.

     

    Moreover, the power of the king was very limited. As the proverb goes, he reigned but did not govern. The king was elected by local nobles (szlachta). Their exceedingly high numbers — this aristocracy comprised five to eight percent of the population, compared to one to two percent in other states — along with the scope of their privileges and their multiethnic composition, constitute yet another anomaly of that state. By and large, the Polish-Lithuanian Commonwealth was an early (and limited) model of the civic nation — if we understand the concept of the nation in the context of those times: a nation of nobility whose rights and privileges did not extend to other social groups.

     

    The nobles legitimized their privileged status by serving the Polish-Lithuanian Commonwealth with the sword. But the Ukrainian Cossacks did the same. They fought in the military campaigns of the Polish-Lithuanian Commonwealth and defended its borders from Tatars and Turks. By this token, the Cossacks could claim equal status in the polity. But the gentry jealously guarded their privileges. They viewed the Cossacks as a rebellious rabble who could not lay claim to equal dignity. Then, from the 1590s through the 1630s, the Commonwealth was rocked by uprisings sparked by the Cossacks’ dissatisfaction with their status. Their rebellions fell on favorable ground. The Commonwealth, after all, was known as “heaven for the nobles, paradise for the Jews, purgatory for the townspeople, hell for the peasants.” The situation of the peasants was particularly deplorable. The emergence of the Commonwealth coincided with the institution of mass serfdom, as the local gentry aimed to maximize profits from the production of Ukrainian bread for European markets. Guillaume Levasseur de Beauplan, the author of A Description of Ukraine, from 1651, claimed that the situation of the local peasants was worse than that of Turkish galley slaves.

     

    Alongside the rise of serfdom, religious tolerance began to wane. The local Orthodox church was under pressure to convert to Catholicism. To protect themselves, the hierarchy agreed to a union with Rome. For most of the Orthodox flock this was an act of treason, so they turned to the Cossacks. And as the Cossacks offered support and protection to the Orthodox Church, the Church offered the Cossacks a sense of a national mission. The result was the emergence of a new national identity — Ukraine, with “no Poles, no Jews, no Uniates.” This formula was implemented in the early modern Ukrainian Cossack state, which came into being as a result of the victorious Cossack revolution under Bohdan Khmelnytsky. The rebellion was spectacularly violent. As a Cossack chronicler wrote, blood “flowed like a river and rare was the person who had not dipped their hands in that blood,” and Jews and Poles were the main victims of the Cossack massacres. The Hebrew chronicles of 1648 concur with the Cossack ones about the magnitude of the savagery.

     

    Even though the Cossacks rebelled against the Polish-Lithuanian Commonwealth, they also emulated its practices: like the Polish kings, the leaders of the Cossack state — they were known as hetmans — were elected by Cossacks, and Cossack officers saw themselves as equivalent to the Polish nobility. In a sense, the Cossack state was a mixture of civic and ethnic elements. It was civic insofar as the Cossacks saw themselves as citizens, not subjects; the Cossack ruler was elected and his power was limited. It was ethnic insofar as its core was made of Orthodox Ukrainians. It reflected the common European principle of cuius regio, eius religio: he who governs the territory decides its religion. This principle emerged from the ferocious and protracted religious wars between Catholics and Protestants in Europe in the sixteenth and seventeenth centuries. Tellingly, the Cossack revolution started the same year that the Thirty Years’ War — one of the bloodiest wars in European history — ended.

     

    In the long run, this religious dimension played a cruel trick on the new Ukrainian identity. As a petty noble with no royal blood, Khmelnytsky had no legitimate claim to become an independent ruler. He thus sought a monarch who would allow him to preserve his autonomy. Finally he chose the Tsar in distant Moscow, who shared the Cossacks’ Orthodox faith. This choice was ruinous for the Cossack state. Under Russian imperial rule the Cossack autonomy was gradually abolished, and the Cossack state was finally dissolved in 1764.

     

    Around the same time, the Russian Empire, together with the Austrian and the Prussian empires, annexed the lands of the Polish-Lithuanian Commonwealth. The Russian Empire thus gained control over most of Ukrainian ethnic territory, and only a small western part went to the Austrian Empire. In this new setting it seemed like the fate of early modern Ukrainian identity was sealed. The offspring of the Cossack officers made their way into the Russian imperial elites. Russia was a large but backward empire. It desperately needed an educated elite to govern its vast expanses. That elite was most abundant on its western margins. The Ukrainian Cossack nobility, although not as educated as the Baltic German barons and not as numerous as the Polish gentry, had one advantage: they were of the same faith as the Russians. In the eighteenth century, Ukrainians made up almost half of the Russian imperial intelligentsia. In the nineteenth century the Ukrainian influence became hard to trace, because most of them had already been assimilated into Russian culture.

     

    Like the Scots in the British Empire, Ukrainians paved the way for the Russian Empire to become a global power because many of them thought of it as their empire. Ironically, they started out like the Scots but finished like the Irish. Those Ukrainians who moved out of Ukraine to make their careers in the imperial metropoles of St. Petersburg and Moscow became a success story. The ones left behind were less fortunate. Under Russian imperial rule, they were increasingly impoverished, progressively ousted from the administration, and steadily deprived of their liberties. The Ukrainian language was twice banned. The Ukrainians mourned their glorious Cossack past and resented the new order. They were certain that their nation would go to the grave with them.

     

    They were wrong: the revival of Ukrainian identity came from new elites of humbler origins. The most influential figure in this revival was Taras Shevchenko (1814-1861). Born a serf, he rose to prominence as a national poet. In his poetry, Shevchenko glorified Ukraine’s Cossack past but disdained the assimilated Cossack elites: they were “Moscow dirt, Warsaw scum.” His heroes were the Ukrainian common people: “I will raise up/Those silent downtrodden slaves/I will set my word/To protect them.” His model of the new Ukrainian nation was close to that of the French Revolution. Indeed, to the monarchs, Shevchenko sounded just like a Jacobin: “Ready the guillotines/For those tsars, the executioners of men.” He was arrested for his poetry and sentenced to exile as a private in the army without “the right to write.” His personal martyrdom enhanced his image as a national prophet. His poetry came to be read with an almost religious fervor. As one of his followers wrote, “Shevchenko’s muse ripped the veil from our national life. It was horrifying, fascinating, painful, and tempting to look.”

     

    Shevchenko’s formula of Ukrainian identity became paradigmatic. Its strength lay in its double message of social and national liberation. Later generations of Ukrainian leaders were said to carry Shevchenko’s poetry in one pocket and Das Kapital in the other. In the words of Mykhailo Drahomanov, a leading nineteenth-century Ukrainian thinker, in conditions in which most Ukrainians were impoverished peasants, every Ukrainian should be a socialist and every socialist should be a Ukrainian. As for Jews and Poles, Drahomanov envisaged broad national autonomy for them in exchange for their support of the Ukrainian cause.

     

    This formula proved successful once the Russian empire collapsed during the Russian Revolution in 1917. The Ukrainian socialists managed to create the Ukrainian People’s Republic with massive support from the Ukrainian peasantry. But the peasants subsequently turned their backs on this state once it was attacked by the Russian Bolshevik army. Later the peasants rebelled against the Bolsheviks as well. In the end, the moment for Ukrainian independence was lost, and Ukraine was integrated into the Soviet Union. Ukrainians paid dearly for this loss: in the 1930s most of their elites were repressed, while peasants became the victims of Stalin’s collectivization and famine, the infamous Holodomor.

     

    The failure of the Ukrainian People’s Republic led to a reconsideration of Ukrainian identity. The key figure in this respect was Viacheslav Lypynsky (1882-1931). He was born to wealthy Polish landowners in Ukraine. Driven by a feeling of noblesse oblige, he decided to shift from a Polish identity to a Ukrainian one. Lypynsky blamed Ukrainian leaders for their narrow concept of Ukrainian identity. In his opinion, one could not build the Ukrainian state while relying exclusively on peasants. One had to attract professional elites, which in most cases were non-Ukrainians. Lypynsky propagated a civic model of the Ukrainian nation informed by the American example, “through the process of the living together of different nations and different classes on the territory of the United States.”

     

    His ideas made little headway among Ukrainians. In Soviet Ukraine his works were forbidden, like those of many other Ukrainian authors. Beyond Soviet rule, in interwar Poland and in the post-war Ukrainian diaspora in the West, the minds of Ukrainians were intoxicated instead with “integral” nationalism — a militant nationalism that required exclusive and even fanatical devotion to one’s own nation. Its ideology took shape in the shadow of the defeat of the Ukrainian state in 1917-1920. The key ideologue was Dmytro Dontsov (1883-1973), a prolific Ukrainian literary critic. For him, the main problem with Ukrainian nationalism was that it displayed too little ethnic hatred. Dontsov admired fascism and saw it as the future of Europe. His views became very popular among members of the Organization of Ukrainian Nationalists (OUN) and the Ukrainian Insurgent Army (UPA), founded, respectively, in 1929 and 1943. True to these ideas, the UPA was responsible for the ethnic cleansing of Poles in Western Ukraine and, in part, for the Holocaust. Among Ukrainian nationalists, the most emblematic figure, the hero, was Stepan Bandera (1909-1959). He was a symbol of struggle against all national foes. Bandera was imprisoned by the Poles in 1936-1939 and by the Nazis in 1941-1944, and was assassinated by a Soviet agent in 1959. Even though he was not directly involved in the wartime anti-Polish and anti-Jewish violence — at the time he was a prisoner in the Sachsenhausen concentration camp — Poles and Jews hold him accountable for the crimes of Ukrainian nationalists.

     

    The xenophobic ideology of integral nationalism was not an isolated Ukrainian phenomenon. Commenting on Ukrainian nationalists’ rallying cries — “Long live a greater independent Ukraine without Jews, Poles and Germans!” “Poles behind the river San, Germans to Berlin, and Jews to the gallows!” — the Hungarian-American historian István Deák wrote: “I don’t know how many Ukrainians subscribed to this slogan. I have no doubt, however, that its underlying philosophy was the philosophy of millions of Europeans.” His remark reflects one of the main features of Ukrainian identity: it changed along with fluctuations in the European Zeitgeist. Its earliest articulation resonated with the formula that arose in the European religious wars of the sixteenth and seventeenth centuries; it was reinvented in the nineteenth century within ideological trends initiated by the French Revolution; and its evolution during the first half of the twentieth century kept pace with the growth of totalitarianism in most of the European continent.

     

    The latest round of rethinking Ukrainian identity was similarly shaped by European developments — in this case, the establishment of liberal democracy in post-war Europe. Since at that time Ukraine had no access to the rest of Europe, this sympathetic vibration was rather unexpected. All Ukrainian ethnic territories, including Western Ukraine, were united under Soviet rule, and tightly isolated from the outside world. A small tear in the Iron Curtain was made in 1975 by the Helsinki Accords; in its search for a modus vivendi with the capitalist West, the Kremlin committed itself to respecting human rights. The anti-Soviet opposition in Ukraine immediately saw this as an opportunity. They linked human rights with national rights. Ukrainian dissidents declared that in a free Ukraine, not only would the rights of Ukrainians be respected, but also the rights of Russians, Jews, Tatars, and the other nationalities that were represented in the country.

     

    Instinctively, Ukrainian dissidents reconnected with the ideas of Drahomanov and Lypynsky. And the revival of the civic concept coincided with the failure of the xenophobic Dontsov doctrine. Its decline began as early as the years of the Second World War, when, under the Nazi occupation, the nationalists in Western Ukraine tried to establish contacts with their compatriots in the Ukrainian East. But local Ukrainians turned a deaf ear to the slogan “Ukraine for Ukrainians!” They were more interested in the dissolution of the Soviet collective farms and the introduction of the eight-hour workday and other social reforms. By the end of the war, the Organization of Ukrainian Nationalists had revised its ideological tenets and moved to a more inclusive slogan: “Freedom to Ukraine, freedom to all enslaved nations.”

     

    Throughout its history, Ukrainian identity kept evolving and changing. There has been no single canonical formula for how to be Ukrainian. Even within Ukrainian integral nationalism there were dissident groups that opposed anti-Semitism and stood for a civic concept of the Ukrainian nation. Still, for a variety of reasons, since the end of the nineteenth century, there was a growing tendency, among both Ukrainian nationalists and their opponents, to conceive of Ukrainian identity in ethnic terms. In this conception, it was the Ukrainian language that became the main criterion of Ukrainian identity. Since the number of Ukrainian speakers was dramatically decreasing as Ukrainians assimilated into Russian culture under the Russian Empire and the Soviet Union, the resulting impression was that the Ukrainian nation was doomed. Thus, in 1984, Milan Kundera, in his famous essay “The Tragedy of Central Europe,” declared that the Ukrainian nation was disappearing before our eyes, and that this attenuation might be indicative of the future of Poles, Hungarians, and other nations under Communism.

     

    This perspective was opposed by some Ukrainian historians who took the longer view. They claimed that even if Ukrainians, like the Irish, stopped speaking their native language, it would not necessarily make them less Ukrainian. In their view, the fundamental difference between Russians and Ukrainians was not in language but in age-old political traditions, in a different relationship between the upper and the lower social classes, between state and society. This, they argued, was owed to the fact that, despite various handicaps, Ukraine was genuinely linked with the Western European tradition and partook in European social and cultural progress.

     

    2

    History does indeed hold the key to Ukrainian identity. The current Russian-Ukrainian war can be largely regarded as a war over history. Most of Putin’s arguments for his aggression are of a historical nature. He claims that Russia originated from the early empire of Rus in the medieval centuries. The core of this empire lay in present-day Ukraine, with its capital in Kyiv. In Putin’s opinion, since he equates Rus with Russia, and since many contemporary Ukrainians speak Russian, Ukraine is destined to be Russian in the future.

     

    Nothing could be further from the truth. Kyivan Rus was not Russia. It was, rather, similar to Charlemagne’s empire in the West. That formation covered the territories of present-day France, Germany, and Italy. None of these nations can claim exclusive rights to its history. Yet there was also a significant difference between Charlemagne’s empire and Kyivan Rus, which created long-term “national” effects. Western countries took Christianity from Rome along with its language, which was Latin. Rus adopted Christianity from Byzantium, but without its language: all the necessary Christian texts were translated from Greek into Church Slavonic. This severed the intellectual life of Kyivan Rus from the legacy of the ancient world and made it (in the words of George Fedotov, the Russian émigré religious thinker) “slavishly dependent” on Byzantium. If we were to collect all the literary works in circulation in the Rus lands up to the sixteenth century, the total list of titles would be equal to the library of an average Byzantine monastery. The differences between Western Christianity and Eastern Christianity became even more pronounced following the advent of the printing press. In the wake of its invention and until the end of the sixteenth century, two hundred million books were printed in Western Christendom; in the Eastern Christian world, this figure was no more than forty to sixty thousand.

     

    Literature is one of the key prerequisites for the formation of nations. As the Russian-born American historian Yuri Slezkine has put it, nations are “book-reading tribes.” From this perspective, the world of Rus was like the proverbial “white elephant” or the “suitcase without a handle”: a hassle to carry around but too valuable to abandon. Rus ceased to exist as a result of the Mongol invasion in the mid-thirteenth century, and its territories were divided between the Grand Duchy of Lithuania and later the Polish-Lithuanian Commonwealth, on the one hand, and the Moscow Tsardom, on the other. Inhabitants of the former Rus expanses knew that their Christian civilization had its origins in Kyiv, spoke mutually intelligible Slavic dialects, and prayed to God in the same Church Slavonic language. But these commonalities made them neither one great nation nor multiple smaller nations. Their world was largely a nationless world: they lacked the mental tools to transform their sacred communities into national societies.

     

    By this token, the making of the Russian and Ukrainian nations inevitably marked the destruction of the conservative cultural legacy of Rus. Recent comparative studies suggest that both nations emerged more or less concurrently with the Polish and other European nations, that is, in the sixteenth, seventeenth, and eighteenth centuries. But then the Russian nation was “devoured” by the Russian empire. Like most modern empires, the Russian empire did not tolerate any nationalism, including Russian nationalism: a national self-identification of the Russians might lead to imperial collapse. Accordingly, the antagonism between the Russian Empire, on the one hand, and the Polish, Ukrainian, and other nationalisms, on the other, should properly be regarded as a conflict between a state without a nation and nations without states.

     

    And the same was true, mutatis mutandis, of the Soviet Union. At its inception in the 1920s, Soviet rule promoted nation-building in the non-Russian Soviet republics. Among other considerations, this was meant to stem the growth of local nationalism — and the strong nationalist movement in Ukraine in the wake of the revolution in 1917 was particularly unnerving. Later, when Stalin came to power, these attempts were abandoned. The Soviet Union returned to old imperial ways, and Ukrainians were particularly targeted by the Stalinist repressions of the 1930s.

     

    Formally, all of the Soviet republics were national republics. In fact, they were republics without nations. Their future could best be illustrated by a party official’s answer to the question of what would happen to Lithuania after its Soviet annexation: “There will be a Lithuania, but no Lithuanians.” This did not mean the physical destruction of all Lithuanians or other nations. The objective of Soviet nationality policy was to relegate all these nations to the status of ethnic groups with no particular political rights.

     

    After the death of Stalin in 1953, and for the first time since the beginning of the Soviet Union, Ukrainians were promoted to high positions both in Moscow and Kyiv. The situation was similar to that of the eighteenth century, when they were invited to play a prominent role in running the empire. Still, to make a career, they had to reject their own national ambitions. All attempts to extend the autonomy of Soviet Ukraine were vociferously quashed. In relation to Ukraine, Leonid Brezhnev, who was the General Secretary of the Communist Party of the Soviet Union from 1964 to 1982, set two goals: to strengthen the fight against Ukrainian nationalism and to accelerate the assimilation of Ukrainians. This had a paradoxical effect: during the last decades of the Soviet Union, Ukrainians were overrepresented both in Soviet power and in the anti-Soviet opposition.

     

    Ukrainians, like Lithuanians, Georgians, and others, were to be dissolved into a new historical community — the Soviet people. Being a Soviet meant being a Russian speaker. The prevailing belief was that Russian would become the language of communism very much like French had been the language of feudalism and English the language of capitalism. Still, if being Soviet meant being a Russian speaker, the reverse did not work: Russian speakers were not necessarily Russians. Rather, to paraphrase the title of Robert Musil’s famous novel, they were to be men without national qualities. This was in accordance with the Marxist principle that nations were relics of capitalism and bound to disappear under communism. The ambitious Soviet project aimed to create a homogeneous society without a national identity. A case in point was Donbass, the large industrial region in eastern Ukraine. Even though its population was predominantly Russian speaking, Russian identity was weakly represented there. Inhabitants considered themselves “Soviets” and “workers.”

     

    It is worth mentioning again and again that nations are political entities. They are not exclusively, or even primarily, about language, religion, and other cultural criteria — they are about political rights, and who can exercise those rights. Nations presume the existence of citizens, not subjects. This principle was amplified and strengthened by the French Revolution, with its slogan “liberty, equality, fraternity.” In the Russian empire, this revolutionary slogan was counterposed by the ideological doctrine of “Orthodoxy, Autocracy, Nationality.” And the “nationality” (narodnost in the original) in this formula was not related to a nation. It meant rather a binding union between the Russian emperor and his subjects. The slogan reflected a recurrent feature of Russian political culture: the idea of the unlimited power of a ruler. In this sense, there is no substantial difference between a Moscow Tsar, a Russian emperor, a leader of the Communist party, or, today, a Russian President.

     

    This is not to say that there were no attempts to democratize Russia. The past two centuries have seen several such attempts. Of these the two most significant were the reforms of Alexander II in the 1860s-1880s and then those of the Russian president Boris Yeltsin in the early 1990s. These attempts were rather short-lived and were followed by longer periods of authoritarianism or totalitarianism. Ultimately, Soviet Russia failed to become a nation. Ukrainians failed to become a full-fledged nation, too. But some of them — Ukrainian-speaking cultural elites, local communist leaders, and the population of western Ukraine, where the effects of Sovietization were least felt — had preserved national ambitions. Very much like their compatriots in the nineteenth century, they hoped that once the colossal empire fell to pieces, Ukraine would form a breakaway state.

     

    When Gorbachev came to power in 1985, he believed that, in contrast to the Baltic peoples, Ukrainians were true Soviet patriots. In his opinion, Russians and Ukrainians were so close that sometimes it was difficult to tell them apart. Even in western Ukraine, he claimed, people did not have “any problems” with Bandera. There were experts in Gorbachev’s milieu who kept warning him about the dangers of Ukrainian separatism — but he preferred not to heed their warnings. 

     

    The moment of truth came with the Ukrainian referendum in December 1991, when ninety percent voted for the secession of Ukraine from the Soviet Union. This number exceeded both the share of ethnic Ukrainians (seventy-three percent) and the share of Ukrainian speakers (forty-three percent). This overwhelming support for Ukrainian independence was the result of an alliance between three very unlikely allies: the Ukrainian-speaking western part of Ukraine, the national communists in Kyiv, and the miners of the Donbass, the last of whom hoped that their social expectations would be better met in an independent Ukraine than in the Soviet Union. This alliance soon fell apart as independent Ukraine plummeted into deep economic and political crises. In late 1993, the CIA predicted that Ukraine was headed for a civil war between the Ukrainian-speaking West and the Russian-speaking East that would make the Balkan wars of the time look like a harmless picnic.

     

    The Ukrainian presidential elections of 1994 reinforced these fears. They revealed deep political cleavages consistent with the linguistic divisions. The main rivals were the incumbent president Leonid Kravchuk and his former prime minister Leonid Kuchma. Under the Soviets, Kuchma had been the director of a large factory in Eastern Ukraine. Kravchuk was supported by the western part of the country, and Kuchma by the East.

     

    Russia was likewise undergoing a deep crisis at the time, but of a different nature. In December 1992, the Russian parliament rejected the appointment of Yegor Gaidar, the father of the Russian economic reforms, as acting prime minister. After several months of acrimonious confrontation, President Yeltsin dissolved the parliament and the parliament in turn impeached him. In response, Yeltsin sent in troops, and tanks fired at the parliament building. In early October 1993, several hundred people were killed or wounded in clashes on the streets and squares of Moscow.

     

    Unlike in Russia, the Ukrainian political crisis was resolved without bloodshed. Kravchuk lost the election and peacefully transferred power to Kuchma. This was a key moment in the divergence of political paths between Ukraine and Russia. In contrast with Russia, Ukraine launched a mechanism for the alternation of ruling elites through elections. As the Russian historian Dmitry Furman observed, Ukrainians had successfully passed the democracy test that the Russians failed. It is worth noting that Ukrainians passed that test on an empty stomach, because the economic situation in Ukraine at the time was much worse than in Russia.

     

    The Kuchma years — the decade between 1994 and 2004 — were a period of relative stability, but at a high cost: corruption skyrocketed, political opposition was suppressed, and authoritarianism was on the rise. Very much like Yeltsin, who “appointed” Putin as Russia’s next president, Kuchma approved Viktor Yanukovych, the governor of the Donetsk region, as his successor. A worse choice would be difficult to imagine: it was akin to nominating Al Capone to run for the American presidency. By that time the worker movement in the Donbass had diminished, and the region was run by a local mafia-like clan, of which Yanukovych was a key figure. His attempts to come to power and to stay in power sparked two separate waves of mass protests in Kyiv, known as the Orange revolution of 2004 and the Euromaidan of 2013-2014. They managed to win, despite harsh weather conditions — both revolutions took place in winter — and despite the mass shooting of protesters in the final days of the Euromaidan.

     

    Russia had experienced similar mass protests in the winter of 2011-2012. By that time, the ratings of Putin and his party had plummeted to record lows, and the discontent of Russians grew. The rigged elections that returned Putin to power catalyzed the mass protests on Bolotnaya Square in Moscow. Yet several factors contributed to the protests’ failure. One was that mass passive discontent did not transform into mass active participation. At the very height of the protests in Moscow, their leaders managed to attract only one hundred and twenty thousand people. This was the largest mass political action in the post-Soviet history of Russia. At the Euromaidan, by contrast, the largest meeting numbered, according to various estimates, from seven hundred thousand to a million people. And consider that the populations of Kyiv and Ukraine at the time were three to four times smaller than those of Moscow and of Russia. But the difference was not merely quantitative.

     

    This brings us back to the definition of Ukrainian identity considered above: a basic difference between Ukraine and Russia lies in the capacity for self-organization. Ukrainians at the time protested against everything that Yanukovych stood for: corruption, fraud, crime. But they were perfectly aware that behind Yanukovych stood Putin and his regime. Therefore, their protests also had a national dimension; they were fighting against Russia and its regime as well. It is safe to presume that this fact had a strong mobilizing effect. The Kremlin tried to paint the Euromaidan revolution as an outburst of ethnic nationalism, led by Ukrainian nationalists, or even by Nazis. In reality the protesters were bilingual and included broad swathes of Ukrainian society. The Ukrainian journalist Mustafa Nayem, an ethnic Afghan, was one of the leaders of the protest movement. The first victim to be shot dead at the Euromaidan was Serhiy Nigoyan, who came from an Armenian family. The second was Mikhail Zhiznevsky, a Belarusian. The Euromaidan also included a large group of Ukrainian Jews. They were slurred by Kremlin propaganda as “Jewish Banderites” (Zhydobanderivtsi) — and they adopted this absurdly oxymoronic moniker as a badge of honor.

     

    True, Ukrainian nationalists were present on the Euromaidan. But the focus on Ukrainian nationalists ignores the crucial point: the Euromaidan was centered around values, not identities. It is not for nothing that it was called the Revolution of Values. They are the values of free self-expression, and they are likely to inspire elite-challenging mass action. They are also characteristic of post-industrial societies, and in the mid-2000s Ukraine underwent a shift from an industrial to a post-industrial economy. A major part of the country’s GDP was produced in the service sector — the tech sphere, the media, education, the restaurant business, tourism, and so on. Historically, the industrial economy in Eastern Europe had been closely connected to the state. This relationship reached its peak in the Soviet industrialization. Since large industrial enterprises require centralization and discipline, the ethos of an industrial society is naturally hierarchical. In contrast, a post-industrial economy grows from private initiative. As a result, in Ukraine there emerged a large sector that was less dependent on the state and less affected by corruption. To survive and to compete successfully, those who work in the service economy need fair rules of the game. Thus, they strive for change.

     

    The social profile of Volodymyr Zelensky and his team may serve as a good illustration. With the exception of Dmytro Shmyhal, none of these people had previous experience in state administration. Volodymyr Zelensky and Andriy Yermak came from the entertainment industry, Mykhailo Podolyak started his career as a journalist, David Arakhamia worked in the IT sector, Oleksii Reznikov was a private lawyer, and Rustem Umerov was a businessman. Another important characteristic is the generational dimension: their average age is forty-five — which also happens to be the average age of the soldiers in the Ukrainian army. Taken together, these three attributes of “Zelensky’s men” — their multiethnic character, their social profile, and their age — attest to the same phenomenon: the emergence and coming to power of a new urban middle class in Ukraine. It must be clearly emphasized that this class does not constitute the majority of the Ukrainian population. It is a minority, but it is a decisive minority that pushes Ukraine on the path of liberal order.

     

    The history of Ukrainian nationalism follows the “now you see it, now you don’t” formula. In peaceful times, it is hard to detect and seems almost non-existent. Ukrainian national feeling emerges, however, during large geopolitical upheavals — as was the case during the crisis of the seventeenth century, the First and Second World Wars, the collapse of communism, and, most recently, the Russian-Ukrainian war. 

     

    The fact that the Ukrainian nation springs collectively to life during crises is partly responsible for the bloodthirsty image of Ukrainian nationalism. Since Ukraine has been a very ethnically diverse and geopolitically highly contested borderland, these crises evolved in Ukraine into a Hobbesian “war of all against all.” This led, in the past, to outbursts of shocking violence. Like so many other nationalist movements in the region, Ukrainian nationalism committed acts of great violence. There is no doubt that the crimes of some Ukrainian nationalist groups were outrageous, and an independent Ukraine must come to terms with its sins. Still, those who point the finger at Ukraine would do well to remember that this bloodlust was not unique, or even the most voracious, in the region. There was plenty of brutality to go around.

     

    Ukrainian nationalism has had a very protean nature, and evolved with social changes. Among other things, it rarely articulated Ukrainian identity in strictly ethnic or strictly civic terms, but mostly as a combination of both. The general balance was defined by a group that made up the core of national Ukrainian elites: Ukrainian Cossacks in the seventeenth and eighteenth centuries, the Ukrainian intelligentsia of the nineteenth century, the integral nationalists in the interwar period and during World War II, the post-war anti-Soviet Ukrainian dissidents, the Ukrainian national communists and national democrats in independent Ukraine, and most recently, the new urban middle class. In recent years, Ukrainian nationalism has been largely a liberal nationalism. In present-day Ukraine, the overwhelming majority (ninety percent in 2015-2017) believes that respect for the country’s laws and institutions is a more important element of national identity than language (sixty-two percent) or religion (fifty-one percent). Civic identity peaked during the two Ukrainian revolutions of 2004 and 2013-2014, and once the Russian-Ukrainian war began it became dominant across all socio-demographic, political, and religious groups in the country.

     

    In present-day Ukraine, nationalism serves as a vehicle for democracy. This remarkable fact has been emphasized by Anne Applebaum. During her first visits to Ukraine, she seemed to share prejudices similar to those expressed above by Michael Ignatieff, but the Euromaidan caused her to change her mind. In her opinion, nationalism is exactly what Ukraine needs, and the very opposite of the poison that “cosmopolitans” denounce in it. One need only look at Russian-occupied Donbass, she has written, to see

     

    what a land without nationalism actually looks like: corrupt, anarchic, full of rent-a-mobs and mercenaries… With no widespread sense of national allegiance and no public spirit, it [is] difficult to make democracy work… Only people who feel some kind of allegiance to their society—people who celebrate their national language, literature, and history, people who sing national songs and repeat national legends—are going to work on that society’s behalf. 

     

    Universal values have found a home in contemporary Ukrainian nationalism to an exhilarating degree. In the wake of the Russian-Ukrainian war, the prominent Italian historian Vittorio Emanuele Parsi similarly observed that Ukraine’s courageous resistance confirms that the idea of the nation 

    is very much alive in the world debate and represents a formidable multiplier of energy, self-denial and spirit of sacrifice: it is able to create a civic sense that, in its absence, does not succeed in making that leap forward, the only one capable of welding the experience of the communities in which each of us is immersed with the institutions that create and guarantee the rules of our associated life. 

    Parsi uses the words “nation” and “motherland” interchangeably, and he is unabashed about attributing a positive connotation to both.

    Since the Second World War, both terms have been compromised in Italy and other West European countries by their associations with fascist Italy, the Third Reich, and Vichy France. Now they are largely monopolized by the conservative right in the West and by Putin in Russia. To strengthen liberal democracy, however, liberals have to reclaim the original meaning that these words had acquired in the wake of the French Revolution: as ultimate values that are worthy of personal sacrifice in order to protect liberty and a decent civic spirit. As many Western intellectuals remarked during the Euromaidan in expressing their support for the democratic rebellion, Ukraine is now a beacon of Western liberal values. 

     

    War is always hell, it is always catastrophic, but it also creates opportunities. It accelerates certain processes and mandates a shift of paradigms. Suddenly everything is in flux, including pernicious status quos that seemed intractable. Every large war brings radical change. The moral character of the coming changes depends largely on how and when this war ends. It is strongly in the interests of the West that Ukraine win and that Putin lose. For this reason, the West is properly obliged to help Ukraine with weapons and resources. We are in this together. Still, material assistance is not the only vital variety. The West must support Ukraine also philosophically, which is of course a way of supporting the West’s own ideals of freedom and tolerance and equality. As Ukraine fights for its liberty, it is time to think again about the benefits of nationalism, and to celebrate its compatibility with civic diversity and democratic openness.

     

    Liberland: Populism, Peronism, and Madness in Argentina

    For Carlos Pagni 

    1

    Too many electoral results are described as earthquakes when in reality they are little more than mild tremors. But the self-described anarcho-capitalist Javier Milei’s victory in the second and deciding round of Argentina’s presidential election truly does represent a seismic shift in Argentine politics, the radical untuning of its political sky. Milei defeated Sergio Massa, the sitting minister of the economy in the outgoing Peronist government, who in the eyes of many Argentines across the political spectrum wielded far more power than the country’s president, Alberto Fernández. On this much, ardent pro-Peronists such as Horacio Verbitsky, editor of the left online magazine El Cohete a la Luna, and some of Peronism’s most perceptive and incisive critics, notably the historian Carlos Pagni – people who agree on virtually nothing else – find themselves in complete accord. “Demographically and generationally,” Verbitsky wrote, “a new political period is beginning in [Argentina].” For his part, Pagni compared the situation in which Argentina now finds itself to “the proverbial terra incognita beloved of medieval cartographers,” a country “heading down a path it had never before explored” — a new era in Argentine political history.

     

    The country’s disastrous economic and social situation was the work of successive governments, but above all its last two – the center-right administration of Mauricio Macri between 2015 and 2019, and the Peronist restoration in the form of Alberto Fernández’s government between 2019 and 2023, in which Fernández was forced for all intents and purposes to share power with his vice-president, Cristina Fernández de Kirchner, who had been Macri’s predecessor as president for two successive terms, from 2007 to 2015, having succeeded her husband Néstor, who was president between 2003 and 2007. Cristina (virtually every Argentine refers to her by her first name) remains — for the moment, at least — Peronism’s dominant figure. Despite some success during the first two years of his administration, Macri proved incapable either of sustainably mastering inflation or of stimulating high enough levels of foreign direct investment in Argentina. Cristina had left office with inflation running at twenty-five percent annually. Under Macri’s administration, that figure doubled to fifty percent, a level not seen in the country for the previous twenty years, and the key reason why Macri failed to win reelection in 2019. But during his four years in office, Alberto Fernández accomplished the seemingly impossible: he made his predecessor’s failure seem almost benign. The legacy that he has left to Milei — unlike Macri, he knew better than to seek reelection — is an inflation rate of one hundred and forty-two percent, nearly three times higher than under Macri.

     

    It is not that Argentina had not suffered through terrible economic crises before. Three of them were even more severe than the present one. The first of these was the so-called Rodrigazo of 1975 (the name derives from then President Isabel Perón’s minister of the economy, Celestino Rodrigo), when inflation jumped from twenty-four percent to one hundred and eighty-two percent in a year. The Rodrigazo was not the main cause of the coup the following year that overthrew Isabel Perón and ushered in eight years of bestial military dictatorship, but the panic and disorientation that it created in Argentine society certainly played a role. The second was the hyperinflation of 1989, during the final year in office of the Radical Party’s leader Raúl Alfonsín. Alfonsín, who was the first democratically elected president after the end of military rule in 1983, is generally regarded in Argentina, even by Peronists, as having impeccable democratic credentials, although Milei has rejected this portrayal, instead calling him an “authoritarian” and a “swindler” whose hyperinflation amounted to robbery of the Argentine people. The last and by far the worst was the economic and financial crisis of 2001-2002, which saw Argentina default on virtually all its foreign debt and brought it to the brink of social collapse. There was widespread popular repudiation of the entire political establishment, exemplified by the slogan “Que se vayan todos” — “they must all go.” Milei’s own promise in the 2023 campaign to get rid of what he calls La Casta, by which he means the entire political class, resurrects that anti-elitist revulsion in the service of the populist right rather than the populist left that took to the streets in 2001. 

     

    But in 2001, there was finally no social collapse (even though Argentina had five presidents in a period of two weeks). That the country would weather the storm was anything but clear at the time. That it did so at all, as Pablo Gerchunoff, one of Argentina’s most distinguished economic historians and himself no Peronist, has argued, was Néstor and Cristina Kirchner’s great accomplishment. (They were always a team politically, rather like Bill and Hillary Clinton.) The Kirchners, Gerchunoff has written, were not only able to “contain the social and political bloodbath [that had occurred] in 2001,” but also managed to “reconstitute both presidential authority and a [functioning] political system.” On the economic front, even most of the Kirchners’ anti-Peronist critics — except the contrarian Milei, of course — find it hard to deny that during Néstor’s presidency and Cristina’s first term in office the Argentine economy made a powerful recovery. To be sure, these critics are also quick to point out that this recovery was fueled in large measure not only by the huge spike in world commodity prices — “a gift from heaven” is the way Gerchunoff has described it — but also by the fact that Néstor’s predecessor as president, Eduardo Duhalde, had instituted a series of harsh economic measures, including a brutal devaluation of the currency, so that Néstor had a freedom of maneuver enjoyed by few Argentine presidents before or since to refloat the Argentine economy and vastly increase welfare payments and other forms of social assistance for the poorest Argentines. 

     

    It is this seemingly cyclical character of Argentina’s economic crises — “Boom and recession. Stop and go. Go and crash. Hope and disappointment,” as Gerchunoff summarizes it — and at the same time the country’s repeated capacity to recover and once more become prosperous that still leads many Argentines to take something of a blasé approach every time the country gets into economic difficulty. But while it is true that, so far at least, Argentina has indeed emerged from even its worst economic crises, it is also important to note that each time it was left with fewer middle-class people and more poor people. The crisis of 2001 was the tipping point. Before that, even after the Rodrigazo and Alfonsín’s hyperinflation, Argentina continued not only to be one of Latin America’s richest countries and to sustain a middle class proportionally much larger than those of other countries on the continent, but also, most importantly, to be a society in which, for most of the years between 1870 and the crisis of 2001, social mobility was a reality for the broad mass of the population. After 2001, however, it was no longer possible to deny the melancholy fact that Argentina was quickly becoming — and today has become — very much like the rest of Latin America. As the sociologist Juan Carlos Torre has put it, in previous periods of its history “Argentina had poor people but it did not have poverty [in the sense that] the condition of being poor in a country with social mobility was contingent.” But in the Argentina of today, social mobility scarcely exists. If you are born poor, you stay poor for your entire life, as do your children, and, if things don’t change radically, your children’s children. As a result, poverty, and all the terrible moral, social, and economic distortions that flow from it (including narcotrafficking on a massive scale), has become the country’s central problem. 

     

    It is in this context that Milei’s rise and unprecedented victory need to be set. According to the Observatorio de la Deuda Social Argentina (ODSA) of the Universidad Católica Argentina, a church-run think tank whose intellectual probity and methodological sophistication are acknowledged by Argentines across the political spectrum, by the time the presidential primaries took place on August 13, 2023, the national poverty rate had reached 44.7%, while the rate of total immiseration had climbed to 9.6%. For children and adolescents, the figures were still more horrific: six out of ten young Argentines in these two age cohorts live below the poverty line. In aggregate, 18.7 million Argentines out of a total national population of forty-six million are unable to afford the foodstuffs, goods, and services that make up the so-called Canasta Básica Total, of whom four million are not able to meet their basic nutritional needs. 

     

    Again, the 2001 statistics had been just as bad in a number of these categories — but this time, going into the 2023 election, there was a widespread feeling that there was no way out. Neoliberalism Macri-style had been a disaster, but so had Peronism Alberto Fernández-style (though hardline Peronists rationalized this to the point of denial by claiming that Alberto had betrayed the cause and if he had only carried out the policies Cristina had urged upon him, and on which he had campaigned, all would have been well). That was why Milei’s populist promise to do away with the entire political establishment resonated so strongly. Flush with revenues from agro-business, the Kirchners had managed to contain the crisis for a while by rapidly establishing and then expanding a wide gamut of welfare schemes — what are collectively known in Argentina as los planes sociales. As the political consultant Pablo Touzon has observed, in doing so the Kirchners succeeded in achieving what had been the priority of the entire political establishment, Peronist and non-Peronist, which was “to avoid another 2001 at all costs.”

     

    The problem is that commodity prices are cyclical and that the agricultural resource boom of the first decade of the century proved, like all such booms, to be unsustainable. And when price volatility replaced consistent price rises, for all the Kirchners’ talk about fostering economic growth through a planned economy and a “national” capitalism focused on the domestic market, there proved to be no non-commodity-based engine for sustained growth and thus no capacity to create jobs that would restore the promise of social mobility. (The many government jobs that were created could not offer this.) As a result, the mass welfare schemes that had been created rapidly became unaffordable. The commentator who likened Argentine society in 2023 to an intensive care patient who remains in agony despite being kept on an artificial respirator called the state was being hyperbolic, but that such an analogy could be made at all testifies to the despair that is now rampant in the country. And it is this despair that has made it possible for the bizarre Milei to be elected.

     

    Joan Didion’s famous observation that we tell ourselves stories in order to live has never seemed quite right to me, but there is no question that it is through stories that most people try to grasp where they stand in the world. What the Peronists do not seem to have been able to face, even during the four-year-long social and economic train wreck that was Alberto’s presidency, was that many of the voters whom they believed still bought their story had in fact stopped doing so. Some blamed Alberto, and Massa, when he ran, in effect asked the electorate for a do-over. Others simply found it impossible to believe that Milei could be elected. Peronism is both Manichaean and salvationist. It has never conceived of itself as one political party among other equally legitimate political parties. It regards itself as the sole morally legitimate representative of the Argentine people and of the national cause. When Perón joked that all Argentines were Peronists whether they knew it or not, the anti-pluralist subtext of his quip was that one could not be a true Argentine without being a Peronist. And that conviction remains alive and well in Kirchnerism. So to indulge in the very Argentine habit of psychological speculation — after all, Argentina is the country with one hundred and forty-five psychiatrists for every one hundred thousand inhabitants, the highest proportion in the world — it may be that the Peronists were so slow in recognizing Milei’s threat because, for the first time since Juan Perón came to power in 1946, they faced a candidate just as Manichaean and salvationist as they are. Peronism had always seen its mission as sweeping away the “anti-national” elites so that Argentina could flourish once more. Having become accustomed to seeing political adversaries not only as their enemies but as the enemies of the Argentine nation, the Peronists did not know what to do with someone who viewed them in exactly the same light. 
As a result, the Argentine election of 2023 was the confrontation between two forms of populism, which is to say, between two forms of anti-politics. 

     

    Going into the campaign, the problem for the Kirchnerists was that Alberto was in denial about the social crisis; a few days before he left office he even saw fit to challenge the accuracy of the poverty figures. Without offering any countervailing data, Fernández simply said that many people were exaggerating how poor they were. If the poverty rate really had reached 44.7%, Fernández insisted, “Argentina would have exploded.” To which Juan Grabois, a left populist leader and union organizer with close links to Pope Francis and whose base of support consists mostly of poor workers who make their living in what the International Monetary Fund calls the informal economy — work that is not only not unionized, but in which government labor regulations, from health and safety to workplace rights, go completely unrespected — retorted: “It has exploded, Alberto. It’s just that we got used to it; it didn’t explode, it imploded. That makes less noise, but the people bleed internally.” 

     

    For the Argentine middle class, the situation, though self-evidently not the unmitigated disaster that it is for the poor, is quite disastrous enough. An inflation rate of one hundred and forty-two percent — which even Milei has conceded will not end soon, a bleak prediction that his early days in office support — makes intelligent business decisions impossible, since making them involves trying to guess what the Argentine peso will be worth next month or even next week. In practice, the currency controls instituted by Fernández’s government damaged and in many cases ruined not only the retailer who sells imported merchandise, but also the pharmacist whose stock includes medicines with some imported ingredients, the machine tool company that, while it makes its products in Argentina, does so out of imported steel or copper, and the publisher unable to assume with any confidence that paper will be available, let alone guess at what price. Nor is the psychological dimension of the economic crisis to be underestimated. Confronted by rising prices, many middle-class people now buy inferior brands and versions of the items that they have been used to buying. In the context of a modern consumer culture such as Argentina’s, there is a widespread sense of having been declassed, of having been expelled from the middle-class membership that they had assumed to be theirs virtually as a birthright. This has produced a different kind of implosion, of being bled dry, than the one to which Grabois referred, but an implosion just the same.

     

    An implosion is a process in which objects are destroyed by collapsing into themselves or being squeezed in on themselves, and, by extension, a sudden failure or fall of an organization or a system. What Milei’s election as president has made clear is just how fragmented and incoherent and fragile the two forces that have dominated Argentine life since the return of democracy — the Peronists on one side, and the social democrats and neoliberals on the other — have now become. That this should be true of the center and center-right parties that had come together to form the Cambiemos coalition that Macri successfully led to power in 2015 is hardly surprising. For Cambiemos had united very disparate forces in Argentine politics: the neoliberals of Macri’s party, the PRO, and two more or less social democratic parties, the Unión Cívica Radical (UCR), the party of Raúl Alfonsín, and a smaller party led by the lawyer and activist Elisa Carrió that had broken off from the UCR in 2002 and since 2009 had been known as the Coalición Cívica. Somewhere in between were anti-Kirchnerist Peronists, one of whom, the national senator Miguel Pichetto, had been the vice-presidential candidate in Macri’s failed bid to win re-election in 2019. These various groupings within Juntos por el Cambio, as Cambiemos was renamed going into the 2019 campaign, were united largely by their anti-Peronism. This should not be surprising. Since Juan Perón was elected president in 1946, Peronism has been for all intents and purposes the default position of the Argentine state — except, obviously, during the periods of military rule, which had their own kind of Manichaean salvationism. A central question that Milei’s election poses is whether this seventy-eight-year-long era has finally come to an end. 
Is Argentina on the verge of a political path that, in Carlos Pagni’s words, it had “never before explored,” or will the days ahead be only a particularly florid instance of the exception that proves the rule?

     

    Apart from the fact that it is salvationist and Manichaean, and that it is a form of populism, usually though not always on the left, Peronism is notoriously difficult to define. It is both Protean and plastic in the sense that it contains within itself such a gamut of political views that a non-Argentine can be forgiven for wondering whether, apart from the morally monopolistic claims that it makes for itself, it is one political party among a number of others or instead all political parties rolled into one. Horacio Verbitsky tried to account for the fact that it had been at various times left and at other times right by saying that Peronism must be “a mythological animal because it has a head that is on the right while its body is on the left.” A celebrated remark of Borges sums up Peronism’s diabolical adaptability. “If I must choose between a Communist and a Peronist,” he quipped, “I prefer the Communist. Because the Communist is sincere in his Communism, whereas the Peronists pass themselves off in this way for their advantage.” And as Carlos Pagni has observed, “This empty identity gives them an invaluable advantage.”   

     

    Even assuming that Milei’s victory turns out to bring down the curtain on Kirchnerism, this does not mean that Peronism is over. After all, Argentines have been at this particular junction before. When Macri became president in 2015, his election was widely viewed as representing much more than one more anti-Peronist intermission between acts of the recurring Peronist drama. It was seen to mark the inauguration of an era of straightforward neoliberalism, which would transform Argentina both economically and socially — the instauration in the country, however belatedly, of the Reagan-Thatcher revolution. Certainly that was what Macri thought he was going to put in motion. Instead his government was an abject failure. Macri seems to have believed that a non-Peronist government, combined with what he perceived as his own special bond with the international financial world — which, as the son of an extremely rich Argentine entrepreneur, was his home ground — would lead to widespread foreign direct private investment. The problem was not only that his economic team was not up to the job but, far more importantly, that the structural problems of the Argentine economy, above all the fact that the country had been living beyond its means for decades, would have been very difficult to address even in a country far less politically divided. As a result, the only important investments outside the agribusiness sector during Macri’s presidency were what in the financial markets are referred to as “hot money,” that is, speculative bets by hedge fund managers who are as happy to sell as to buy, rather than by more economically and socially constructive long-term investors.

     

    In 2018, three years into his administration, with a currency crisis looming that was so severe it would almost certainly have led to the Argentine state becoming wholly insolvent, Macri turned as a last resort to the International Monetary Fund. There were echoes in this of the loan facility that the IMF had provided to the government of Fernando de la Rúa, which was in power at the time of the 2001 crisis. But this time, instead of demanding radical austerity measures and then cutting off the loans when these were not fulfilled to the Fund’s satisfaction, the executive board of the IMF, prodded by the Trump administration, which viewed Macri with particular favor, voted to grant Argentina a loan of fifty-seven billion dollars — the largest in IMF history. As the institution would itself later concede, in doing so the board broke its own protocols and failed to exercise the most basic due diligence. Both Peronists and non-Peronist leftists are convinced that the IMF’s goal was simply to prop up Macri’s government, and this is certainly what impelled the Trump administration to intervene. But even if one takes at face value that, in the words of a subsequent IMF report, the institution’s main objective had been instead to “restore confidence in [Argentina’s] fiscal and external viability while fostering economic growth,” this is not at all what occurred. “The program,” the IMF report concluded, “did not deliver on its objectives.” What actually happened was that “the exchange rate continued to depreciate, increasing inflation and the peso value of public debt, and weakening real incomes, especially of the poor.” 

     

    It was under the sign of this disaster that the Argentine electorate voted Macri out of office, installing Alberto as president and Cristina as vice-president. One might have thought that, as the undisputed leader of Peronism, she would have run for president herself. Certainly, this is what the overwhelming majority of hardcore Peronist militants had hoped and expected. But Cristina soon made it clear that she believed herself to be too controversial a figure to carry Peronism to victory in 2019. This did not lessen the expectation among most Peronists that Cristina would be the power behind the throne. But to their shock and indignation, Alberto refused to bow to these expectations. At the same time, he was too weak to put through a program of his own, if he even had one. But the disaster that was his government should not be allowed to obscure just how dismal a failure Macri’s presidency had been. And it is in the context of these successive failures, first of the neoliberal right and then of Peronism, that Milei’s rise to power must be understood. He was the explosion that followed the implosions. 

     

    The implosions were sequential. The first round of Argentine presidential elections includes a multitude of candidates, and this often leads to a run-off between the two top vote-getters. In that first round, it was the turn of Juntos por el Cambio’s candidate, Macri’s former Security Minister Patricia Bullrich, to come up short. Bullrich campaigned almost exclusively on law-and-order issues, and there is no doubt that her emphasis on these questions resonated with an Argentine population increasingly terrified by the dramatic rise over the past decade of murder, assault, violent robberies, and home invasions — the regular disfigurements of present-day Argentine society. On economic questions, however, she more or less followed a standard neoliberal line, but in a manner that did not suggest that she had any intention of shaking up the political status quo. To the extent that she spoke of corruption, Bullrich pointed exclusively at Kirchnerist corruption, whereas corruption in Argentina is hardly restricted to Peronism. 

     

    To the contrary, every Argentine knows full well that their entire political class, regardless of party, faction, or ideology, has nepotism, corruption, and looting all but inscribed in its DNA. As Argentina’s most important investigative journalist, Hugo Alconada Mon, wrote in his definitive analysis of the phenomenon, The Root of All Evils: How the Powerful Put Together a System of Corruption and Impunity in Argentina, “Argentina is a country in which prosecutors don’t investigate, judges don’t judge, State supervisory bodies do not supervise, trade unions don’t represent their rank and file, and journalists don’t inform.” Given all that, Alconada asked, “Be they politicians, businessmen, judges, journalists, bankers, or trade unionists, why would any of them want to reform the system through which they have amassed illegitimate power and illicit fortunes with complete impunity?” To which the answer, of course, is that they don’t, as Alconada has illustrated in his most recent investigation of phantom jobs on a massive scale within the legislature of the province of Buenos Aires. As the evidence mounted and a prosecutor was named, the Peronists and anti-Peronists in the legislature finally found something on which they agreed: a blanket refusal to cooperate with the investigation.

     

    The reality is that the only way not to see how corrupt Argentine politics are is to refuse to look. But since Peronist corruption is generally artisanal, that is to say, a matter of straightforward bribes in the form of money changing hands or, at its most sophisticated (this innovation being generally attributed to Néstor Kirchner), of officials being given a financial piece of the companies doing the bribing, Peronist corruption is the more easily discerned. When pressed, those Peronists who do concede that there is some corruption in their ranks still insist that it has been wildly overstated by their enemies in what they generally refer to as the “hegemonic media.” In any case, one is sometimes told, too much attention is paid to what corruption does exist. “What should be clear,” wrote the Peronist economist Ricardo Aronskind in Horacio Verbitsky’s El Cohete a la Luna in the wake of Milei’s victory, “is that the decency or indecency of a political project cannot be defined by certain acts of corruption that arise within it, but rather by the great economic and social processes that it sets in motion in the community: its improvement and progress, or its deterioration and degradation.” In other words, Peronists are not just on the side of the angels, they are the angels, and their blemishes should not trouble anyone that much.

     

    Whether Peronist corruption is worse than that of their neoliberal adversaries is a separate question. There is no doubt that the alterations to the tax code that Mauricio Macri made over the course of his administration made it possible for his rich friends to make fortunes, thanks both to the insider information they seem to have secured and to various forms of arbitrage they were able to execute in the currency markets. And this “white-gloved” variety of corruption is thought by many well-informed observers to have yielded profits at least as large as, and possibly larger than, whatever the Kirchners and their cronies have been able to secure for themselves. Milei’s promise to sweep away La Casta, the entire political elite, made no distinction between Peronist and anti-Peronist corruption. It was this pledge to sweep it all away — one that he routinely illustrated in photo-ops by waving around a chainsaw — combined with a far purer and more combative version of neoliberalism than Bullrich could muster that allowed Milei to see her off in the first round. Nothing that she could possibly have said could have competed with Milei’s rhetoric of rescue, as when he shouted at rallies, “I did not come here to guide lambs, I have come to wake lions!” No one was surprised by the result — by some accounts not even Bullrich herself. 

     

    Against the pollsters’ predictions, however, it was Massa, not Milei, who came in first. This was somewhat surprising, since on paper Massa should never have stood a chance. For openers, Massa had been minister of the economy from July 2022 to the 2023 elections. It was on his watch that inflation had reached triple digits. Massa was more than just an important figure in Alberto’s government. Owing to the looming threat of hyperinflation, the economy eclipsed all other issues during the last eighteen months of Alberto’s presidency. The president did not know the first thing about economics, and by late 2022 he had become a kind of absentee president who, with the exception of some foreign-policy grandstanding that largely consisted in paying homage to Xi, Putin, and Lula, seemed to prefer to play his guitar at Olivos, the presidential retreat. As a result, almost by default, Massa became in practical terms the unelected co-president of Argentina. This gave him enormous power, but it also meant his taking the blame for the government’s failure to do anything to successfully mitigate runaway inflation.

     

    And yet Milei seemed so unstable personally and so extreme politically that many Argentines, particularly in the professional classes, in the universities, and in the cultural sphere — which in Argentina, as virtually everywhere else in the Americas and in Western Europe, all but monolithically dresses left, including in the Anglo-American style of identitarianism and wokeness — allowed themselves to hope that Massa would pull off the greatest comeback since Lazarus. They drew comfort from how many civil society groups, not just in the arts but also in the professions, the trade unions, feminist groups, even football associations, were coming out in support of Massa, presumably because they assumed, wrongly as it turned out, that these groups’ members would vote the way their leadership had called for them to vote. Even seasoned journalists and commentators with no love for Massa took it as a given that he was in command of the issues that confronted Argentina in a way that Milei was not. In contrast, it was generally agreed that Milei was barely in command of himself. After his televised debate with Massa, the gossip among political insiders was that Milei’s handlers were less concerned that Milei had lost the arguments than relieved that he had not lost his cool and given vent to the rages, hallucinations, and name-calling that had been his stock-in-trade as a public figure since he burst onto the Argentine scene in 2015.

    2 

    To describe Javier Milei as flaky, as some of his fellow libertarians outside of Argentina have done as a form of damage control, is far too mild. This is a man who in the past described himself as a sex guru, and who now publicly muses about converting to Judaism. During the trip he made to the United States shortly after his election to speak with Biden administration and IMF officials, Milei took time out to visit the tomb of the Lubavitcher rebbe Menachem Mendel Schneerson in Queens. At the same time Milei is a proud devotee of the occult, confessing without the slightest embarrassment to his habit of speaking to a dead pet through a medium. He also claims to have cloned the pet in question, an English mastiff named Conan, to breed the five English mastiffs that he now has, four of which are named after neoliberal and libertarian economists. Cloning, Milei has declared, is “a way of reaching eternity.” It often seems as if he lives entirely in a world of fantasy and wish-fulfillment that closely resembles the universe of adolescent gamers. And he is, in fact, an avid devotee of cosplay, though here Milei’s fantasy life is harnessed to the service of his economic views: at a cosplay convention in 2019, Milei introduced his character General Ancap, short for “anarcho-capitalist,” the leader of “Liberland, the country where no one pays taxes.” The life mission of General Ancap is to “kick the Keynesians and the collectivist sons of bitches in the ass.”

     

    Milei’s public persona seems designed to reflect these wild convictions and obsessions. He has a mop of unruly hair, which he proudly claims never to comb. He thinks it makes him look leonine, and being a lion leading a pride of Argentine lions is one of Milei’s most cherished images of himself, and one that proved to resonate deeply with the Argentine electorate. In public appearances Milei always seems to be on the verge of a hysterical tantrum, and often he explodes into one. And Milei’s economic program can seem at least as wild as his fantasy life. He has promised to address the collapse of the Argentine peso by scrapping the national currency and replacing it with the dollar; to abolish the central bank; to privatize many industries, from the national airline to the national oil company; and to open up the Argentine market to foreign competition while at the same time abolishing such protectionist boondoggles as the electronic assembly plants in Tierra del Fuego in Argentina’s far south, where, free of federal tax and VAT, some Argentine entrepreneurs have made fortunes assembling electronics from Korea, Vietnam, and China that could just as easily and much more cheaply have been assembled in the factories that produced them in their countries of origin. Milei has spoken of offering people educational vouchers as an alternative to public education. Some of these proposals, such as educational vouchers and privatization, are straight out of Margaret Thatcher’s playbook. Milei has said that he greatly admires her, which is an odd stance for an Argentine politician to take regarding the British prime minister who repelled the Argentine effort to seize, or regain rightful control of, the islands known depending on your flag as the Malvinas or the Falklands.
At least Milei has backtracked from his proposals to allow the commercial trade in human organs, even if he did so reluctantly, indignantly demanding in an interview: “That the state should want to enslave us is allowed, but if I want to dispose of some parts of my body…What’s the problem?”

     

    The irony is that in large measure it was the widespread belief within the Peronist establishment that Milei was too extreme to be electable that proved crucial to Milei’s successful quest for the presidency. Born in 1970 into a lower middle-class family in Buenos Aires, educated in parochial schools and trained as an economist, Milei worked for a number of financial institutions before becoming chief economist at the privately held Corporación America, the holding company of Argentina’s sixth richest man, Eduardo Eurnekian, whose fortune largely rests on Aeropuertos Argentina 2000, which manages Argentina’s thirty-five largest airports as well as roughly the same number internationally. He is also rich from media interests and agribusiness. By most accounts, it was Eurnekian who in 2013 launched Milei’s media career. Why Eurnekian did this remains unclear. According to one version, Eurnekian had become deeply dissatisfied with the policies of Cristina’s government and wanted a mouthpiece to express this dissatisfaction in the noisiest way possible in the media. According to another, it was Milei himself who had decided that he wanted to do more than work behind the scenes at Corporación America and chose to seek a more public role. Whatever the case, it hardly seems likely that it was a coincidence that the media outlet which hired Milei, first as an occasional contributor in 2013 and then, a year later, as a full-time columnist, was the website Infobae America, a news outlet in which Eduardo Eurnekian’s nephew Tomás held a twenty-percent stake. The first article that Milei published in Infobae was titled “How to Hire a Genius.”

     

    Four years later, in 2017, halfway into Macri’s term in office, at the time when the government was facing its first major economic crisis, Milei became an important media figure. He moved from print to television and social media and came to national prominence as a pundit — a success that was largely due to his propensity for provocation, his seemingly unslakable thirst for on-air confrontation. At the time of Eurnekian’s break with Macri, then, Milei certainly did not require Eurnekian’s, or, indeed, anyone else’s help to secure public exposure. In 2021 a magazine named him the fourth most influential person in Argentina. (The top spot went to Cristina.) He was giving forty speeches a year preaching his anarcho-capitalist gospel, initially in person and then, during the pandemic, over the internet. He even mounted a theatrical piece to proselytize his ideas, El consultorio de Milei, in which, in the form of a psychotherapy session, he offered a whirlwind tour of the previous seventy years of Argentine economic history from a libertarian perspective.

     

    Milei once said of himself: “Take a character out of Puccini and put him in real life and you have me.” Tosca or Scarpia? A bit of both, I think. But what seemed to most turn him on in his media appearances was hurling very un-Puccinian insults in all directions. “The state,” he said in one interview, “is the pedophile in a kindergarten where the little kids are chained up and covered in Vaseline.” Pope Francis is one of his favorite targets. Over the years Milei has called him an “imbecile who defends social justice” (coming from Milei’s lips “social justice” is a slur), and a “son of a bitch preaching Communism,” and “the representative of the Evil One on earth.” When Horacio Rodriguez Larreta, a major political figure of the center-right, was head of government of the City of Buenos Aires, Milei characterized him as “a disgusting worm” whom he could “crush even while sitting in a wheelchair.” He confessed on a talk show that one of the ways he lets off steam is by throwing punches at a mannequin across whose face he has glued a photo of Raúl Alfonsín.

     

    In the midterm elections of 2021, which inflicted a stinging defeat on Alberto Fernández’s government, Milei was elected to Diputados, the lower chamber of Congress, as a member of a new libertarian political party called Avanza Libertad, which counted among its members another libertarian economist, José Luis Espert. Once seated, Milei remained true to his libertarian creed, even opposing a law to expand Argentina’s congenital heart disease treatment program for newborns on the ground that it could well lead to more opportunities for the state to “interfere in the lives of individuals.” The Avanza Libertad delegation included not only libertarians such as Espert and Milei himself, but also figures of the hard right such as Victoria Villarruel, an activist lawyer notorious in human rights circles for what they view as her denial of the crimes of the military dictatorship of 1976-1983, accusations that she has denied, though not very convincingly. Avanza Libertad never made any secret of its hard-right sympathies. Milei, Espert, and Villarruel were enthusiastic signatories of “The Madrid Charter: In Defense of Freedom and Democracy in the Iberosphere,” which had been produced by La Fundación Disenso, the think tank of Spain’s extreme conservative Vox Party. (The current Italian prime minister Giorgia Meloni and the rightwing Chilean politician José Antonio Kast were also among the signatories.) Santiago Abascal, Vox’s leader, has said that his feeling for Milei is one of “brotherhood.” Milei subsequently broke with Espert, renaming his own party La Libertad Avanza. He did not, however, break with the hard right, and few Argentines were surprised that it was none other than Villarruel to whom he turned as his running mate in the recent election.

      

    In the speech that he made at his inauguration — a ceremony one of whose more curious sideshows was Cristina giving the finger to journalists as she was driven onto the grounds of Congress, and then chatting amiably with Milei while ignoring Alberto completely — Milei spoke of the political class in Argentina as having followed “a model that considers it the duty of a politician to oversee the lives of individuals in as many fields and spheres as possible.” Milei’s solution was the classic libertarian one: Liberland! The state simply needs to get out of the way. Only when this happens can the free market flourish, and people live up to their potential to be “a pride of lions and not a flock of sheep.” Quoting the Spanish libertarian economist Jesús Huerta de Soto, whose work has greatly influenced him, Milei declared flatly that “anti-poverty plans only generate more poverty, [and] the only way to exit from poverty is through more liberty.” And he went on to contrast the Argentina of the pre-World War I boom, when it was one of the richest countries in the world, when it had been, as he put it, “the beacon of the West,” with the Argentina of today, drowning in poverty, crime, and despair. This, he insisted, was because the model that had made Argentina rich had been abandoned in favor of a suicidal collectivism. “For more than a hundred years,” he thundered, “politicians have insisted on defending a model that has only generated more poverty, stagnation, and misery, a model that holds that as citizens we are here to serve politics rather than that politics exists to serve citizens.”

     

    The problem with this account of Argentine history is that it is spectacularly selective. It leaves out too much to be quite credible. Yes, Argentina’s pre-1914 economic golden age did indeed make the country rich — if proof were needed, the grand bourgeois architecture of Buenos Aires attests to the prosperity. It was also an era in which the foundations of mass education in Argentina were laid and great advances took place in public health. But the Argentine middle class kept expanding prodigiously until 1930, sixteen years after Milei claimed its prosperity had been shattered by statism, and it began to expand again, if anything more prodigiously, between 1946 and 1955, during Juan Perón’s first two terms as president. Nor would you know from Milei’s account that universal male suffrage was not introduced in Argentina until 1912, during the presidency of Roque Sáenz Peña with the support of Hipólito Yrigoyen, the leader of la Unión Cívica Radical (UCR), or that the first time it was applied was in the presidential election of 1916, which brought Yrigoyen to power, while Argentine women did not get the right to vote until 1947, under Perón. Nor would you learn that when, in his inaugural address, Milei eulogized Julio Argentino Roca as “one of Argentina’s best presidents,” he was paying tribute to a figure who, in 1878, while still General Roca, two years before his election as president of Argentina, had led the genocidal Campaña del Desierto against the indigenous peoples of the south of the country, and for this reason is one of the most controversial figures in Argentine history. Milei’s revered Roca had fiercely opposed not just universal suffrage, but also the secret ballot.

     

    When Milei speaks of the disastrous model of the past hundred years, what he is actually talking about is the Argentina that began to take shape in 1916, when Yrigoyen became president. What Milei certainly has in mind is Yrigoyen’s creation of a state-owned national railway system and his partial nationalization of Argentina’s energy resources. What he leaves out is not only Yrigoyen’s role in establishing democratic practices such as the secret ballot and universal male suffrage, but also the reform and expansion of the university system and the creation of the first retirement funds for workers. In Milei’s version of the twentieth century in Argentina, the great movements of social reform that led to the legitimization of trade unions, paid holidays for workers, progressive tax regimes, and so on, were all wholly unnecessary, because the markets eventually would have sorted all these problems out to everyone’s satisfaction. Forget communism: on this account, even social democracy was an act of collective self-harm. Milei said as much in the speech he gave at the World Economic Forum in Davos five weeks after his inauguration. Whatever they might call themselves, he declared, “ultimately, there are no major differences” between Communists, Fascists, Social Democrats, Christian Democrats, neo-Keynesians, progressives, populists, nationalists, and globalists. All are “collectivist variants” that hold that “the state should steer all aspects of the lives of individuals.” Milei made no bones about his ambition to redeem not just Argentina but the entire Western world. “We have come here today,” he told his tony audience, “to invite the Western world to get back on the path to prosperity” and to warn it “about what can happen if countries that became rich through the model of freedom stay on this path of servitude.”

     

    To accept this, one would have to believe that sometime around a century ago the Argentine people lost their collective mind and that it took the advent of Milei to bring them back to their senses and guide them back to the prelapsarian days of the late nineteenth century. As for Peronism, it was either completely imposed on the population, or it remains one of history’s most extraordinary examples of the madness of crowds. The historical facts say otherwise. What Milei is pleased to call the liberal order but what was in fact an oligarchic one was first seriously challenged when Yrigoyen was elected, and it was finally broken only in 1946 with the rise of Perón. Indeed, Perón’s own personal ascent from obscurity was at least partly due to the fact that the earthquake that reduced most of the Province of San Juan to rubble in 1944 was widely understood at the time, both regionally and nationally, as demonstrating the bankruptcy of the old social order, while Colonel Perón’s successful effort to rebuild the province offered a glimmer of a different and better Argentina. (Mark A. Healey’s brilliant study, The Ruins of the New Argentina: Peronism and the Remaking of San Juan after the 1944 Earthquake, shows this in painstaking detail.) The reason that so many Argentines remained loyal to Peronism for so long was because of its accomplishments, beginning with that of upholding the rights of the working class. (Good libertarian that he is, Milei would doubtless respond that classes do not have rights.) Most Argentines know this, which is why, despite all its failures, they have turned back to Peronism again and again.

     

    The fact that, despite their knowing this, Milei could not only defeat Massa, but best him by eleven points, a landslide in Argentine political terms and the worst defeat that Peronism has suffered in seventy-five years, is eloquent testimony to the anger about the present and the despair about the future that now grips so many Argentines, including those who in the past voted Peronist. Out of Argentina’s twenty-three provinces as well as the city of Buenos Aires, Massa won only three — the Province of Buenos Aires (not to be confused with the city) and the provinces of Formosa and of Santiago del Estero. And even in the Province of Buenos Aires, Massa won by only two percentage points, whereas to have had any chance at winning the presidency he would have had to win there overwhelmingly. Still, to view Milei’s election as a vote against the government rather than a vote for him is to misunderstand what has taken place in Argentina.

     

    Milei’s appeal is both deep and wide, and it cuts across all social classes. The antipathy of the cultural and professional elite towards Milei was always visceral, and with his election the elite is now dumbfounded as well. They are appalled by the persona that he projects, with its signature aggression and its rigidly Manichaean division of Argentine society into la gente de bien, the good people, and the corrupt elite of La Casta. It is not that the cultural elite does not believe in emotion in politics; the left is hardly immune to the excitements of populism, and the Argentine left is profoundly sentimental in its vision of the Argentine people, above all, as the Italian political scientist Loris Zanatta has argued, in their adherence to the element in liberation theology that assigns a unique moral worth to the poor. The problem, as the Kirchnerists and their supporters in the cultural left demonstrated time and time again during the campaign, is that they do not believe in the sincerity of any emotion that they do not themselves feel. The pundits, for their part, underestimated Milei because they viewed the campaign too rationally, observing, for example, that Massa won the arguments in his public debate with Milei, as if populist politics cared a whit about arguments. For many poor people, especially poor young men who work in the informal economy, what resonated so strongly in the debate, as it had throughout Milei’s campaign, were not the policy details that Milei mentioned but the promise of rescue that he offered.

     

    The issues of trans rights and even human rights that now have captured the moral imagination of the cultural elite and of the professional managerial classes — in Argentina just as much as in the United States and the United Kingdom — are simply not of much concern to the poor and the economically stranded, and cultural issues such as whether or not Milei would cut off government funding to the Argentine cinema are irrelevant to them. As for the memory of the dictatorship: during the campaign it was repeatedly brandished by the left as a powerful argument, morally and politically, against Milei and Villarruel. This was not surprising, since it is perhaps the Argentine left’s deepest conviction that democracy and memory are inseparable. The fact that the election took place on the fortieth anniversary of the return to democracy only reinforced these feelings. But then Milei won, and they groped for answers. Some of these were of a generational high-handedness that was painful to read, as when the Lacanian Peronist (only in Argentina!) Jorge Alemán argued that “the superimposition of the images that erupt on social media has generated an empire of the ephemeral that threatens the dynamic of emancipatory narratives to the point of rendering them inoperable.” For the Peronist sociologist Fernando Peirone, La Libertad Avanza “had succeeded in tuning in to those web-based narratives that renounce argument, memory, and the idea of truth itself.” What the Argentine cultural elite still has not been able to come to terms with is that in human terms it is the purest wishful thinking to imagine that a twenty-year-old voter in the slums who is too young to have experienced the dictatorship would make this historical memory the determining factor in their vote.

     

    Just as the elections of Trump and Bolsonaro were to the American and Brazilian cultural elites, Milei’s election is quite incomprehensible to most of the Argentine elite. They certainly don’t know anyone who voted for him! But one must be careful here. The conventional left-liberal view of Milei, which is that he is simply an Argentine version of Trump or Bolsonaro, and that his victory is one more feather in the cap of global rightwing populism, alongside Meloni in Italy or Orban in Hungary, is at best a half-truth. Like Trump, Bolsonaro, and Orban, Milei is a savior figure. What distinguishes him from the rest of the Black International is his appeal to poor voters, many of whose opposite numbers in the United States or Brazil would never dream of voting for Trump or Bolsonaro. In this, curiously, for all the talk on the left that Milei is a throwback to the military dictatorship, and to the economic policies of the junta’s economics minister José Alfredo Martínez de Hoz, he actually resembles Perón in 1946 more than he does Perón’s enemies in the old oligarchy. In his many impassioned defenses of Peronism, Horacio Verbitsky has often said that while there are those who associate Peronism with fascism, this is false because, as he once put it, “fascism is a movement of the bourgeoisie against the working class, [whereas Peronism] represents the working class, its rights, and its forms of organization.”

     

    Although he would doubtless reject the suggestion, Verbitsky’s view helps to explain what distinguishes Milei from Trump, Bolsonaro, and Orban. While Peronism in 2024 may still be said to represent the Argentine working class, it represents only the organized working class. And with the exception of a few leaders of Peronist-leaning social movements, the most consequential of whom is Juan Grabois, Peronism has failed to address the problems, and thus has failed to sustain the allegiance, of the informal workers who now make up forty percent of Argentina’s labor force. The unionized working class may still feel that Peronism speaks in their name (though given the corruption of the trade union leadership, even this is debatable), but the workers in the informal economy most definitely do not. As Carlos Pagni has said, there is a “crisis of representation” in Argentine politics. Milei’s great strength has been his ability to make so many informal workers feel represented. He has done so by speaking to their sense of abandonment by government, to their belief that the entire political class, Peronist and non-Peronist alike, is in it only for the money and the power, and that if there is any hope of change they must put their faith in him. Again, Trump and Bolsonaro inspire the same hope in their constituencies, and Milei’s margin of victory — 55% — is roughly the same as the one Bolsonaro received when he was elected. But if you are a Trump-supporting Evangelical you are not someone who would ever cast your ballot for Cornel West, and Bolsonaro lost badly among Brazil’s poorest voters, whereas Milei has triumphed among the poor as well — precisely those voters whom a Juan Grabois or an Argentine Trotskyist would think of as their core constituency.

     

    Having ridden to office on a wave of emotion, how will Milei actually govern? La Libertad Avanza was largely a vehicle for his presidential hopes and it fielded very few candidates for election to Congress. It is true that Milei came into office believing that he could impose a part of his program by issuing a series of what in Argentina are called DNUs, the Spanish acronym for “Necessary and Urgent Decrees.” And ten days into his term that is exactly what he did. But instead of promulgating a handful of decrees whose urgency could indeed be justified to the public at large as an immediate response to the economic crisis, as he had justified the devaluation of the Argentine peso by one hundred and twenty percent that he had put into effect almost immediately after taking office, Milei issued a “Mega Decree” that included most of the major elements of the remaking of the Argentine economy and Argentine society that he had promised to enact if elected. Milei’s omnibus DNU either radically altered or annulled three hundred and sixty-six economic regulations, structures, and — most controversially — laws, removing all rules governing the relations between landlords and tenants, loosening labor laws (above all employment protections), limiting the right to strike, scrapping export controls, rescinding government subsidies for public transport and fuel, and voiding regulations preventing the privatization of state enterprises. At the same time, the DNU cut off subsidies to the public broadcasting system — widely considered, much in the way National Public Radio is viewed by many in America, to skew to the left in its reporting and its commentary — as well as to other cultural institutions.

     

    Even many who in principle support the major changes that Milei wants to effect are disturbed by the fact that he so clearly seems to be trying to transform Argentine society without Congress’s approval. There is even some question as to whether what Milei is doing is constitutional, and despite his threat to call a plebiscite should Congress not allow the Mega-DNU to go through, there is little doubt that the issue will eventually be decided by Argentina’s Supreme Court. Milei’s decision to go forward is straight out of the standard populist leader’s playbook for dealing with a recalcitrant legislature: claim that his election is all the justification that he needs to go forward with his program, in defiance of Congress if necessary. He is the leader who incarnates the people. Whether this strategy will work is another matter. For now the DNU has largely taken effect, though one change in labor law has been blocked by a court. But if both the Senate and the Diputados reject it, its provisions will have to be rescinded. Meanwhile, day by day the omnibus legislation is being whittled down in order for the government to secure the votes it needs so that at least some of it is passed. Even if Milei makes many more concessions than he seems currently prepared to countenance, it is by no means clear that he will succeed. Milei won a crushing victory for himself, but he had no legislative coattails and the Peronists are close to having a majority in the Senate. To put the matter starkly, having come into office pledging to do away with La Casta, Milei now finds himself depending on it, not just in order to be successful but even to survive in office.

     

    Many Argentines were surprised that Milei asked for so much all at once. They should have known better. Never in his career has Milei gone for less than everything he wanted. He believes that the only way to transform Argentina is to do as much as possible right away. It is the anarcho-capitalist equivalent of shock and awe. Whether Milei, who thinks that he is Argentina’s savior, can withstand the fact that Congress has other ideas is anyone’s guess. More threateningly still, the Peronist trade unions and left social movements are beginning to take to the streets. They will not topple Milei tomorrow. But if in a few months inflation has not begun to come down and economic conditions continue to worsen, then all bets are off. In that case, there could be a social explosion that would chase Milei from office in much the way De La Rúa was chased from office in 2001. But whether it is in the halls of Congress or in the streets, what is clear is that, as Carlos Pagni has observed sardonically, for Milei la realidad avanza.

     

    Curricular Trauma

    A number of years ago — sometime in the decade between the financial crash and the advent of Covid — I found myself at the hotel bar of the Modern Language Association’s annual conference (in Vancouver? Boston? Chicago?) arguing with a professor about modernism. Or rather, about modernism as a field of current scholarship in literary studies. I wondered why the distinguished English department in which this professor taught, having failed for several years to replace a retired modernist, did not have a single senior scholar of modernism on its faculty. “Well,” he said, “it’s hard. It doesn’t help that modernism has a problem with anti-Semitism, racism, and fascism.” 

     

    What could he mean, I wish I’d asked him, by characterizing the first literary period in the West in which Jews were absolutely central to the literary establishment as having a “problem with anti-Semitism”? Or the movement that included the Harlem Renaissance and, elsewhere, négritude and affiliated developments as “racist”? The period in which modernism flourished was, of course, one of world-historical ideological mobilization; fascism, racism, eugenics, and so on carved out their vicious territories across the face of the world and the world of the mind. But it was also the period of suffragettism, of varieties of national determination both liberatory and murderous, of Bolshevism. Did the professor just mean that some of the most important modernists were themselves fascists and anti-Semites? That is true, of course. Did he mean that in some cases modernist aesthetics and fascism drew on common idioms, made use of common sources? That is true, too, but so did modernism and anarchism; and so did modernism and socialism; so did modernism and a certain species of secular liberalism.

     

    I didn’t raise those obvious objections because I was so surprised. My interlocutor was an intelligent and sensitive scholar and normally not susceptible to this sort of faddish moralizing. Had I quarreled, I am sure he would have withdrawn the sloppy charge. But the fact that this judgment, in a moment of thoughtlessness, emerged so easily is testament to its status as a piece of common sense in the field. Thoughtless remarks can be very revealing. The professor was guilty of a reflex, a sort of professional hiccup—a regurgitation symptomatic of the extent to which the study of literature has become the terrain of a certain brand of vaporous politics. 

     

    “Politics.” Or pseudo-politics. There are of course serious ways of approaching modernism’s fascism problem. One of the greatest scholars of modernism, the Marxist critic Fredric Jameson, wrote a book, Wyndham Lewis: The Modernist as Fascist, exploring with unrivaled rigor the modernism of fascism and the fascism of modernity, including aesthetic modernity. Even if every modernist were a Wyndham Lewis or an Ezra Pound — and every modernist was not — the movement would demand scholarly attention. Totalitarian warts and all, it made us into who we are. Modernism remains, as T.J. Clark, another brilliant Marxist critic, wrote, “our antiquity … the only one we have.” To have abandoned it as a hiring field will, in the not-too-distant future, deprive college students of access to one of the keys to any understanding of the present. 

     

    Modernism’s curtailment on pseudo-political grounds is an aspect of a much larger phenomenon: the precipitous decline in the prestige of the humanities in general, a decline which has seen precincts of study that seemed vital a mere fifteen years ago reduced to husks of their former selves. The situation is too well known to need detailed rehearsing here. Christopher Newfield, in his presidential address to the Modern Language Association in 2023, summed things up neatly, at least as far as literary studies is concerned. “Our profession is in trouble. We all know this. We can all instantly name the troubles that we must fix: a shrinking academic job base, in which tenurable faculty with academic freedom are replaced by a reserve army of precarious workers; declining numbers of majors in literature and language fields; program closures and consolidations; and very small quantities of research funding for literature and language scholarship and for the humanities more broadly.” The fundamental question bedeviling all analyses of this grim situation can be posed simply: Are the causes of the crisis external to the humanities, or do they reflect something gone awry in humanistic study itself?

     

    No satisfying answer should insist that either the external or the internal side of the problem is decisive. Plainly the catastrophe is, as they say, overdetermined. Declining state-level funding is obviously a problem from the outside. Administrative failures to limit adjunctification reflect problems both external (declining funding) and, to a degree, internal (a symptom of the collapse of faculty self-governance within the university). Declining major rates are more confusing. Do they reflect a sudden incapacity to feel the relevance of inherited cultural and interpretive traditions, a large-scale societal shift that the critic Simon During has called a “second secularization”? Or have students simply been warned — by parents, teachers, the broader culture — on financial grounds against taking classes in things that otherwise interest them? Or has the nature of the humanistic enterprise itself, at least as institutionalized in our colleges, changed in ways that have rendered it no longer appealing to students? 

     

    No one can be certain, and the question has polarized the academy’s diagnosticians. Many observers — conservatives especially, but also some disenchanted liberals and leftists — point to what they see as disciplinary moral orthodoxies run amok, while others — including most academics themselves, at least when speaking publicly — emphasize a combination of state-level defunding and our society’s philistinism, its larger hostility to humanistic inquiry on both political and instrumental grounds. The fact is that no one knows exactly how to distribute blame; anyone who pretends the crisis is univariate is propagandizing. The current intellectual culture of the humanities cannot be responsible, all on its own, for the material crisis that academic humanists face.

     

    Yet that is hardly a reason to exempt humanists and the contemporary practice of the humanities from scrutiny. One place to begin might be the departmental statements, posted usually to a department’s official website, that became common after the police murder of George Floyd in 2020. Princeton’s English department, for instance, announced the following: “We confront literary study’s long history as a prop to the worst forces of imperialism and nationalism, and its role in underwriting crimes of slavery and discrimination. Such a history compels us to continually reflect on how we read and teach literature and to actively dissociate literary studies from their colonial and racist uses.” Its Classics department declared simply that “the history of our own department bears witness to the place of Classics in the long arc of systemic racism.” The University of Chicago’s English department, in a since-removed statement, asserted that “we believe that undoing persistent, recalcitrant anti-Blackness in our discipline and in our institutions must be the collective responsibility of all faculty, here and elsewhere.” Harvard’s English department explained that “we, as a department, as scholars, teachers, students, and staff are committed to holding up our community, past and present, to the light of recent events, specifically the movement for racial justice. We will take what immediate actions we can, as we also consider the courses, colloquia, and public talks and reading the Department is running now or planning for the future.” 

     

    Departmental statements of political commitment are not in themselves expressions of academic activity, of research and teaching (although the Harvard one comes close). But they nevertheless crystallize something about the state of scholarship itself, which, although rarely as nakedly programmatic, has indeed embraced a portfolio of activist commitments not in any obvious way entailed by humanistic objects of study. In the immediate aftermath of the murder of George Floyd, those commitments were expressed primarily with respect to the struggle against racism, although that was more incidental than essential; racial injustice is by no means the only area of political concern in the humanities.

     

    Such disciplinary activism might seem to suffer from what John Guillory of the English department at NYU calls literary criticism’s endemic “overestimation of aim.” (There is no reason to restrict the diagnosis of overestimation to literary criticism; other fields in the humanities are similarly afflicted.) Guillory’s distaste for what he sees as a species of moralistic grandiosity compels him to uncharacteristic polemic: “The absurdity of the situation should be evident to all of us: as literary studies wanes in public importance, as literature departments shrink in size, as majors in literature decline in numbers, the claims for the criticism of society are ever more overstated.” These delusions of grandeur are themselves compensatory — consoling fantasies of relevance hallucinated defensively against an encompassing political economy in which the humanities simply do not count. Stefan Collini puts it this way: “As academic scholars in the humanities feel increasingly vulnerable in societies governed by the imperatives of global capital, so they seek to ratchet up their ‘relevance.’”

     

    For the defense of relevance, we might look to Caroline Levine, the David and Kathleen Ryan Professor of Humanities at Cornell University. Originally a scholar of narrative and the author of a brilliant study of suspense in the Victorian novel, Levine has in recent years turned her talents toward literary scholarship’s potential to produce political change. Her career is something like an allegory for the trajectory of the larger field. In her most recent book, The Activist Humanist: Form and Method in the Climate Crisis, as well as in a series of articles over the last several years, Levine has attempted to make the case for what she calls the humanities’ “affirmative instrumentality,” an activist orientation meant to counter humanistic study’s bias toward “anti-instrumentality,” the for-its-own-sakeness of the study of literature, music, art history, and so on. The practical payoffs of such affirmative instrumentality remain uncashed; perhaps they will be worth the trouble. 

     

    But the interpretive payoffs are here, and they are not always encouraging. Consider “In Praise of Happy Endings: Sustainability, Precarity, and the Novel,” a recent essay in which Levine sets out to “help to guide political action in the climate crisis” by offering close readings of a series of novels. Levine begins by asserting, no doubt correctly, that critics of literary fiction from Flaubert to the postmodernists have tended to promote “indeterminacy” over what she calls “happy endings.” With this rather minimal account of the status quo in hand, she states her thesis: 

     

    I want to suggest that this insistence on openness has reached its limit. We live in an age of acute precarity. As neoliberal economics undoes hopes of secure work and as fossil fuels radically disrupt long-standing ecosystems, the most urgent threat facing people around the world is not oppressive stasis but radical instability — intensifying poverty and food insecurity, flooding, forest fires, violent conflicts over water, the rapid extinction of species. The poorest and most vulnerable communities are already struggling to meet basic needs, including adequate nutrition, clean air and water, and stable shelter. This condition of mass precarity is poised to worsen as climate catastrophes are fueling ever more massive displacements. We are used to thinking of entrenched norms and institutions as the worst engines of oppression, but right now most of the world’s species are threatened most by rapid and multiplying forces of unmaking and devastation. Open-endedness is not primarily a source of pleasure and excitement for those who are afraid they will not be able to find their next meal or a safe place to sleep. Predictability and security have been bad words for artists and intellectuals, but they have also been much too easy to take for granted.

    There is a sociohistorical claim embedded in this call to dogmatism that would go something like this: in the heyday of long modernism, from the 1850s through the 1970s, indeterminacy was valued because the world was so stable that privileged litterateurs, just to feel alive, titillated themselves with fantasies of collapse and dispersal. Stated that way, the thesis is obviously untrue. The most iconic works of modernism — The Waste Land, say — were responses to an “acute precarity” at least as extreme as anything experienced today. In any event, the moral and political norms proposed here are all the more bathetic because they are so absurdly impotent in the face of the political threats that Levine enumerates almost lovingly. She goes on: 

     

    In our own moment, in fact, the open-endedness so beloved of artists and humanists has become eerily consonant with domination and exploitation. Authoritarian leaders on the right have been as much in love with rupturing rules and norms as any avant-garde artist. In the name of freedom, the Trump administration rolled back more than ninety-five environmental regulations, including those banning fracking on Native lands, drilling in wildlife preserves, and dumping toxins in waterways. Climate denialism is itself oddly consonant with the humanistic value of open-endedness. 

     

    Again, the historical thesis collapses on inspection. Hitler also broke with legal norms and rules — is that an argument against Kafka’s open-endedness? A worrying confusion of realms has occurred here. The result is a criticism that cannot tell us much either about the urgent political questions it adopts as a kind of camouflage or about the aesthetic objects it purports to explain. 

     

    Levine’s bottom line is political prescription, which she awkwardly hitches to her interest in narrative form. “Rather than reserving praise for those fictions that deliberately leave us hanging,” Levine says, we should admire novels “which conclude by combining the pleasures of material predictability and plenty with workable models of social relations that might help guide political action in the climate crisis.” The kind of novel we need to read, study, and teach now, in other words, will both foster the right sentiments toward climate change and offer imitable tactics for activism. 

     

    Does Levine really believe that her interest in novelistic happy endings — on its own, a fine topic for a course or a book — can help inculcate activist dispositions that will mitigate climate emergencies? How could she? What would that even mean? “In Praise of Happy Endings” ends not with literary-critical analysis at all, but with an exhortation to the reader, presumed to be a professional scholar of literature, to get involved in climate politics. “As a literary studies scholar, you might claim that this kind of political action is not your real business, fine for your spare time but outside of the sphere of your professional responsibilities.” Not to worry — there’s a place for you in the movement. “Maybe you would consider participating in the struggle to stop the financing of fossil fuels and agribusiness. There are probably groups already working to divest your campus or alma mater. There is also a growing movement to push banks and retirement funds to divest from fossil fuels — perhaps including your own.” And so on. Levine suggests community projects; reading about carbon offsets; protesting fracking; partnering with indigenous people; campaigning against deforestation in the Amazon. All of her suggestions are reasonable. None has anything at all to do with the professional identity of her addressee, the skeptical “literary studies scholar” who might feel that after all environmental activism is one thing and literary studies another. The disappearance of any literary-critical vocabulary from her hortatory peroration gives the lie to the whole project. Levine has many good, if rather obvious, ideas about how citizens might fight for the environment. But none involves novels, and none is specific to scholars. 

    Although Levine was trained as a scholar of the Victorian novel, her recent work falls under the banner of “ecocriticism,” which is now a field in its own right. Indeed, young scholars are more likely to get hired doing something ecocritical than working on Victorian poetry. Along with a range of other subfields — disability studies, queer studies, critical race studies of various kinds — ecocriticism (or “environmental studies” or, sometimes, “the environmental humanities”) has emerged as one of the heavy hitters among what John Guillory calls the “subfields” that paradoxically “dominate over the fields.” Fields in literary study, as Guillory explained in a conversation with Matt Seybold on the podcast American Vandal, have traditionally been defined by “periods — and definitely connected with the category of literature, with literature as object.” But then something happened. An “exhaustion with the basic organization of the discipline into literary periods” set in, and in response, in quest of new energy, literary studies turned to “ecocriticism, postcolonial studies, critical race studies, various kinds of queer studies.” In sum: “A discipline that doesn’t appear to have any core mission, anything that holds it together.” 

     

    This centrifugal propensity is exacerbated by the fact that the subfields, as Guillory says, “have an inherently interdisciplinary tendency, so you have a discipline in which the field concepts seem to be exhausted, and the subfield concepts have taken over, but the subfield concepts have actually depended on interdisciplinary enterprises.” The excitement of the subfields comes at a cost: the overall coherence of the larger field of literary studies. That cost is multiplied when the subfields themselves entail strong normative political commitments, as they often do. An essay such as “In Praise of Happy Endings” betrays its own desperate awareness of what has been lost.

     

    Political conservatives, of course, have long charged that the activist entailments of the subfields threatened to degrade the fields into sites of political agitation, shorn of scholarship. In 1992, dismissing the still-young subfield of queer theory, an unsigned dispatch in The New Criterion from a meeting of the Modern Language Association gave that concern bigoted point: “This year’s convention took the obsession with bizarre sexual subjects to new depths.” For the author of this dispatch — Hilton Kramer? Roger Kimball? — the focus on what he calls “‘alternative’ sexual behavior” was a facet of the larger politicization of the humanities, the importation of the language of the “political rally” into “the offices and classrooms of our most prestigious educational institutions.” Queer theory eventually won out against such attacks; it is now not so much a subdiscipline as simply part of the field. The species of literary and cultural criticism that the formidable Eve Kosofsky Sedgwick called “anti-homophobic inquiry” wore its politics openly, and a sense of political mission was surely an element in the force and the energy that its early proponents brought to their work. But it did not reduce the fields it influenced into nothing other than politics. Sedgwick was a committed activist, but you cannot imagine her ending an academic article the way Levine ends “In Praise of Happy Endings,” with a few hundred words of practical advice about, say, which organizations to make charitable gifts to. 

     

    Queer theory won out, too, because it attracted a large number of extraordinarily talented practitioners who, like Sedgwick, were the beneficiaries of a highly conventional literary-critical training. Sedgwick herself noted her “strong grounding in New Critical close reading skills.” This classical and untransgressive training they turned to good account in pursuing new scholarly directions. And finally — perhaps most importantly — queer theory won out for another reason: because it was never difficult to see its connection to literature, whose traditional objects offered a ripe field for its concerns. I suspect that today even Hilton Kramer and Roger Kimball could not read Henry James, or Proust, or for that matter Shakespeare, without, in spite of themselves, being influenced in what they notice by the tenured sexual radicals they loved to hate. 

     

    Will the same ever be true of the ecocritical turn? “It seems like a little bit of a falling off,” Guillory said on American Vandal. “It’s very hard to say what literary study is doing on behalf of the climate crisis by talking about a particular poem by Wordsworth. Not that there’s not a relation between Wordsworth and the environment, because we rediscovered the whole subject of nature in Romantic literature by way of the climate crisis. But what is it doing? What is that criticism doing for the climate crisis?” Here we must admit that the concerns of Kramer and Co. had a certain prescience. Questions like Guillory’s can be asked of almost all of the currently fashionable subfields claiming some version of Levine’s “affirmative instrumentality.” Either the theoretical frame is inadequate to the political mission — as in ecocriticism — or else an achievable mission is bathetically disproportionate to the theoretical armature in which it is cloaked. My favorite recent instance of the latter is the professor of geography at a SUNY school who offered a lecture on “Decolonizing your Garden.” Attendees would “learn to enjoy the benefits of a chemical-free garden using local hardy native species.” The Home Depot near me offers the same service, although they don’t call it decolonization. 

     

    As Guillory’s comments suggest, the local concerns of much ecocriticism are perfectly valid. But that does not mean they are valid as subfields. They would be better described as topics. As topics rather than subfields, they might flexibly inform curricula and research without deforming them. As a topic, ecocriticism might use current ecological theory to bolster a focus on the mediation of the natural world by poetic or literary genres. But as a subfield, Guillory implies, ecocriticism risks subordinating literary study to an intrinsically less coherent category. And it does so in the name of activist instrumentality, even though it plainly lacks the capacity to achieve its pragmatic goals. That is probably not a recipe for disciplinary longevity, let alone for healing the planet. 

     

    Traditionally, English literary studies has been organized in two principal ways: by period (“Elizabethan,” “nineteenth century”) and by genre (“poetry,” “the novel”). Often but not always, a faculty position consisted of some combination of period and genre (“We seek a scholar of the English literature of the eighteenth century with particular expertise in its poetry”). There are a few murkier designations, too, such as “modernism” and “Romanticism,” which name both periods and aesthetic tendencies. Then there are, or there used to be, a handful of single authors considered so important that they constitute fields in themselves: in English, Shakespeare first of all; then Chaucer, Milton, and, distantly, Spenser. (Of these, only Shakespeare still survives as a hiring category.) While other major figures — Dickens or Wordsworth or George Eliot or T.S. Eliot, say, and more recently Thomas Pynchon or Toni Morrison or John Ashbery — have long enjoyed robust scholarly communities, there have almost never been faculty positions devoted exclusively to them. Finally, there were the small number of subfields proper, which tended to demarcate minority literatures in a particular period (like “twentieth-century African American literature”).  

     

    This was a broadly if never entirely coherent system, with rough parallels in other humanistic fields. But in the last decade, it broke down almost completely. The rudiments of the old categories persisted — or at least some of them did; others, like “modernism,” flickered out of existence entirely — but the real energy was in the subfields, like “ecocriticism.” The proliferation of subfields can look bewildering and baroque to an outsider, both weirdly random and oddly specific. A perusal of some recent job advertisements gives the flavor. Skidmore seeks a medievalist “with research and teaching experience in the field of premodern critical race studies,” especially one who might bring “an intersectional approach.” The University of Saint Joseph, in Connecticut, wants to hire a scholar of Renaissance literature who can also teach “gender studies, postcolonial studies, and/or social media writing.” Colby College needs a scholar of pre-1800 British literature (a capacious swath!) and is “especially interested in candidates whose work engages the environmental humanities or premodern critical race studies.” (Perusing these ads, one notices how common is the “or” linking two utterly disparate subfields, as though the hiring committee couldn’t help but admit to the arbitrariness of the whole business.) Santa Clara University would like to hire a medievalist or early modernist with expertise in “culture, race, social justice, and Digital Humanities.” Vanderbilt is looking for an English professor “whose research engages the study of race, colonization and decolonization, diaspora, and/or empire”; period is unspecified, but “substantive investments in periods prior to 1900” are welcome. 

     

    The Vanderbilt posting represents the completion of the takeover of the field by the subfields. Period is left vague; genre goes completely unmentioned. Both are replaced by a list of linked historical topics. The uninitiated might wonder: Why is this a job in literature? The answer has to do with the political commitments, implicit in some cases and explicit in others, of the subfields, commitments that are much less obviously entailed by the older period or generic categories. This is not to imply that “race, colonization and decolonization, diaspora, and/or empire” are somehow invalid fields of academic inquiry. They are urgent topics for political, sociological, and historical analysis. But they are also, in the context of a literary studies department, frank political signals. Less sophisticated than Vanderbilt, Santa Clara gives the game away by including “social justice” in its litany of subfields. 

     

    This is the ground — Levine’s “affirmative instrumentality” is as good a name as any for it — on which the logic of solidarity and activism behind the statements urgently posted to departmental websites after the murder of George Floyd meets up with the internal concerns of the subfields. But the activist energy of the subfields has also received reinforcement from another source: the fiscal motivations of administrations. Not infrequently, the subfields take a set of priorities geared toward industry and retrofit them for the left political commitments of humanities professors. (One sees this in the funny wobble between “gender studies, postcolonial studies, and/or social media writing” in the Saint Joseph ad.) As Tyler Austin Harper not long ago observed, “If the humanities have become more political over the past decade, it is largely in response to coercion from administrators and market forces that prompt disciplines to prove that they are ‘useful.’” “Largely” is an overstatement, but such misguided utilitarianism is certainly a factor at play. One reason “environmental literature” became, seemingly overnight, a ubiquitous hiring field in literary studies and other humanities disciplines is that it seemed to mesh with the top-down mission of many universities. In 2021, for instance, the University of Oklahoma released a “Strategic Research Framework” naming four areas of concentration: Aerospace, Defense, and Global Security; Environment, Energy, and Sustainability; The Future of Health; Society and Community Transformation. Faculty were told to align their departmental missions with these areas. Scanning such a list and finding no obvious perch for poetry, drama, or the history of the novel, how is a literature department to justify asking for a new hiring line? A thousand Environmental Literature searches bloomed.

     

    The ascension of the subfields occurred just as the overall field shrank radically. Some older hiring categories — Chaucer or modernism, for instance — simply disappeared. Others, such as Shakespeare, became so reliably attached to one or another subfield, such as premodern critical race studies, that graduate research in the field was effectively directed into one channel by fiat. Or, put another way, Shakespeare became a subfield of premodern critical race studies. The result is that many graduate students determine their research agendas in narrow conformity with a very specific and rather arbitrary set of concerns. The point is not that the study of race in pre-modern England has no valid approaches to offer the study of Shakespeare. The point is that a combination of overall scarcity and the monopoly of the subfields has elevated one or two areas of research into the entire field, practically overnight. 

     

    When, thirty years ago, conservatives such as Hilton Kramer lamented queer theory and other subfields that they considered politically suspect, they laid a trap for the present. They ensured that any concern about subfield-proliferation, especially when the subfields were politically committed in one way or another, would appear politically regressive. But the risk that literature itself would disappear in the sea of subfields was apparent even to one of the most successful subfield-innovators in the discipline, Eve Kosofsky Sedgwick, whose famous turn from “paranoid” to “reparative” reading can be read in part as an attempt to preempt the foreseeable damage, to redirect the energy of the subfield at the point at which it begins to corrode the legitimacy of the object — literature — on which it is founded. 

     

    Consider the case of Shakespeare and premodern or early-modern race studies. When Ayanna Thompson, who teaches Shakespeare at Arizona State University, appeared on the NPR podcast “Code Switch” to discuss anti-Semitism, misogyny, and racism in Shakespeare, she named “three toxic plays that resist rehabilitation”: The Taming of the Shrew, The Merchant of Venice, and Othello. Othello, Thompson says, suffers from “deep racism”; The Taming of the Shrew from “deep misogyny”; The Merchant of Venice from “deep anti-Semitism.” These judgments are of course debatable (the charge against The Taming of the Shrew strikes me as more plausible than either of the others), though it would certainly be irresponsible to suggest that, say, The Merchant of Venice can be taught without any attention to the history of European anti-Semitism. But Thompson doesn’t stop at the uncontroversial insistence that the evocation of historical context is one of the jobs of the English teacher. She argues that The Merchant of Venice is in fact a kind of dangerous text, so dangerous that it should not be read by high school students at all. “You feel,” Thompson claims, “more secure in your anti-Semitism after seeing this play.” Thompson’s larger polemical point is that Shakespeare might actually be ideologically toxic, a kind of poison. “We have a narrative in the West that Shakespeare’s like spinach, right? He’s universally good for you. When, in fact, he’s writing from the vantage point of the sixteenth and seventeenth century.” Not like spinach; like arsenic. 

     

    The reduction of The Merchant of Venice to something like The Jew Süss reflects a deformation of judgment that might be suspected to follow from the eclipse of the field by the subfield, an eclipse that in Thompson’s case is formally announced on her department website, in which she offers this self-description: “Although she is frequently labeled a ‘Shakespeare scholar,’ a more adequate label for Ayanna Thompson is something closer to a ‘performance race scholar.’” And the insistence that literary texts are dangerous, and that students must be protected from their harmful effects, is consonant with a prominent strain of activism among students, which insists that exposure to some literature and art is so wounding that certain works should be either optional or removed from the curriculum entirely. In 2015, an op-ed in Columbia University’s student paper, titled “Our identities matter in Core classrooms,” warned about the “impacts that the Western canon has had and continues to have on marginalized groups”: “Ovid’s ‘Metamorphoses’ is a fixture of Lit Hum, but like so many texts in the Western canon, it contains triggering and offensive material that marginalizes student identities in the classroom. These texts, wrought with histories and narratives of exclusion and oppression, can be difficult to read and discuss as a survivor, a person of color, or a student from a low-income background.” In 2020, a group of University of Michigan students, unhappy about the artist Phoebe Gloeckner’s class on the graphic artist Robert Crumb, summed up their complaints thus: “Prof. Gloeckner should know better than to embed racism and misogyny in her curriculum for the class. This results in curriculum-based trauma.” Nor are such complaints confined to the activist left. Alison Bechdel’s graphic novel Fun Home, for instance, was rejected by a group of Christian students at Duke because, as one of them said, “it was insensitive to people with more conservative beliefs.” 

     

    When students and faculty converge on a conviction that large swathes of literature and art are too poisonous to approach, the disciplines undergirding the various subfields will become anemic indeed. How can you persuade people about the essential importance of art if you make yourself complicit in their fear of it? Skepticism is one of the habits of mind that the humanities classroom is designed to inculcate. But horror, revulsion, the easy and self-congratulatory condemnation of the aesthetic artifacts of the past? The discovery of “trauma” in the contents of the syllabus? The transformation of the representational concerns of the project of canon-revision into therapeutic concerns about safety and harm is not the only face of the crisis of the humanities, but surely it is one.  

    Mercenaries 

    In the summer after the fall of Afghanistan, I received an invitation to speak at CIA headquarters. I used to work as a paramilitary officer at the Agency, and a former colleague of mine attended the discussion. Afterward we went back to his office to catch up over a drink. The two of us had once advised the CIA-backed Counter Terrorist Pursuit Teams in Afghanistan. At their height, the CTPTs numbered in the tens of thousands. During the fall of Kabul, they played an outsized role in bringing any semblance of order to the evacuation after the government and national army dissolved.

     

    As we discussed those dark days and the role that the CTPT had played, my friend reached behind his desk. He pulled out two overhead surveillance photographs blown up and mounted on cardstock. When Congressional leaders had asked about the CTPT’s performance versus that of the Afghan National Army, the CIA had shown them these photographs. Both were taken at Kandahar airfield in the final, chaotic days of the war. In the first image, a C-17 cargo plane sits on the runway, its ramp lowered with a gaggle of panicked soldiers clambering aboard. Their equipment is strewn on the airfield behind them.  “That’s a photo of the last Afghan Army flight out of Kandahar,” my friend explained. He then showed me the second image. It had been taken a few hours later, also at Kandahar airfield. In it, the C-17 is in the exact same position, its ramp lowered, except the soldiers loading into the back are ordered in neat, disciplined rows. There is no panic and they are carrying out all their equipment. “This is a photo of the last CTPT flight out of Kandahar.” 

     

    Having worked as an advisor to both the Afghan National Army and the CTPTs, I was not surprised by this difference. The Afghan National Army, which had systemic issues with discipline and graft, was deeply dysfunctional, while the CTPT was as effective as many elite U.S. infantry units. Unlike the Afghan National Army, the CTPT didn’t report to the Afghan government, but rather to the American government through its CIA handlers. It was a private army.

     

    After the fall of the Taliban in 2001, the newly established government of the Islamic Republic of Afghanistan needed an army. National cohesion was placed at a strategic premium, lest the defeat of the Taliban return the country to the factionalism of the 1990s, in which rival warlords battled one another for primacy. A national army could create national cohesion, or so the theory went. It would seat military power in Kabul and away from the warlords. The creation of an Afghan army designed to recruit from across Afghanistan was viewed as essential to the success of the Afghan national project. At the time we did not focus on the fact that, in Afghanistan’s tribal culture, an ethnically Hazara soldier from Mazar-e-Sharif deployed to Helmand Province would inevitably be viewed by the local Pashtuns as being as foreign as any American.

     

    In those early years, while the Afghan government was building its army, its American allies, led by the CIA, were also hunting al-Qaeda terrorists in Afghanistan. America’s counter-terrorists needed their own Afghan forces and, unconstrained by participation in the Afghan national project, they created a different type of army. The CTPTs would, by and large, be recruited locally, relying on tribal ties to provide the cohesion essential to any unit. If a soldier in the Afghan National Army was found guilty of incompetence, graft, or any other infraction, he was held accountable by a vague disciplinary system in Kabul. In contrast, if a member of the CTPT committed a similar infraction, he was held accountable not only to the military discipline that existed within the unit, but also to those in his tribe, because his noncommissioned officers and officers served double duty: they were also his cousins and uncles. 

     

    Systems of tribal discipline, though effective, were deemed inappropriate for a national army. The concern of the central government — which was not without merit — was that a national army recruited tribally would devolve into a nation governed by tribal armies. In the end, the CIA would play the tribal game even if the Afghan government wouldn’t. The CIA needed Afghan partners of the highest competence to capture or kill al-Qaeda; this was a job for the professionals regardless of how they factored into the broader Afghan national project. For the duration of the war, the CIA’s private army would serve not only as a counterterrorist force; it would also secure hundreds of miles of the border with Pakistan as it gained a reputation as being the most competent force in the country. The army that the Agency built was the one that American policymakers could quietly rely on. It would become a strategic backstop, called upon by four successive presidents, culminating with President Biden when his administration relied on the CTPT to secure critical sections of Kabul International Airport during the disastrous evacuation.

     

    Throughout the history of warfare, private armies have come in many forms and served many purposes on the battlefield. In the case of the CIA-backed CTPT, the mission evolved from hunting down al-Qaeda as a small counter-terrorism force, to capturing and killing the Taliban leadership as a counter-insurgency force, and finally to serving as a border security force that held the country together, at least as long as it could.

     

    Private armies have played a critical role in virtually all wars; the CIA-funded CTPT in Afghanistan and the Wagner Group in Ukraine are only the most recent examples. Broadly speaking, they serve two distinct purposes: they act as a force multiplier that expands the regular military’s capacity, and they create political deniability for both domestic and international audiences. Private armies remain a tool used by democratic leaders and authoritarians alike. They are as old as war itself.

     

    In the game of empire, expansion fuels prosperity, and war sustains expansion. Except that war is a dirty business, one that citizens of most wealthy and prosperous nations would rather avoid. But someone has to fight these wars and, afterward, secure the peace. Whether it’s Pax Americana, Pax Britannica, or Pax Romana, the pax imperii isn’t really peace: it is the illusion of peace sustained by the effective outsourcing of war. This doesn’t impugn an imperial peace — I certainly would have preferred to live in Pax Romana as opposed to the medieval turbulence that followed — but rather it shows how these periods of political and economic stability are sustained. 

     

    During the Roman Republic — before Pax Romana — when the Senate declared war, the two chief officers of the state, the consuls, levied from the citizens whatever military force they judged necessary to accomplish the war’s objectives. Conscription was then executed through a draft of male citizens. The Latin SPQR — Senatus Populusque Romanus, The Senate and People of Rome — stamped onto the standard of every legion embodied the social contract bonding the military with the will of those it served. 

     

    But this contract frayed and then tore apart under the burden of imperial expansion. In 49 B.C., after Julius Caesar delivered to Rome a series of unprecedented victories in the Gallic Wars — building a bridge across the Rhine, even invading Britain — the Senate feared that his popularity would eclipse their authority. They dismissed Caesar from his command. He responded by openly defying the Senate. With the support of his veteran army, he crossed the river Rubicon and marched on Rome. This crossing ended the Roman Republic. An empire was born.

     

    One of Caesar’s first decrees as dictator perpetuo, dictator for life, was a program of social and governmental reforms. Two centerpieces of these reforms were the generous provisioning of land for his veterans, and the granting of Roman citizenship to those occupying the furthest reaches of the nascent empire. Changing the preconditions of citizenship altered the composition of the army, which had profound effects on Rome, the army being its most important institution. Titus Livius, a historian who lived at the time of Caesar, understood the centrality of the Roman military to Roman society, writing that “so great is the military glory of the Roman People that when they profess that their Father and the Father of their Founder was none other than Mars, the nations of the earth may well submit to this also with as good a grace as they submit to Rome’s dominion.” 

     

    Rome fundamentally changed as it entered its imperial period because the composition of its army changed. Most critically, service in the legions would increasingly fall to non-native Romans. At its height, the Roman military protected seven thousand kilometers of imperial borders and consisted of over four hundred thousand men under arms. At such distances from the empire’s center, legionaries who fought for the glory of Rome would typically never have seen Rome. In the later years of the Empire, most didn’t even speak Latin. These non-native legionaries fought the perpetual wars of empire, but their loyalty was often more to their native-Roman officers than to the abstraction of a Rome they barely knew. 

     

    This dissolution of Roman identity within the ranks proved fatal in the Empire’s final years. Those garrisoning the hinterlands became native to those lands, Romans in name only. The burden of empire can only be outsourced for so long. Eventually, the Western Roman Empire collapsed under its own weight. One of its last gasps came in 476. When a barbarian mercenary named Odoacer, who had fought for Rome, was not sufficiently paid, he decided to take Rome for himself. He overthrew the last of the emperors, Romulus Augustulus, and sent the defunct Imperial insignia east, to Constantinople. 

     

    The mercenaries who fueled the empire’s expansion became its undoing. This is not to say that the outsourcing of military service away from Rome’s center was ineffective, even if the slow dissolution of Roman identity within the ranks culminated in the eventual dissolution of Rome itself. Indeed, few nations can boast a military that conquered and garrisoned an empire over a period of thirteen hundred years. For this reason, it comes as no surprise that later empires would appropriate many of the societal and military techniques that Rome pioneered. 

    And none more so than the British.

     

    There was no greater latter-day evangelist of the British Empire than Winston Churchill. He was born at its height, witnessed its dissolution in the catastrophe of the First World War, and led its defense against German fascism in the Second World War. His favorite verse as a child, which he often delivered from memory as an adult, was Thomas Babington Macaulay’s long poem “Horatius at the Bridge,” written in 1842 and based on an episode in Plutarch. This epic, which blends classical poetic forms with those of a British ballad, recounts the story of the Roman officer Horatius Cocles (or “Cyclops,” because he had lost an eye in battle), who defended the Pons Sublicius bridge into Rome from an invading Etruscan army in the sixth century B.C. As the Etruscans advanced, Horatius faced certain death.

               Then out spake brave Horatius,

               The Captain of the Gate:

               To every man upon this earth

               Death cometh soon or late.

               And how can man die better

               Than facing fearful odds,

               For the ashes of his fathers,

               And the temple of his gods?

    That this poem, which recounts the defense of a bridge, resonated with Churchill — or with any young British officer — is no wonder. From the perspective of Britons, their empire was a bridge. It connected their small island nation to a broader world, delivering it outsized wealth, influence, and power. Like the Pons Sublicius, the empire had to be defended at all costs; and, like “brave Horatius, the Captain of the Gate,” Churchill and his contemporaries imagined themselves as its defenders. 

     

    The jewel in the crown of the British Empire was, of course, India. Queen Victoria, who reigned from 1837 to 1901, longer than any of her predecessors and over more territory, was always queen in Britain, but it was her overseas colonies, and India in particular, that elevated her to empress. Parliament voted to grant her that title in 1876, two years after Churchill’s birth. This was a period of significant reform and expansion for the British military, including a rebalancing of the Empire’s reliance on regular versus private armies. 

     

    Like the Romans, the British had increased their reliance on non-British soldiers as their empire expanded. Unlike the Romans, the British did not extend rights of citizenship to the diverse array of cadres that composed their military forces; instead of making them British, they incorporated their imperial charges into the empire as subjects of the Crown. The British East India Company fielded the largest of these private armies, paid for with company proceeds. Indian sepoys (an originally Persian term for a native soldier serving under foreign orders) filled the ranks while British-born officers led them, but those officers held commissions of inferior rank to those in the regular British Army.

     

    The mission of the East India Company’s army was, simply, to secure the interests of the company on the subcontinent. The governance of colonial India is a remarkable example not only of military privatization but also of the privatization of empire. Company rule in India effectively began in 1757, after Lord Robert Clive defeated a force of fifty thousand Bengalis with thirty-one hundred East India Company sepoys at the Battle of Plassey. Company rule extended until 1858, when those same sepoy regiments revolted in what became known as the Indian Mutiny. During this intervening century, Britain’s privatized holdings in India, secured by a private army, delivered vast riches to the empire. East India Company trade accounted for approximately half the trade in the world during the late 1700s and early 1800s. 

     

    The Indian Mutiny represented an existential threat to the British Empire. It was the result of an accumulation of social and economic resentments, as opposed to a single cause. The fighting continued for a year with garrisons of sepoys across the country killing their British officers and their families. By the end of that year the British had regrouped and, along with sepoys loyal to the East India Company, defeated the rebels. Parliament, having concluded that the East India Company was too big to fail, passed the Government of India Act. This placed the administration of India directly in the British government’s hands, creating what became known as the British Raj, which would endure another century. The East India Company continued for a decade, before becoming insolvent in 1874, the year of Churchill’s birth. 

     

    The Indian Mutiny was a debacle. It caused British leaders to question the composition and the quality of their military forces. Between 1868 and 1874, a series of reforms implemented by British Secretary of State for War Edward Cardwell would transform the British Army from a force of gentleman-soldiers to a professional army with a robust reserve that could be mobilized in a time of war. If the Indian Mutiny revealed the dangers of the reliance on private armies, it was the Franco-Prussian War of 1870, in which Prussia routed the Second French Empire, that proved the importance of a military reserve that a nation could rapidly mobilize.

     

    After the Napoleonic Wars, British soldiers served brutally long twenty-year enlistments. Often these soldiers would spend many of those years far from home in the colonies and, upon retirement, older and weakened by prolonged active service, they would be of little military use as reservists. This left Britain without a pool of soldiers to mobilize in wartime. Cardwell’s reforms shortened enlistments to as little as six years, allowing soldiers to return to civilian life but remain in the reserve at reduced pay. This new policy granted Britain access to a large reserve army, should they need it. 

     

    Prior to the Cardwell Reforms, officers in the British Army didn’t earn their commissions; they purchased them, with commissions in the most prestigious guards, grenadier, and cavalry regiments fetching the highest premiums. Cumulatively, British families invested millions of pounds in the purchase of commissions. Those who could not afford them served as officers in colonial regiments, which held inferior standing within the British Army. By the time Cardwell began implementing his reforms, this had created a dysfunctional tiered system. Regiments based in Britain saw far less combat than those based in its restive colonies. Officers of lesser experience and acumen advanced because they held places in prestigious regiments. It was the opposite of a meritocracy.

     

    In the Crimean War, between 1853 and 1856, the British army’s aristocratic incompetence was on full display. Tennyson’s poem “The Charge of the Light Brigade” immortalized their ineptitude. The poet chronicles a pointless charge of light cavalry into heavy Russian guns at the Battle of Balaclava:

               “Forward, the Light Brigade!”

               Was there a man dismayed?

               Not though the soldier knew

               Someone had blundered.

               Theirs not to make reply,

               Theirs not to reason why,

               Theirs but to do and die.

               Into the valley of Death

               Rode the six hundred.

     The Cardwell Reforms, which abolished purchased commissions and created an effective reserve force, reversed decades of dysfunctional military policies. The era of the gentleman-soldier in the British Army was over, as was a reliance on private armies such as those deployed by the East India Company. After the Cardwell Reforms, membership in a prestigious regiment lost much of its allure for a certain type of ambitious officer. 

     

    In 1895, Winston Churchill received his commission as a second lieutenant in one such prestigious regiment, the 4th Queen’s Own Hussars, based at Aldershot. His first order of business after arriving was to negotiate a posting elsewhere, to Cuba, where he heard there was a war on.

     

    An empire, once acquired, must be maintained. It requires the control of territory, and this requires — to use the distinctly American term — boots on the ground. Yet a question naturally follows: whose boots? The reforms that Julius Caesar made to Rome’s legions, or the ones that Cardwell made to the British Army, were both efforts to answer that question.

     

    After the Second World War, when the United States was called “to bear the burden of a long twilight struggle” against communism, as President Kennedy put it in his Inaugural Address in 1961, the question of whose boots would bear that burden became foremost in the mind of American military strategists. Although the ideological differences between the United States and the Soviet Union could not have been starker, their struggle for global hegemony in a nuclear age only existed because both superpowers had acquired an empire at the end of the Second World War. These were empires that each needed to garrison and defend against the other. 

     

    President Kennedy framed the nature of that defense in a speech at West Point in 1962:

    . . . for we now know that it is wholly misleading to call this the “nuclear age,” or to say that our security rests only on the doctrine of massive retaliation. Korea has not been the only battleground since the end of the Second World War. Men have fought and died in Malaya, in Greece, in the Philippines, in Algeria, and Cuba and Cyprus, and almost continuously on the Indo-Chinese peninsula. No nuclear weapons have been fired. No massive nuclear retaliation has been considered appropriate. This is another type of warfare, new in its intensity, ancient in its origins, war by guerrillas, subversives, insurgents, assassins, war by ambush instead of by combat; by infiltration instead of aggression, seeking victory by eroding and exhausting the enemy instead of engaging him.

    At the time Kennedy delivered this speech, he had already authorized a significant expansion of special operations forces within the U.S. military, deploying them as an economical form of civil defense in nations facing Communist aggression. In an official White House memorandum on guerrilla warfare, dated April 11, 1962, in which Kennedy authorized members of the U.S. Army’s Special Forces to wear the green beret, he declared: “Pure military skill is not enough. A full spectrum of military, para-military, and civil action must be blended to produce success.” Kennedy understood that wars of empire are wars of exhaustion, and that conventional militaries tire quickly. His commitment to unconventional warfare as a pillar of national defense was a strategic pivot as profound as those that took place in Britain and Rome. 

     

    After Kennedy’s death, the “other type of warfare” he envisioned in his West Point speech would become a reality in Vietnam and a pillar of American warfare into the next century. Department of Defense concepts such as “foreign internal defense” and “counter-insurgency strategy,” the latter first seen in the Philippines in the early twentieth century and then further developed in Vietnam, would appear again in Iraq and Afghanistan. They rely on the American military to train a partner force that, eventually, takes responsibility for the conduct of the war, requiring far fewer American “boots.” 

     

    This was the strategy of “Vietnamization” that sought to bolster the South Vietnamese military. In Iraq, this was the “Surge” and the “Sunni Awakening,” in which American forces doubled down on training the Iraqi military while co-opting Sunni militias once loyal to al-Qaeda. (In 2006, General David Petraeus authored a book-length manual on counter-insurgency strategy.) In Afghanistan, it was a second surge and reinvestment in the Afghan National Army. What these examples all have in common is an American method of warfare that shifts the burden to an indigenous force, allowing American troops to withdraw. It also shifts the conditions of victory, which become defined less by outcomes on the battlefield. Victory today is defined — this is an extraordinary development — by outsourcing the prosecution of a war and withdrawing our troops. Whether that outsourcing is to a national army, militias, mercenaries, or a blend of all three is important, of course, but not as important as ensuring that our troops return home.

     

    In Vietnam, Iraq, and Afghanistan, this strategy yielded at best mixed results. Vietnam and Afghanistan are wars that America unequivocally lost. With Iraq, it would be difficult to argue that the United States won, but it is equally difficult to go so far as to say that we lost. The Iraqi government that was created after the American invasion endures, and, most critically, the security services that the United States helped train have successfully carried the burden of their own security, in recent years defeating Islamic extremists such as ISIS with little aid from American boots.

     

    When President Biden announced the withdrawal of troops from Afghanistan in a speech on April 14, 2021, he explained that the United States would be more formidable if it focused on future challenges. “Our diplomacy,” he said, “does not hinge on having boots in harm’s way — U.S. boots on the ground. We have to change that thinking.” Although he mentioned China and the pandemic as among these future challenges, he did not mention Ukraine. Few predicted that as America ended its longest war, it would within months find itself enmeshed in an ally’s war of defense against one of its oldest adversaries, Russia.

     

    The war in Ukraine began as a mercenary war. When Russia invaded Crimea in February 2014, it claimed this invasion was the work of separatists. The soldiers who invaded wore no Russian military insignia, causing many to refer to them as “little green men.” During this first invasion of Ukraine, the explicit appearance of Russian soldiers would have cost Putin more politically than he was willing to accept. In the eyes of the international community, as well as in the eyes of his citizens, there was value in deniability. Putin needed to launder his activities in Ukraine. Mercenary armies are very good at doing such laundry.

     

    To lead this mercenary venture, Putin made what seemed like an unlikely choice: Yevgeny Prigozhin, a coarse former restaurateur whom many referred to as Putin’s “chef.” Backed by cadres of battle-tested field commanders, Prigozhin helped to found the Wagner Group in 2014 and presided over its rapid expansion. Not long before Russia’s “anonymous” incursions into Ukraine, Putin had intervened in Syria. After Bashar al-Assad crossed President Obama’s “red line” on chemical weapons with no meaningful response from the American administration, Putin saw an opportunity. If the use of sarin nerve gas against civilians failed to provoke a strong American reaction, the deployment of Russian mercenaries — and, later, units of the regular Russian military — into Syria seemed a less risky prospect. Enter Wagner.

     

    Between 2014 and 2021, Wagner rapidly expanded its size and deployments, delivering Russian boots on the ground in places no Russian boots should be. The approximately fifty-thousand-strong Wagner Group would, in those years, fight in Libya, Ukraine, Sudan, Mali, Venezuela, the Central African Republic, and directly against American troops in Syria at the Battle of Khasham in February 2018. All this while the Kremlin denied Wagner’s involvement and, in some cases, their existence. 

     

    The Wagner Group delivered Putin what he wanted. He had an effective military force that he could deploy anywhere in the world that granted him political cover. When Putin decided to invade Ukraine in February 2022, the Wagner Group would participate, contributing about a thousand soldiers to the invasion, but it never assumed the lead. That job would fall to the regular Russian military, which hadn’t taken on an operation of this scope in more than a generation.

     

    Ukraine’s staunch resistance to Russia’s invasion surprised the world and shocked Putin, who expected to march into Kyiv in a matter of weeks if not days. Second only to Putin’s miscalculation of the Ukrainian people’s resolve was his miscalculation of the capabilities of the regular Russian army. Putin’s authoritarian rule, combined with a corrupt kleptocracy, had hollowed out the once vaunted Russian military machine, leaving it with capabilities on paper that did not translate to the field. It was swiftly exposed as a mediocre and confused force, proving the dangers of might without competence, particularly for any ruler who would stake their own security on raw force. 

     

    Six months into the war in Ukraine, the Russian military was in crisis. After sustaining heavy losses, Putin needed to replenish his ranks. But how could he sell the Russian people on a mobilization for a war that wasn’t even a war, but rather “a special military operation”? There is no more dire threat to a political leader’s power than a failed war. This is emphatically true in Russia, where successive regimes — from the Tsar in the First World War to the Soviets in Afghanistan — can trace their demise to failures on the battlefield. Needing to marshal his military forces while continuing to insulate his political base in Moscow, St. Petersburg, and other affluent urban centers, Putin expanded his reliance on the Wagner Group, increasing its size and allowing its cadres to recruit in Russia’s prisons, adding another fifty thousand soldiers to Wagner’s ranks.

     

    As the war in Ukraine entered its second year, Wagner Group soldiers played a prominent role, serving as the lead assault force in strategically important and bloody battles, such as Bakhmut. Although increased reliance on Wagner prevented widespread and unpopular conscription in Russia, it created a different type of political liability for Putin. Like Julius Caesar’s legions, or Britain’s Indian sepoys, Putin would learn the dangers of vesting military power in private hands.

     

    For months, Prigozhin had been feuding with Russian Defense Minister Sergei Shoigu, Chief of the General Staff Valery Gerasimov, and other senior commanders, characterizing them as incompetents. After Ukraine’s successful Kharkiv counteroffensive, Prigozhin had said of Russia’s senior military leadership that “all these bastards ought to be sent to the front barefoot with just a submachine gun.” Prigozhin never attacked Putin in public. Instead, he framed Putin as the victim of generals who weren’t serving his, or Russia’s, best interests. 

     

    For Prigozhin’s Wagner Group soldiers, he became a charismatic populist leader, airing their grievances against Russia’s military establishment. After Wagner Group forces defeated Ukrainian forces in Bakhmut at tremendous cost, delivering Russia a rare victory, Prigozhin’s rhetoric against the Defense Ministry intensified. Prigozhin stood over the bodies of several dead Wagner Group soldiers in Bakhmut and, in a video complaining about chronic ammunition shortages, declared: “Now listen to me, bitches, these are somebody’s fathers and somebody’s sons. And those scum who don’t give ammunition, bitch, will eat their guts in hell. We have a seventy-percent ammunition shortage. Shoigu, Gerasimov, where the fuck is the ammunition? Look at them, bitches!”

     

    Prigozhin had, at great cost, given Russia a sorely needed military victory. And no achievement vests a leader with political power more quickly and acutely than battlefield success. Military leaders so often become political leaders because military achievements neatly translate into political power. This is one of the great dangers of placing military power into the hands of private military leaders, a lesson which Putin would learn early on the morning of June 24, 2023, when Prigozhin marched his Wagner Group soldiers off the battlefield and back into Russia.

     

    Prigozhin’s mutiny (which might have turned into a coup) failed, with his cadres absorbed into the regular Russian army or banished to private wars in Africa, and with Prigozhin assassinated two months later — yet it serves as another example of the dangers that exist when a nation uses private armies. Whether Caesar crossing the Rubicon or Indian sepoys mutinying against their British officers, the investiture of military power outside of state hands often leads to a struggle. Sometimes the only thing more dangerous than a state’s monopoly of force is the lack of such a monopoly.

     

    From Pax Romana to Pax Britannica to Pax Americana, each story includes distinct yet similar applications of indirect military power. Significant strategic and ethical differences exist between Russia’s reliance on a mercenary force such as the Wagner Group and America’s reliance on a partner force such as the Afghan National Army, or even the CIA-funded CTPT; but all these types of outsourced military units reside at different points along a spectrum that exists to insulate a domestic constituency from the costs of war. 

      

    We should remain extremely cautious of wars fought with this indirect approach. Proxy wars have long been elements of strategy in international great-power competition, but a war fought under our flag by mercenaries is different from a proxy war. A nation that requires private armies to sustain popular support for wars is likely fighting those wars for the wrong reasons. The “good wars” — wars that must be fought and are typically fought for the right reasons — seldom rely on private armies. 

     Who are Ukraine’s mercenaries? There are none. Who are Israel’s? There are none. War, even just war, is a dirty business. Once it starts, no one keeps their hands clean. But be wary of a nation unwilling to do its own fighting; it will often end up the dirtiest of all.

    Observations on Mozart

    As we know, a musical composition does not by nature have the presence of a picture, a sculpture, a novel, or a movie. It lies dormant in the score and needs to be made audible. It is the performer’s obligation to kiss it awake. “Bring the works to life without violating them,” was Edwin Fischer’s advice.

     

    First, I’d like to explain what Mozart means to me. He is certainly not the charmingly restricted Rococo boy wonder that he may have appeared to be some hundred years ago. I consider him one of the very greatest musicians in the comprehensive humanity of his da Ponte operas, in the universe of his piano concertos, in his string quintets (which are matched only by those of Schubert), in his concert arias and his last symphonies. For the pianist, his piano concertos are one of the peaks of the repertoire; they reach from tenderness and affection to the border of the demonic, from wit to tragedy. 

     

    How may we characterize Mozart’s music? Considering the character of a composer, we are prone to assuming that the person and the composer are an equation. Yet the music of a great composer transcends the personal. There is a mysterious contradiction: while the person is clearly limited, the mastery and the expressive force of the great musician is well-nigh unlimited. In his work, Mozart, according to Busoni, presents the solution with the riddle. Among Busoni’s Mozart aphorisms, we find the following: “He is able to say a great many things, but he never says too much.” And: “His means are uncommonly copious, but he never overspends.” To find such a measure of perfection within a great composer is particularly rare as it is usually the followers, the minor masters, who smooth out whatever the great ones have offered in ruggedness and uncouthness. 

     

    Not that his contemporaries noticed such perfection. Time and again, they considered his music to be unnatural, full of unnecessary complication and unreasonable contrast. The exception was Haydn, who pronounced Mozart to be the greatest of all composers. 

     

    We find amazing boldness particularly in late Mozart. Think of the second movement of his F major Sonata KV 533, or the beginning of the development section in the G minor Symphony’s finale. It would be a mistake to exaggerate such passages in performance — they speak for themselves. The transitional bars in the G minor Symphony almost amount to a twelve-tone row — there is just one note missing (the G). 

            

    In my contemplation of Mozart, I like to start not with musical speech but with singing. Once more, Busoni finds the right words: “Unmistakably, Mozart’s music proceeds from singing, which results in an unceasing melodic production that shimmers through his compositions like the beautiful female contours through the folds of a light dress.” Mozart was a cantabile composer. Not unreasonably, he bears the reputation of being the greatest inventor of melodies next to Schubert. (Permit me to mention in this context a third name, that of Handel.) We can only register with astonishment the fact that there were contemporaries who complained about a lack of cantabile in Mozart’s operas. The operatic traits in his piano concertos, the characterizing incisiveness of many of his themes have been frequently noted.

     

    Not without good reason, the pianist András Schiff has called Mozart’s concertos a combination of opera, symphony, chamber music, and piano music. There we imagine a singer singing, but the operatic also includes the characters embodied on stage, the play of temperaments, the lifeblood. The pianist, like the singers, operates within a firm musical frame. Mozart, in his letters, describes his rubato playing as occurring within a firm rhythmical scheme. To be sure, there will also be some modifications of tempo, but they should remain conductible. I know there were scarcely any conductors in Mozart’s time. Tempi therefore had to be stricter, one had to play together, and one could often not expect more than a single run-through rehearsal, if any. Performances in Haydn’s or Mozart’s time must have been quite different from what we expect today — a rather cursory experience, a rough outline of a work without the refinement of a well-studied concert. 

     

    In Mozart’s correspondence, singing and cantabile are frequently mentioned. How does one sing on the piano? Continuous finger legato is not the answer. Singing has to be articulated. We know that the piano literature offers examples of cantabile passages played by the thumb or the fifth finger. Here and elsewhere, the pedal will be of considerable assistance. I know there are pedal purists. I am not one of them. 

     

    Cantabile calls for continuity. Mozart’s father Leopold, one of the leading musical authorities of the eighteenth century, writes in his Treatise on the Fundamental Principles of Violin Playing, known as the Violin School: “A singer who would pause with every little figure, breathe in, and perform this or that note in a particular fashion would unfailingly provoke laughter. The human voice pulls itself spontaneously from one note to the next… And who doesn’t know that vocal music should at all times be the focus of attention of all instrumentalists for the sake of being as natural as possible?” According to Mozart’s father, the bow should remain on the violin wherever there is no real break, so that one bowing can be connected with the next. (Leopold Mozart’s Violin School first appeared in 1756, the year Wolfgang was born. My citations are from the third edition, published in 1787.) 

     

    Evidently, an all-too-fragmented delivery that dissects the cohesion of the music will not do justice to this ideal. Which doesn’t mean that we may ignore Mozart’s articulation marks at will. I have always done my best to respect them all.

     

    Cantabile themes are most likely to occur in the piano’s upper middle range. This is one of the reasons why I prefer the modern piano to a Hammerklavier, notwithstanding the peculiar charms of the older instrument. On our pianos, the sound lasts longer and lends itself better to singing, in case the pianist feels the inner urge to make it sing. 

     

    Already around 1800, Mozart was compared to Raphael, a favourite artist of the nineteenth century, but also to Shakespeare. The German Romantic writers Tieck and Wackenroder enjoyed spreading such ideas. I readily subscribe to the Shakespeare parallel on account of Mozart’s da Ponte operas. The tombstone of the great French writer and melomane Stendhal testifies to his veneration of Mozart and Shakespeare, with Cimarosa added for good measure. 

     

    The equation with Raphael is another matter: it shows how much the perception of this revered Renaissance master, but also of Mozart, has since changed. Here is what Wackenroder writes about Raphael: “It is obviously the right naivety of mind that observes the poorest and darkest lot of human destinies in a light and jocular manner, facing the most deplorable misery of life with an inner smile.” A similar image of Mozart has dominated for quite a while. Nothing seems easier than to launch dogmatic ideas; like infectious diseases, they spread in no time, and remain difficult to eliminate. 

     

    In connection with a performance of Haydn’s Creation, Goethe and Zelter called naivety and irony the hallmarks of genius, a distinction that should be equally valid at least for part of Mozart’s personality. 

     

    I see Mozart’s piano concertos not only as the pinnacle of the genre but as one of the summits of all music. Already in his Concerto in E flat KV 271, written in 1777 when he was twenty-one years old, Mozart gives us a masterpiece of a distinction that he had not reached before and would hardly surpass later. Only with his Sinfonia Concertante KV 364 did he match it. The C minor Andante remains one of his greatest slow movements. In it, Gluck’s loftiness is elevated to Mozart’s heights. 

     

    Among the later piano concertos, the two works in minor keys occupy a different ground. Mozart in minor seems to me almost a changed personality. Both first movements are composed in a procedural manner while elsewhere Mozart prefers to string together his themes and ideas like ready-mades. (He does it with such immaculate seamlessness that it appears it couldn’t be otherwise.) Mozart’s concertos reach from the private to the most official (as in KV 503), and from the loving to the fatefulness of KV 466 and KV 491. 

     

    The significance of his piano sonatas dawned on me much later. Here, another of Busoni’s aphorisms seems to fit: “He neither remained simple, nor did he turn out to be overrefined.” It may still be useful to point out that Mozart is not easier to play because he presents fewer notes, chords and bravura passages. Possibly “the experience of the player has to pass through an infinite” — as in Heinrich von Kleist’s “Essay on the Marionette Theatre” — “before gracefulness reappears.” Artur Schnabel’s remarks on Mozart’s sonatas are well-known: “Too easy for children, too difficult for artists”; or, in a different wording, “children find Mozart sonatas easy thanks to the quantity of the notes, artists difficult due to their quality.” Mozart is so demanding because each note, each nuance, counts and everything lies bare, particularly in the utmost reduction of the piano sonatas. You cannot hide anything. 

     

    In addition, mere piano playing is not enough. While in the concertos the piano sound needs to stand out clearly against that of the orchestra, in the sonatas it frequently acts as a proxy. If we look at his A minor Sonata KV 310 — again a piece from another world — we perceive the first movement as an orchestral piece, the second as a scene from an opera seria with a dramatic middle section, and the third as music for wind instruments. (I have, by the way, frequently heard the first movement of this work played presto, while it bears the tempo marking allegro maestoso. Leopold Mozart, in his explanation of musical terms, characterizes “maestoso” as “with majesty, deliberate, not precipitated”.) The famous A major Sonata KV 331 also appears to me orchestrally conceived. For the “Turkish March”, Mozart would have enjoyed the cymbalom pedal that some Biedermeier pianos offered a few decades later. The Sonata in C minor KV 457, as well as the Fantasy KV 475, show many orchestral features as well: two marvelous, autonomous works which I would not perform consecutively. (Here I know myself in agreement with Fischer and Schnabel.) Thirty years later a number of orchestral versions of both works were published, one of them produced by Mozart’s pupil Ignaz von Seyfried. 

     

    Mozart’s notation offers extremes hardly encountered elsewhere. It reaches from the completeness of the Jenamy Concerto KV 271 to the near-absence of performing instructions in works not prepared for the engraver. In KV 271, there seems to be nothing that needs to be added. In contrast, the overly rich markings of Mozart’s solo works in minor keys pose a challenge to the player’s sensitivity and understanding. The autograph of the superb C minor Concerto, written in a hurried hand, contains a number of variants but also errors and some incompleteness. In contrast to Mozart’s orchestral and chamber music works with their meticulous dynamic markings, the performance of some of the piano sonatas is left to the player’s taste. Here a different kind of empathy is required, an identification with the composer that should enable the performer to supplement the dynamics in Mozart’s style.

     

    The warrant of a Mozart player considerably surpasses that of a museum clerk. Where Mozart’s notation is incomplete, the written notes should be supplemented: by filling (when Mozart’s manuscript is limited to sketchy indications); by variants (when relatively simple themes return several times without Mozart having varied them himself); by embellishments (when the player is entrusted with a melodic outline to be decorated); by re-entry fermatas (which start on the dominant and must be connected to the subsequent tonic); and by cadenzas (which lead from the six-four chord to the concluding tutti). Mozart’s own variants, embellishments, lead-ins, and cadenzas — of which, to our good fortune, he left a considerable number — give the player a clear idea of his room for maneuver: in lead-ins and cadenzas the basic key is never left, in embellishments and variants the basic character is always maintained. (No transgressions like those by Hummel and Philipp Carl Hoffmann!) It is a pity that original cadenzas for the minor-key concertos are missing, his cadenzas in major hardly being indicative, as the different compositional process of these works seems to demand composed-through cadenzas like the one in Bach’s Fifth Brandenburg Concerto that leads from the six-four chord to the orchestra in one sweep.

     

    Embellishments that contradict the basic character of the movement need to be avoided. After all, simplicity and clarity can characterize a piece as well. Sometimes Mozart makes do with highly economical alterations, as in the recapitulations of the first four bars of the scene of KV491’s middle movement. Here to do more would be a misunderstanding. 

            

    In the slow movement of the so-called Coronation Concerto, on the other hand, the extremely simple and frequently recurring first bars of the theme crave embellishment. This movement is hardly more than a vehicle for the player’s extemporizing gifts. The left hand of this movement, by the way, is only sketchily written down. 

            

    In the Adagio of the A major Concerto KV 488, I see two possibilities: to keep embellishment to a minimum, imagining a singer sustaining the long notes, or to draw on the copiously ornamented version published in the Bärenreiter Edition’s critical report. That version seems to have belonged to Mozart’s estate, and though it was written not by Mozart himself but by his esteemed pupil Barbara Ployer, it provides evidence that embellishment may be called for. 

            

    It should be noted that, for good reason, orchestral and chamber music does not feature improvised variants. Why, we may ask ourselves, should the listener and player necessarily feel bored by hearing the same notes played again? When dealing with a melodist of the highest order, wouldn’t it be desirable to play a theme so convincingly that the listener would be happy to encounter it again? Wouldn’t it suffice that the performance is slightly modified? Expertise in elaboration may have the effect that the attention of the listener is focused more on the performer than on the music: see how brilliant I am! Here is another quotation from Leopold Mozart: “Some think that they produce wonders when, in an adagio cantabile, they thoroughly curl up the notes, and turn one single note into a few dozen. Such musical butchers show their lack of discrimination, shivering when they need to sustain a long note or play a few of them in cantabile fashion without applying their ordinary and awkward embellishment.” 

     

    The questions to ask are: What does the work need? How much can a work take? What is harmful to a work? While C.P.E. Bach pronounced embellishing to be one of the crucial features of interpretation, we are hardly able to agree with him today. Simplicity easily gets confused with a lack of imagination. For me, inspired simplicity is one of the most precious qualities. Whoever is interested in moving the listener will not discount it. How many of us would be able to remember a theme and its elaboration after one hearing? Constant embellishment, the ceaseless ambition to prove oneself, can become a burden under which a piece of music is crushed. 

     

    Not everything that is suggested by historicizing performance practice is relevant for us today. We are not people of the eighteenth century. Since my young years, Mozart performances have changed considerably. Some conductors and orchestras, but also chamber musicians and soloists, have adopted things that no one would have imagined some decades ago. Baroque performance practice has spilled over into the music of the late eighteenth century. Among the most remarkable gains was the correct execution of appoggiaturas: countless wrong notes have been righted in opera alone. Some other performance habits, on the other hand, deserve a critical evaluation. The majority of them are occasionally valid. Where they are applied in a dogmatic way, however, resistance is called for. Music is too diverse to be left, in its execution, to simplifying recipes. Each case needs to be judged on its own merits. It may come as a surprise to some people that the effect of a musical performance is no less determined by the detail than by the vision of the whole. 

     

    The point of departure of a performance should not be, in my view, textbooks of interpretation — textbooks that are frequently tied to a certain time and place and rarely originate with the composers themselves — but rather the fact that each masterpiece contributes something new to the musical experience, that each theme, each coherent musical idea, differs from any other. What we need to observe are not just the similarities (they are easier to spot) but the differences, the diversity, the quality that is particular to a musical idea, and exclusively so. As for the technical demands, we can say that a few recipes and established habits will not suffice. The suitable technical solution has to be found for each single case. There is no limit to discovery. 

     

    Among the habits that have been spreading dogmatically is the striking accentuation of two-note groups, with a strongly accented first note and a soft and short second. I should also mention the compulsion to play whole phrases, repeated staccato notes, and even decisive endings of movements diminuendo; the separation of small units – an overreaction against the “big line” that combines such units; the short clipping of end notes; and the separation of final chords, which are played only after a hiatus. Most of these practices have their justification as long as they are not applied in a dogmatic and automatic way. They are sometimes right. Two-note groups with a hard-driven first note, however, are never right. I have heard performances where such accents were routinely exaggerated to such a degree that they sounded like the main purpose of the composition. There are, on the other hand, emotionally charged words like Dio! or Morte! that call for an accent that is expressive without being stiff.

     

    If a number of two-note groups are linked in a chain, the emphasis should stay with the second note. If two-note groups are draped around one note, this central note deserves to be slightly emphasized. Where, in the composer’s notation, the second note is shortened by a rest, the curtailed note needs to stand out! We can still find this kind of notation in Schubert (for example, in the Finale of the A major Sonata D959).

     

    The advice to accentuate heavy beats is so simpleminded that it seems hard to explain how it ever found its way into serious musical textbooks. Franz Liszt called it “discharging potatoes.” The saying, “If I could only afford it, I would print all music without bar lines!”, comes from Artur Schnabel. My own experience tells me that one of the hallmarks of a good Mozart performance consists precisely in avoiding accented heavy beats, and even in counteracting them, provided one isn’t dealing with marches or dances. Besides dancing and stamping, music, after all, is entitled to float. 

     

    The way long notes are executed can strongly contribute to the flavor of a performance. If we look at Baroque instruments, there are several possibilities. The organ will maintain them in continuous loudness. An oboe renders them cantabile. The harpsichord starts each note with an accent. Strings and the human voice can modify their approach. Long notes should often sustain the musical tension and carry it on. If they are abbreviated or played without vibrato, they will hardly be able to do so. A good oboist will often play them most naturally. To play such notes routinely diminuendo, or to start the note without vibrato and vibrate later (a mannerism some singers have adopted as well), I find neither necessary nor desirable. Edwin Fischer demonstrated that long notes can be sustained on the modern piano without noticeable accent. Here the quality of the instrument should also matter. 

            

    There are musicians who believe that a historicizing approach will bring you closest to a piece. What is called for, they claim, is a different way of listening. The listening experience that has formed us is an obstacle that has to be discarded. I am not ready to be that radical. Even if it were conceivable to return a work to its original meaning and condition, would this really solve the problem of performance? 

     

    The most important criteria remain that the piece should impress, move, and entertain. We cannot and should not discard what we have held precious. There are things that I find unacceptable, such as long notes without vibrato — a crass offense against cantabile — or the routine of equating pianissimo with non-vibrato in the belief that the timbre thus produced strikes one as mysterious and uncanny. No, the sound is merely deprived of any color, it is cold and dead. 

     

    A singer who sings naturally will do this with a vibrating voice, and even with very fast vibrato — think of Lotte Lehmann or Kathleen Ferrier. These days, fast vibrato is unpopular. But shouldn’t the whole range of vibrato be at a singer’s or string player’s command? The use of vibrato is documented since 1600, both for singers and string players. 

     

    It is precisely as a pianist that I want to plead for singing. These days, in the wake of historicizing performance practice, cantabile has widely fallen into oblivion. Within my understanding of music, however, singing, at least before the twentieth century, is at the heart of music. 

     

    It doesn’t tally with my experience that old masterpieces only sound beautiful and persuasive when performed on old instruments. There is, on the other hand, music that I wouldn’t like to hear on modern ones anymore. To listen to Monteverdi on the instruments of his time was a liberation, and two Scarlatti recitals by Ralph Kirkpatrick convinced me that the harpsichord is indispensable for this remarkable composer. Permit me to quote Nikolaus Harnoncourt: “How did they do it at the time? What may the sound have been like? There will, however, be hardly a musician who would make a profession out of this kind of quest – I would call such a person a historian. A musician will ultimately look for the instrument that is most useful to himself. I would therefore like to restrict my observations to those musicians who prefer certain instruments for purely musical reasons; those who do it merely out of interest for old facts and circumstances, do not count for me as musicians. They may, in the best case, be scientists, but not performers.” Here one can only agree with Harnoncourt. 

     

    Whoever insists on old instruments should remember that some of the great composers frequently transcribed works of their own or those of other composers, such as Bach transcribing Vivaldi, or a work for solo violin being turned into the famous Organ Toccata and Fugue in D minor. (As there is no contemporary source for this piece, the transcription may have been made by a later composer. In its toccata, the work persists in a single voice, while the fugue takes the technical limits of the violin into consideration.) A violin version reconstructed along these lines was performed by Sigiswald Kuijken. 

        

    Only gradually did musical performances become accessible to a wider public. Concert venues expanded in size. This called for more powerful string and keyboard instruments. While the sound of instruments with gut strings can have a particular charm, it will be too restricted for modern halls. When we look into a score, we would, after all, be happy to hear what is written down. The power of the wind players could easily drown out the strings. Only rarely would the composer have been provided with the multitude of first violins that would have enabled a proper balance with the winds. And it is the first violins that frequently carry the main voice. 

     

    Here is another statement by Harnoncourt: “The composer thinks unquestionably in the sounds of his time and by no means in some future ‘utopias’.” This may be correct in many cases. When, however, I think of most of Schubert’s later sonatas, and of his Wandererfantasie in particular, works that turned the piano into an orchestra, the ideal possibilities of their performance surpass by far what the instruments of his day had to offer. Broadly speaking, I would say this: a composer may compose on, but not for, the piano which graces his music room. 

      

    I have no doubt that Mozart’s music is often better served by a good modern piano than by a fortepiano. Proceeding from his operas, orchestral works, and chamber music, we are bound to notice that the wider range of color and dynamics does better justice to Mozart’s requirements. As a rhythmical model, too, orchestral and ensemble playing should give us a better example than a manner of performance that has lost the firm ground under its feet. Here is Leopold Mozart’s amazing dictum: “The pulse makes the melody: therefore it is the soul of music. Not only is it enlivened by it, it also keeps all its limbs in good order.” 

     

    Permit me, in conclusion, to return to the character of Mozart’s music and quote what I wrote in my younger years in order to specify what Mozart was not: “Mozart is made neither of porcelain, nor of marble, nor of sugar. The cute Mozart, the perfumed Mozart, the permanently ecstatic Mozart, the ‘touch-me-not’-Mozart, the sentimentally bloated Mozart, must all be avoided. There should be some slight doubt too about a Mozart who is incessantly ‘poetic.’ ‘Poetic players’ may find themselves sitting in a hothouse into which no fresh air can enter; you want to come and open the windows. Let poetry be the spice, not the main course. A Mozart who combines sensitivity and fresh air, temperament and control, accuracy and freedom, rapture and shudder in equal measure, may be utopian. Let us try to come near to it.”

     

    Persecution and the Art of Filmmaking

    Iran today may be best known for two things: one of the most repressive regimes in the world and one of the most remarkable cinemas in the world. The coexistence of the two is a conundrum that perplexes many people. How does a country known for ferocious repression of dissent and artistic freedom produce some of the most impressive films in the world? What does this tell us about the relationship between autocracy and art? And how are we to understand Iran’s cinematic community, often a victim of the regime’s policies of censorship and persecution? Are Iranian films political by nature, and if so, what is their politics? 

     

    If one wants to think about art and politics, Iran is a worthy starting place, particularly with the most renowned director in Iranian history, whose films were born not out of engagement, positive or negative, with politics, but out of an emancipatory rejection of politics. Abbas Kiarostami began making a name for himself in film festivals in the 1990s. Then in his fifties, the Iranian director had been making films for more than two decades, and was best known in his homeland for his experimental documentaries. If your interest in cinema went beyond the transitory thrills of film festivals, you would have known that his career predated the Iranian revolution of 1979 by many years. Kiarostami had first practiced his art in the Institute for the Intellectual Development of Children and Young Adults, a remarkable center founded in the 1960s by Farah Pahlavi, Iran’s last queen. But it was Kiarostami’s first post-1979 fiction film that helped him break out on the festival circuit — and in the process, give birth to a new Iranian cinema. 

     

    Made in 1987, Where is the Friend’s Home? had initially struggled to find a global audience. In 1988, it was shown in the Out of Competition section at the Festival des 3 Continents, a smallish affair in Nantes dedicated to films from Asia, Africa, and Latin America. A holdover from France’s tiers-mondiste flirtations of the 1960s, the festival in Nantes helped to “discover” directors from places such as Mali, Thailand, and Tunisia. In 1985 it gave its top award, the Golden Balloon, to Amir Naderi’s The Runner, the first time that a film made in the Islamic Republic of Iran received international accolades. The film told the story of Amiru, a young boy in the southern coastal areas of Iran, who earns a living selling ice and shining shoes for visiting foreigners, while also learning the alphabet and going to night school, dreaming of a life beyond the Persian Gulf. 

     

    The Runner’s eschewing of a linear narrative and its stark formalist imagery made it popular with European festival-goers. Edited by Bahram Beizaei, the dean of Iranian performing arts, the film features the protagonist and his teenage peers in visually unforgettable scenes: running while reciting the alphabet, shouting to drown out the whizz of overhead airplanes; young boys sweating in the deadly heat of the Iranian south, made worse by the burning flames of the nearby oil fields. The fact that the film starred teenaged amateurs was not accidental. The elaborate star system of Iran’s film industry had been virtually wiped out overnight by the revolution, with many of its leading names having their careers destroyed forever, often reduced to a meager living in Los Angeles or other destinations of exile. Even before 1979, censorship had long bedeviled Iranian cinema, making most political issues off-limits. But with the puritanical zeal of the nascent Islamic Republic in place, the circle of exclusion, on and off screen, broadened significantly. Most forms of music were now banned, and women could not be portrayed without a veil (just as they had been forced to don it in real life). How could you make a film in such conditions? Relying on teenage boys and breathtaking scenery was one way. 

     

    Kiarostami’s Where is the Friend’s Home? also relied on teenage boys and breathtaking scenery. But the similarities end there. Naderi’s film was set in his native Abadan, the grand industrial port city on Iran’s southwestern border with Iraq, fabled for its sweltering heat and housing some of the largest petrochemical complexes in the world. The film’s aesthetic is correspondingly rough and austere, filled with abrupt jump cuts reminiscent of Eisenstein. Given Beizaei’s penchant for epics, the film’s visual sensibility is overwhelming to the point of being suffocating. In this sense, it also bore the marks of the 1980s, a harsh decade for Iranians who were suffering from an excruciatingly long and bloody land war with Iraq and a repressive regime that sent thousands of political dissidents to the gallows. 

     

    Where is the Friend’s Home? looks so profoundly different, it could have been made on a different planet. In fact, it was made at the other end of Iran, about a thousand kilometers to the north, in the green temperate fields of Gilan, close to the shores of the Caspian Sea. The film tells the straightforward story of a simple quest. Ahmadreza, a schoolboy, realizes that he has mistakenly taken home the homework notebook of a classmate, who will be punished if he arrives empty-handed at school the next morning. He resolves that he must return it, and to accomplish this mission of schoolboy honor he must traverse the tough rolling hills of the Gilani countryside to get to his friend’s home. There is something so poignant about the child’s quest that even thinking of the film makes me shed a few tears. At a time of war and revolution, when Iranian culture had become so brutal and cruel, Kiarostami created a film whose protagonist was not a stand-in for one ideology or another; he was a simple schoolboy who would defy all authority and brave forbidding terrain to shield his friend from trouble. 

     

    The film’s title was taken from a poem by Sohrab Sepehri (1928-1980), an Iranian poet and painter whose “Eastern” influences and Buddhist inclinations made him an object of derision by the literati of his time. How could you write poems about rivers and the blue sky, a fellow poet once asked Sepehri, when so many people were being killed nearby? Yet that is precisely what Kiarostami had done. It was so refreshing, to his own people in Iran and beyond. While the Iran-Iraq war was raging, and the ayatollahs were consolidating their tyranny, his simple, humane, and endearingly warm tale of a Gilani schoolboy seemed prophetic, as if it wanted to will a different world into being. One way of triumphing over politics is to neglect them. But the film was not just an escapist route out of the tough Iranian reality; it remained profoundly and unmistakably Iranian. It relied on the mood-making of Sepehri’s poetry, the delicate sound of the setar (a Persian lute), the magically simple forms of rural Iranian architecture, and the traditions of figurative Iranian art, all deeply familiar to Kiarostami, who had started out as a graphic designer. In its sovereign indifference to the political situation of its day, it was as if the film was telling us: a different Iran is possible.

     

    Following the showing in Nantes, the film found broader audiences. It won the Bronze Leopard in Locarno in 1989. Two of his subsequent films, And Life Goes On (1992) and Through the Olive Trees (1994), set in the same area of Gilan and now known, together with Where Is the Friend’s Home?, as the Koker Trilogy, were shown in Cannes, with the former winning the top award in the Un Certain Regard adjunct to the festival and the latter being the first Kiarostami film shown in the Cannes competition, the top showcase for global cinema. With these subversive but non-protest films, Iranian cinema entered a new era of filmic accomplishment and international prestige. 

     

    Kiarostami’s simple-looking poetic cinema, mixing documentary and fiction, actors and amateurs, would soon come to be denounced as gimmicky by many in Iran. But foreign festival-goers could not get enough. There was something so deeply humane in his elementary tales; he found universal themes in the most provincial corners of the Iranian countryside. In 1997, with the extraordinary A Taste of Cherry, Kiarostami finally achieved the greatest cinematic honor, the Palme d’Or at Cannes. The film’s plot resembled his previous work in some ways but was also a departure from it. Almost the entirety of this beautiful film takes place inside a car, as a middle-aged man drives around rural areas just outside Tehran, seeking to hire someone for a peculiar task: burying him alive. The first two men he sounds out, a conscript soldier and an Afghan cleric, refuse his macabre proposal, but the third one, a museum worker, agrees, although we never see if he actually follows through on the job. 

     

    In its portrayal of rolling hills and simple conversations about life, A Taste of Cherry was reminiscent of the Koker Trilogy. But its protagonist, a Tehrani intellectual-sounding figure played by Homayoun Ershadi, couldn’t have been more different from the plain-speaking northern schoolboy of Where Is the Friend’s Home? His quest for a strange assisted suicide was markedly different from the schoolboy’s simple act of courageous charity. 

     

    Only three non-Western nations — Japan, Algeria, and Turkey — had ever landed the Cannes trophy before. Despite its increasing global isolation, despite harboring one of the severest regimes of censorship anywhere in the world, Iran had now joined the upper ranks of the cinematic universe. It would now be known to many in the world not just by the angry and menacing frown of the Ayatollah Khomeini and his fellow “mad mullahs,” or by the racist imagery of the American film Not Without My Daughter (1991), but by the colossal humanity of its art films. With his darkly tinted glasses, which masked his eyes and gave him an aura of stylish mystery, the soft-spoken and relaxed Kiarostami became an unlikely ambassador for a country whose revolution had shocked the world less than a generation earlier. 

     

    Events soon showed that Kiarostami’s poetic humanism had been in some ways a taste of things to come. After the war with Iraq ended in 1988 and Khomeini died a year later, Iran experienced many changes. Just as state socialism was collapsing in the Soviet Union, Iran’s loud ideologues of the previous decade went through their own Damascene conversions. A most exemplary case was the filmmaker Mohsen Makhmalbaf. Born in Tehran in 1957, he was active as a teenager in an underground guerrilla group fighting the Shah’s regime. In the early years of the revolution, he was a militantly Islamist filmmaker who made intensely ideological films — and a thug who physically attacked opponents of the Islamic Republic. But in 1990, starting with his film Time of Love, which was shot in Istanbul and is almost entirely in Turkish, Makhmalbaf went through a shocking transformation. His films subsequently told the stories of earthly romances and poetic meanderings, warm tales of young people seeking a better life. After a successful screening at Tehran’s film festival, Time of Love was banned in Iran. The same fate awaited many of Makhmalbaf’s later films. He went on to become a harsh opponent of the Islamic regime, leaving Iran for an exilic life between London and Paris. In 2013 he was a guest of honor at the Jerusalem Film Festival. 

     

    Makhmalbaf’s transformation was echoed in politics by a group of reformist officials who, under popular pressure from disenfranchised women and youth, attempted to take post-Khomeini Iran in a different direction. Was life imitating art? Just a week after Kiarostami received his Palme d’Or, the reformist mullah Mohammad Khatami cruised to a surprise victory in the presidential elections of 1997. A new era of struggle opened in Iran, as partisans of democratic reform fought head-on with the guardians of the theocratic establishment. Although the regime was hard at work trying to train its own cinematic talents, Iranian film remained mostly the realm of the regime’s critics, forever struggling valiantly against censorship. Their efforts took matters beyond where Kiarostami left them. He had never set out to be a political filmmaker, but the very nature of his work made him enemies in the establishment. In fact, his films were not all that alienated the hardliners. After winning the Palme d’Or at Cannes, he kissed the cheeks of the French actress Catherine Deneuve. Upon his return to Iran, angry pro-government mobs were after him for this public sacrilege. Like Makhmalbaf, he would soon be hounded out of his homeland. 

     

    Iranians have yet to win the battle for democracy, despite hundreds of people losing their lives in the fight. But Iranian cinema has only grown in global stature. Kiarostami, who died in 2016, is still a name to be reckoned with, but now at least a dozen more men and women, working inside Iran and outside, have come to represent the country on the red carpets. One of his most worthy heirs is Jafar Panahi, whose first film, the celebrated The White Balloon, made in 1995, was based on a script by Kiarostami. Panahi’s didactic regime-critical cinema and his open support for Iranian freedom movements have earned him repeated bans from filmmaking and landed him in jail. Even from behind bars, he has collected many laurels from festivals such as Cannes and the Berlinale. 

     

    As Panahi is persecuted by the regime, Iran’s cinematic community stands firmly behind him. Support has come from all quarters, including from the filmmaker who has towered over Iranian cinema since Kiarostami, Asghar Farhadi, two of whose films, A Separation, from 2011, and The Salesman, from 2016, have won Oscars for Best Foreign-Language Film, making Iran part of the very small club of countries that have received more than one Oscar in that category. (The only other non-European countries to have gained this honor are Argentina, Japan, and the Soviet Union.) Just as Kiarostami had started a revolution in Iranian cinema, Farhadi led his own. Instead of the heavy dose of allusion, abstraction, and poetic inference that had long defined Iranian artistic movies, Farhadi’s hard-hitting dramas of middle-class life relied on the masterful weaving of plot. If Kiarostami had been an arthouse darling, Farhadi knew how to write a tight script. While Kiarostami had lionized the simple country people living far from the cities, Farhadi’s protagonists were Tehrani men and women, many of whose urban woes could have been set in Buenos Aires or Bucharest. His films depicted and dissected the complexities of Iran’s modern society, a far cry from the attractive rural simplicities of others. 

     

    While Kiarostami’s auteur films were mostly lauded by critics and festivals, Farhadi’s dramas have also found commercial success, making him one of the most sought-after filmmakers anywhere in the world. On a break from Iran, when he decided to make a film somewhere else with an entirely non-Iranian story, he was able to land such stars as Javier Bardem and Penélope Cruz for a story set in their native Spain, and Todos lo saben was given the honor of opening Cannes in 2018. On the very same day, Donald Trump abandoned the Iranian nuclear deal of 2015, dashing any hopes of Iranian-American reconciliation and escalating tensions in the Middle East, just as the saber-rattling of Iranian hardliners had done previously. For Iranians, life and art continue to interact in curious ways. As Iranians watched Farhadi in the Cannes spotlight that night, many wished that the fate of their country would be left to people like their beloved filmmakers, and not to their brutal and inhumane theocrats. 

     

    The artistic successes of Kiarostami and Farhadi stirred the national pride of Iranians everywhere. A steadfastly patriotic people, Iranians followed the festival news as avidly as they would the fortunes of the country’s football team in the World Cup or those of its weightlifters and wrestlers in the Olympics. In recent years, however, as conditions in the country have grown steadily worse, and as Iranians have mourned hundreds of their fellow citizens who were murdered in the anti-government protests of 2017-2018, 2019-2020, and 2022-2023, there has been much less enthusiasm for celebration of any kind, and much less excitement about Cannes or the World Cup. As Iranians have failed to bring down the unpopular Islamic Republic, many bitterly snipe at each other, engaging in an often mindless and increasingly hysterical blame game. Some attack filmmakers such as Farhadi for somehow not doing enough against the regime. What they seem to imply is, People are being killed and all you can do is make a film?

     

    In May 2022, as the thirty-two-year-old director Saeed Roustayi walked up the steps of the Palais des Festivals in Cannes for his film Leila’s Brothers, he had a lot to be proud of. His very presence there was a grand achievement — and a sign of how far things had come for Iranian cinema since the 1990s. Of the twenty-one films being presented in the main competition at Cannes, Leila’s Brothers was the only one that did not hail from a rich country. (All were from the West, except for two films from South Korea.) After the Belgian Lukas Dhont, who turned thirty-one a few days before the festival, Roustayi was also the youngest director in the competition. But given the national mood, many Iranians were no longer lining up to cheer. 

     

    Certainly the sourness and the disillusionment came as no surprise to Roustayi. His films bristle with bitter hopelessness. Throughout the 2010s, as Iran suffered under the twin pressures of American sanctions and the economic incompetence and mismanagement of the ruling theocracy, the middle classes depicted in Farhadi’s films were increasingly pauperized and destroyed. Indeed, it is accurate to say that, with hopes for change dashed time and time again, Iran is going through the most despairing period of its modern history. And this bleak outlook is perfectly reflected in Roustayi’s films. 

     

    His debut, Life and a Day, from 2016, tells the story of Somayeh, a young woman from a poor and troubled family, about to marry an Afghan man of higher means. She wrestles with the decision: should she marry up and escape her dire straits, as urged by one of her brothers (played by Peyman Maadi, a Farhadi favorite)? Or should she stick with her family, as another brother (played by Navid Mohammadzadeh, a rising star) pleads? With its theme of extreme poverty in southern Tehran, Life and a Day can sometimes feel exploitative, and Mohammadzadeh’s acting is at times exaggerated. Yet the film showed great directorial ingenuity and became an instant cult favorite. A monologue by Mohammadzadeh pleading with his sister to stay (“Somayeh, don’t leave”) is guaranteed a place among the most memorable scenes in all of Iranian cinema. Most importantly, Roustayi’s uncompromising and realistic portrayal of poverty was refreshingly unsentimental. This was no schoolboy display of humanity, no treacly preaching on the meaning of love. His portrayal of lives in Tehran is as brutal and dark as those lives actually feel. His films can perhaps best be described as Aye-ye Ya’as, a Persian figure of speech which accuses those with a negative attitude of singing “a song of despair.” With Iran as desperate as it is today, don’t we need to sing such songs? 

     

    With Leila’s Brothers, Roustayi has brought new mastery to the same bleak theme. This film, too, is a song of despair, a cup of tea “more bitter than poison,” to use another Persian turn of phrase. Again, this is not a consolatory cinema, a cinema of false hopes, uplifting tales, or feel-good gimmicks. Yet Leila’s Brothers is not poverty porn, either. In fact, the main family of the story does not quite live in extreme poverty. The movie’s bitterness comes not from an exaggerated portrayal of squalid conditions, but from its sober depiction of the tragic constraints that limit even previously middle-class families in modern Iran. The titular character, played by Taraneh Alidoosti, another Farhadi favorite and perhaps the most impressive Iranian actress of her generation, holds an office job that helps her support much of her family. Leila lives in an old house owned by her pensioner father, Esmayil, alongside her mother and two of her four brothers: the unemployed Farhad, who is a bit dim and spends most of his days watching professional wrestling, and Parviz, who has a large family of five daughters and a wife pregnant with their first boy, but only a meager job as a toilet cleaner in Leila’s office. Of the other two brothers, Alireza works a factory job outside Tehran while Manouchehr compulsively chases get-rich-quick schemes. 

     

    The story starts with two events: the one-year anniversary commemorations of the death of Haj Qolam Jorabloo, the patriarch of Esmayil’s extended family, which created an internecine jostling for the succession; and the closing down of the factory that employed Alireza, thus sending him home to his parents and siblings. The two events contrast the different stakes faced by the father and his most able son. Esmayil lusts after the patriarch position, even though it is merely symbolic, a figurehead role, whereas Alireza has just lost his source of income. But what the two have in common is their pathetic responses to their respective crises. Despite being the oldest surviving Jorabloo, Esmayil is not taken seriously as a contender for the family patriarchy. The position is clearly being prepared for Qardash Ali Khan, a shady gangster-like character with deep pockets. When he so much as dares to suggest himself for the honor, Esmayil is rudely shouted down by others, including Qolam’s son Bayram. Although Iranians are supposedly respectful of their elders, no one cares for Esmayil’s old age. The traditional customs are breaking down; now money rules. Esmayil submits to his fate, yet he lies to his relatives about being feted by the family and wants to force Parviz to name his first son after Qolam. “I shit on all your traditions and customs,” an angry Parviz tells his father, rejecting his request. 

     

    For his part, when the laid-off workers rise up against the closure of the factory in a wildcat riot, Alireza simply runs the other way. As workers fight a pitched battle with security forces, shouting “death to the tyrant,” he changes out of his work clothes and flees the scene. A fellow worker confronts him: “The man without honor and the coward is the one who leaves, because he is sure that his comrades are here and will win back his wages.” Soon we discover that Alireza’s escape was not a matter of cowardice. Having grown up in the chaos of his miserable family, he has adopted an impregnable calm as a strategy for guarding his individual autonomy in hostile circumstances. Like all stoics, he is not a fighter. Later we hear him admonish his siblings as they debate the rights and wrongs in each of their lives: “We are a bunch of cows who haven’t learned how not to interfere in each other’s lives.” Spurned by the collective, he has retreated into himself. Who are we to judge him? 

     

    This question of determining heroes and anti-heroes in the film, this moral question, nags us to the end. It is not hard to see Esmayil as a villain. His slavish devotion to his clan is repeatedly rebuffed by them, but he persists in it at the expense of his own family. We learn that he has shooed away Leila’s only suitor by warning him about her chronic bone problems, in the hope that she would one day marry a Jorabloo. (In reality, no Jorabloo would have her.) In an outrageous act that becomes the film’s central plot point, he secures forty gold coins so that he can offer the top gift at the wedding of Bayram’s son and thus effectively buy the family’s useless position of patriarch. He is clearly being used as a dupe by Bayram, who is exploiting Esmayil’s gullibility to pay for a lavish wedding. Meanwhile Leila wants to use the coins to help start a business that could employ her brothers. Esmayil is a vain father who wastes a fortune that could help his sons — a bad man. Some critics interpreted Esmayil allegorically, as a symbol for Ayatollah Ali Khamenei, the Supreme Leader of the Islamic Republic — a man who has squandered his country’s wealth on his quixotic adventures in the region and beyond. And yet Esmayil is perhaps more pathetic than villainous. We know that he worked hard for close to four decades before retirement, never getting any respect from his family. Can we blame him if he wants to gain some symbolic status in the last years of his life? Besides, as he says at one point in the film, why should he be responsible for the livelihood of his adult offspring who are already in their forties? 

     

    But if we are not sure about the villain, can we at least see Leila as the undisputed heroine of the film? A woman holding it all together in a world of feckless men? The film’s structure and title, and Alidoosti’s breathtaking performance as Leila, have led many to such a conclusion, usually made in a feminist register. In a hair-raising scene Leila slaps her father in the face, showing the intensity of her disgust at his actions, and the magnitude of her own autonomy. Is this the hero slapping the villain? 

     

    Yet if Leila is a heroine, she is an ambiguous one. In conditions as dark as these, no hero can lead to salvation. When she slyly steals the forty gold coins and helps to buy the shop for her brothers, we want to applaud her, even as Esmayil is utterly humiliated in front of everyone at the wedding, brought down from his majestic chair on the stage, losing his patriarchal position immediately. But things get complicated when we learn that Esmayil had given a legal promise to Bayram: if he does not hand over the coins, he will go to jail. It is now time for the humiliation of the indignant siblings: to keep their father out of prison, they must give the shop back and recover the down payment, which includes the proceeds from selling the coins plus all their life savings. 

     

    And things will only get worse. Musings by Trump about leaving the Iran deal lead to massive price jumps for the American dollar and everything else, including the gold coins. Now Leila and the brothers cannot even buy half the gold coins they had in their hands just a week earlier. In a bitter scene that would be painfully familiar to every Iranian, a gold trader tells the siblings: “Each gold coin was sixty million rials yesterday. But Trump gave a speech and it climbed to seventy million. He then tweeted and it became eighty million.” It doesn’t matter, in other words, if you are vain like Esmayil, smart and strategic like Leila, dumb like Farhad, cunning like Manouchehr, or stoic like Alireza. This is a tragedy in its traditional Greek meaning: the circumstances leave no possibility of anything but the predetermined outcome of misery. 

     

    This is not misery for everyone, of course. Throughout the film we see the obscene wealth that surrounds the lives of the siblings. The shopping malls are full of people who lavish cash on shoes and clothes. Just when the siblings realize that the disastrous rise in prices has left them destitute, they see a bunch of well-dressed young women emerge from a massive car: their bags, their clothes, their chic scarves, their pricey dark glasses and, most importantly, their carefree smirks show that they come from a different Iran, from a different world. This is a film with a class-consciousness based in reality; it is also more directly political. In Iran in 2022, the elites who lived (and still live) such privileged lives are almost invariably tied to the theocratic government in one way or another. These are the so-called “rich kids of Tehran” living in an obscene bubble as Rome burns. 

     

    Roustayi is a gifted director, adroit when filming intimate spaces such as Esmayil’s dreary living room, or grand scenes such as the workers facing off against the cops at the factory, or the ridiculous floridity of the siblings and their father dancing at the wedding. The latter led some critics to compare Leila’s Brothers to The Godfather. (Of course there is nothing phony or shabby about Don Corleone’s patriarchy.) The film’s symbolic gestures are never too tendentious or didactic (a problem that plagues the films of Panahi and many others). Roustayi’s narrative skills may not be as good as Farhadi’s, but they suffice to keep us on the edge of our seats for the film’s long running time of two hours and forty minutes. 

     

    My favorite scene is the one in which Leila and Alireza strategize on the roof, contriving a way out of their predicament. He gives her a cigarette, demonstrating that, despite her hiding it for years, he knows that she smokes. (Iranian society places a strangely severe taboo on smoking.) As the only two siblings with their heads properly screwed on, they have a special bond. “I still don’t know,” Leila ponders. “What happened for us to get to where we are? It wasn’t so bad when we were kids.” Alireza replies: “I’ve learned that growing up means that no matter how much time passes, you are not supposed to get what you want.” And in what seems like Roustayi’s rejoinder to Anna Karenina’s storied opening sentence, Leila says: “All the rich people know each other because there are only a few of them around. But the miserable don’t know one another because there are so many of them. Yet they recognize the misery in each other’s faces.” 

     

    In another poignant allusion, Leila fondly remembers Oshin, a Japanese television drama from the 1980s that was hugely popular in Iran. It told the story of the tribulations of a humbly born Japanese woman who rises to the top, from the Meiji period up to the 1980s. Her many hardships reminded Iranian viewers of the wartime Iran of the 1980s. “Do you think Oshin is still alive?” Leila asks Alireza. “If it wasn’t for her story, we would never learn how to cope with misery.” This simple bit of dialogue is in fact quite explosive, since Ayatollah Khomeini once ordered the execution of a woman who had dared to cite Oshin as her favorite role model, as opposed to Fatima, the daughter of the Prophet Mohammad and a holy figure for Shia Muslims. 

     

    But what makes Leila’s Brothers among the best Iranian films of this generation is not its skillful direction and acting, its engrossing plot, or its wise and politically relevant dialogue. It is its courage in depicting unvarnished misery, its unapologetic, relentless account of the disastrous breakdown of modern Iranian society. Its remorseless realism is itself a kind of slanted protest. Brecht famously said that art should be “not a mirror held up to reality, but a hammer with which to shape it.” This, of course, is easier said than done. The pretensions of artists to be what we call “change agents” have resulted in a great deal of terrible art without altering much in the real world. Art is anyway never a substitute for politics. But politics — certainly a politics of opposition to tyranny — must begin in truth, and Roustayi, by producing an unflinching portrait of absolute hopelessness, courageously shows us the abyss that we are in. If there is to be a way up, we have to first see where we are.

     

    A few months after the curtains went down in Cannes, a massive anti-government revolt erupted in Iran. This time the impetus came from the killing of Mahsa Amini, a twenty-two-year-old woman who had been detained by the morals police because her headscarf fell short of the imposed Islamic standards. Tens of thousands took to the streets. Women threw off their shackles — that is, they publicly burned their mandatory hijabs. The regime went on to kill hundreds of people as it faced a months-long rebellion. 

     

    Many in the country’s cinematic community took unprecedented actions of solidarity with the protesters. A self-proclaimed feminist, Alidoosti had never shied away from being outspoken. Now she published a picture of herself without the hijab on Instagram, and brandished a banner with the movement’s slogan, borrowed from the Kurdish movements in Syria and Turkey: Women, Life, Freedom. She was thrown in jail in December, only to be freed in January after posting a huge bail. Other actors and filmmakers suffered similar fates. 

     

    As he had promised in Cannes, Roustayi refused to accept any of the cuts that the Iranian authorities had demanded of Leila’s Brothers as a condition for its public screening. The film was thus denied a permit, imposing a massive financial cost on the producers. It was able to earn around seven hundred thousand dollars from its screenings in France and a meager amount from those in the Emirates. As expected, pirated versions of the film soon circulated everywhere in Iran, turning it into a much-debated work of art. In August 2023, Roustayi, along with his film’s producer, was sentenced to six months in prison for the crime of taking the film to Cannes without permission from the Islamic Republic’s Ministry of Culture and Islamic Guidance. This led to much outrage in Iran and beyond. Francis Ford Coppola and Martin Scorsese were among the many who advocated for his release. He served about nine days, with the remainder of his sentence “suspended over five years,” during which period he would have to take a course in Islamist re-education and stay away from members of the Iranian film community. 

     

    Meanwhile the pro-regime media in Iran continue to attack the film ferociously. One outlet derided it for its damning portrayal of Iranian family structures and went on to suggest a conspiracy. The film, it said, had been “pre-arranged” to collude with the Women, Life, Freedom movement that broke out a few months after its premiere in Cannes. “The ultimate goal of the director is the same as that of the rioters,” it ominously declared. About that, however, the perfidious state media may be right. For the hated Islamic Republic of Iran, Saeed Roustayi’s mirror is as dangerous as any hammer. 

     

    As Iranian filmmakers have flourished in recent years, the conditions of their country have worsened in every way. In Iran, there seems to be a perverse relationship between cinematic excellence and governmental cruelty. No, the cinematic community has not overthrown the government or changed things fundamentally. Nor are most Iranian films directly political or of the journalistic speak-truth-to-power kind. But those who demand that the artist pick up a bullhorn, or a machine gun, forget the roots of Iran’s cinematic triumph. Iranian films have countered a political regime bent on penetrating every aspect of life by centering a force of sheer humanity; by showing that there was more to life than slogans; by demonstrating that truth is not absolute. In a climate of hostility and repression, what has mattered is not what Iranian films do or say, but what they are. And what they are is a zone of freedom, shrewdly and miraculously extracted from the unfreedom that surrounds them. Their very persistence is an act of heroic cultural resistance against a dark regime and its campaign to suppress and deny the richness of Iranian culture, old and new. 

     

    A short poem by Ahmad Shamlou, written in 1983, at the height of the bloodletting by the nascent Islamic Republic, and addressed to his oppressors, wonderfully encapsulates what art can be in times of persecution. 

     

    I am a poison for you, without an antidote. 

    If the world is beautiful, it is singing my praises. 

    O you stupid man,

    I am not your enemy, 

    I am your negation.