After Six Hundred Years, the City of Lausanne Appoints Its First Female Night Watch

    for Cassandre Berdoz

    Cassandra does not sing the fall of Troy,
    where Little Ajax raped her in the sanctuary.
    She does not sing her slaughter by Clytemnestra
            while Agamemnon beat his fists on the ground.

    From the tower of the Cathedral called Notre Dame,
    what Cassandra sings is the passage of time.
    She sings the hour, east toward Jerusalem,
            then south across the lake to Évian,

    through the cardinal points, following on
    the great bell that is called Mary Magdalen,
    that shakes you with the aftermath of its hum-tone.
            And there is another bell than this hourly bell,

    smaller, a bell called Clemency, whose do
    sounds a warning to announce and follow
    calamity, carved with a bas relief to show
            a gowned woman kneeling on a scaffold,

    and an angel stays the sword of the executioner.
    But I swear to you by each cobble and rafter,
    Cassandra, I swear by the narrow red door
            you opened for me in a wall of stone

    melting like a sugar cube in the rain,
    I tell you, long ago I knew someone
    who heard the hours pass, felt the commotion,
            but could not rise to flee from where she lay,

    in your city of staircases and Burgundian Gothic.
            Cassandra, sing to the plume of crematory smoke
    that drifted southwest out over the lake,
            sing to the killing hearth, Cover your fire!

    Sing to the night that marriage is unmerciful.
    With every breath, sing that there is no angel.
    Cup your hands, when the growling bronze is still,
            and sing that the executioner’s arm works in different ways.

    Operation Pacific (1951)

    It was just a B-grade submarine movie (or maybe all sub
           movies are B-grade), a vehicle for John Wayne,
                   whose drawling virility I always resent,
           while Patricia Neal plays his ex-wife, though off-screen
                   her lover Gary Cooper visited the set
                           to try to persuade her to abort their fetus.
                           And after all the khaki, depth charges and dud torpedoes,
    while they’re sitting together in a Honolulu nightclub,

    at the very moment when her dazed come-hither look
           was on my last nerve, somehow the writer-director
                  throws himself a life-ring, on this property
           where the hero keeps returning like a commuter,
                  and borrows a few lines from The Odyssey,
                         so Neal as Lieutenant (j.g.) Mary Stuart
                         reaches way back and, not missing a beat
    while talking about a young pilot, says to Duke,

    He wants to go someplace where they never heard of the Navy.
           His idea is to fly back to Wichita, Kansas
                  and start walking inland carrying a pair of oars
           and stop when he gets to a place where someone says,
                  “What is that anyway you’ve got on your shoulders?”
                         a place where they never season the food with salt,
                         and then he’ll know that his wandering’s done, that’s it.
    He’ll die in his own bed surrounded by family.

    And although the reference goes right over Duke’s head,
           it did feel somehow as if a black hull had surfaced nearby,
                  its teak decks sieving the creamy runnels of salt,
           to watch a broken-backed freighter in its death agony:
                  and you could have knocked me flat for just a moment,
                         that prophecy of peace after so much war
                         definitely the last thing I would have looked for
    amid all the routine lies of postwar Cold War Hollywood.

    Anzeindaz

    For years I lived on the mountain,
    but I never drank from the high stream
    where it flashes over gray scree

    on its way down to the valley
    after percolating through the glacier
    that holds grains of carbon and pollen,

    maize and grasses, smoke, plague and famine,
    spores of fungus on the manure of cattle,
    a thousand years legible in its core.

    I wanted my words to be like that,
    but I was too careful in my body.
    Then one day I found I was dragging

    a sled twice my weight through a plush tunnel
    studded with fool’s gold and inhabited
    by pale eyeless creatures and mouths.

    In one of a hundred gleaming caves,
    I stopped to consider a hunger
    that sickens before it is sated.

    I found I starved down nicely.
    I slept in a fetal curl and earned
    some weeks of my own foul making.

    When I woke, it was with the taste
    of bitter coins in my mouth,
    and the stream was nowhere in sight.

    I was ready to drink anything I could read through.

    The Woodcock

    It was almost vulgar the way that it was just
    so pronounced, how innocent they were.
    —photojournalist Lynsey Addario, 3.15.22

    On the front page of the newspaper this morning,
    there was a photograph of a mother and her children
    killed by a Russian mortar round

    as they tried to flee across a ruined bridge.
    They lay as if a strong wind had blown them down,
    and I thought: They look like dead birds.

    Later, on the ramp behind my office,
    I found a dead woodcock, also known as
    a timberdoodle, bogsucker, night partridge,

    brush snipe, hokumpoke, or becasse,
    and once thought to winter on the moon.
    I studied its cryptic plumage,

    the richest thing I had ever seen,
    an earthworks, a stone tracery, a selvage,
    with tuftings of the softest gray down,

    like dawn coming up on the world
    or the filling torn from an anorak,
    and a row of dark eyes that did not blink,

    rimmed with salt when some infinitely mild
    touch lingered along its side
    before leaving it to the viaticum

    of its wounds and their copious blood
    — since the woodcock is hunted as game,
    “best cooked pink, and just a little bloody.”

    America Giveth

    The abstract principle that human beings are born with dignity is too difficult for a person to formulate on her own. It had to be formulated by many minds in concert over generations and then enshrined in philosophical texts which shape worldviews and governments. Though the idea of human dignity is at least as old as the Bible, it was not set down in a political philosophy until millennia later. Liberalism is the word we use to refer to the systems of belief and the politics which are buttressed by this principle. It instructs that human dignity endures even when it is abused. And as a political philosophy, liberalism constrains the power of government in service to this inalienable human dignity. Liberalism hectors, and long may it do so, that might does not make right. Might can ignore the truth, but it cannot annul it. In a liberal country — and America is the only country founded explicitly on that philosophy — our leaders serve at our behest. Power does not confer dignity: humanity does. We are blessed to be born late, after this tradition had been developed, and after the government formed upon it has grown fat and sophisticated.

    Fat and sophisticated, but alas not always wise. In our country every four years the electorate grants power to a single executive, on the condition that that power be checked and bent to the citizens’ will and rights and good. But it is possible for a liberal system to democratically grant power to an illiberal leader, a leader who has contempt for individual rights and who does not have faith in the liberal system. Insofar as they are only a poll, elections are value-neutral and can reward illiberal leaders and parties without being undemocratic. Such a leader, once elected, can corrupt the power of the presidency by governing as if power itself were constrained by nothing but other power, and as if rights were a myth which the citizenry cannot insist upon. Such a leader, like a medieval king, mistakes rights for privileges, which he believes are his to give and his to take away.

    There have been power-hungry American presidents before, but the power that they were hungry for derived from the mechanisms laid down in the Constitution. American identity has been perverted, misinterpreted, swollen, shrunk, and turned in on itself — but before Trump the many varieties of abuse dealt to the country by its elected leaders were all products of a particular president’s perverted interpretation of what America was supposed to mean. Trump is different. For Trump, American power is in service to nothing but the personal interests of the man at its helm. Even more than he detests those who “poison the blood of our country,” as he once put it, he adores himself. He is a misogynist but misogyny is not his worldview, and a racist but racism is not his worldview. Self-preservation and self-aggrandizement are his operating principles. Power for power’s sake — fascism, in a word. Trump’s prejudices are all handmaidens of his avarice.

    Donald Trump’s stranglehold on our power is premised on wringing out the liberalism which is the lifeblood of this country. He is a bloodhound, and he follows the scent of his own interests with a sub-intellectual zeal. The liberal system was designed to constrain precisely his variety of brute greed, and so he is hellbent on destroying the liberal system — our system, the system of government upon which American identity depends. The system that has made this country the most powerful source and defense of freedom in human history, and the philosophy which informs that system, are the price of his perpetual ascent.

    On January 6, 2021, our current president goaded thousands of violent supporters to attack the U.S. Capitol because, he insisted, the election held the previous year had been stolen from him. He instructed the mobs to fetch for him what was rightfully his. This was a lie which he told in order to protect himself. That day was traumatic for the country because on it a sitting president sowed distrust in the democratic process throughout the population of a great democracy. But for Trump something else happened that day: for the first time, and irrevocably, he associated his interests with the obliteration of our democracy. When he returned to power — through the very system he now permanently associates with his own defenestration — he returned to destroy it.

    It is often pointed out that, while Trump allies himself with various far-right groups (the Christian right, for example), he is not strictly speaking an adherent of any far-right ideology — of any belief system at all. This line of argument is sometimes trotted out to make the point that Trump is not as bad as the company he keeps. This is true: he is worse. His many allies have various discrete concerns, various uses for Trumpian contempt for the liberal system, but Trump’s concern, his aggressive obsession, is with the system itself. And so every beneficiary of American liberalism — and American beneficiaries in particular — is duty-bound to consider Trump and his allies the gravest threat to our wellbeing and our future.

    Powerful leaders of one minority group in particular, the American Jewish community, are already shirking that duty. I write as a member of that group, in the first-person plural, because I believe we have a duty to denounce the cowardice committed in our name.

    On January 20, 2025, the first day of Donald Trump’s second and final term as president of the United States, he issued about one thousand five hundred pardons and commuted the sentences of fourteen of his supporters who had, at his instruction, mounted that violent insurrection against the Capitol four years prior. Among those pardoned were members of the Proud Boys and the Oath Keepers, two violent white-supremacist groups. On that same day Rabbi Ari Berman, the current president of Yeshiva University, gave the benediction at Trump’s inauguration.

    Berman is not the only Jewish leader — in both America and Israel — to be displayed by this administration as proof that there are Jews, even important Jews, who give Trump their blessings, despite his flirtation with white supremacy and even Nazism. But it was a poetically significant moment — the Jew blessing the king on the same day that the king had pardoned and defended a man in a Camp Auschwitz sweatshirt. Berman’s treachery is representative of a strategy that deserves analysis and condemnation. Black, Latino, and women leaders have also been called upon to sell their blessings in service to our Bigot-in-Chief, but this dishonor has peculiar relevance for the Jew.

    Berman’s pandering is a betrayal of his people because it gives support to the undermining of the liberal system by which Jews — for the first time in our history, including before both exiles and after the advent of the Jewish state — were acknowledged to have rights that are axiomatic, not an expression of a monarch’s whim or economic need. Those rights, and the full citizenship that followed on them, were based on nothing other than our humanity. America was the first political power to insist that Jews have dignity because all human beings do. George Washington himself said as much in his sterling letter, written in 1790, to the Hebrew Congregation in Newport, Rhode Island:

    The citizens of the United States of America have a right to applaud themselves for having given to mankind examples of an enlarged and liberal policy — a policy worthy of imitation. All possess alike liberty of conscience and immunities of citizenship.

    It is now no more that toleration is spoken of as if it were the indulgence of one class of people that another enjoyed the exercise of their inherent natural rights, for, happily, the Government of the United States, which gives to bigotry no sanction, to persecution no assistance, requires only that they who live under its protection should demean themselves as good citizens in giving it on all occasions their effectual support.

    For once, the Jews’ sameness brings them security and even secures the legitimacy of their own institutions of power. America is special because, finally, Jews are not, because Jews here do not have to be. Separateness, cohesion, defenses against assimilation — all of these are choices Jews can decide to make or not. But America owes us the same rights as our fellow citizens, and for the same reason.

    The Jewish American is a type of significance to both traditions. No minority which benefits from the largesse of American liberalism has spent more time than the Jew languishing in illiberal alien states. None has exerted more effort in developing mechanisms for securing privileges in societies which grant minorities no rights, or in growing dependent on those privileges before, as inevitably happens, they are taken away and the process of dependence and gratitude and nervousness has to begin again somewhere else.

    In the millennia that Jews have spent in the Diaspora, no country other than America has based its own right to exist on its duty to honor its citizens by making itself blind to those citizens’ personal identities and group memberships. American liberalism was not an act of charity but of self-preservation. It is the only prudent way to guarantee social peace in a heterogeneous society. Equality is not a favor our elected leader does for us — it is his obligation, and the obstruction of our equality is a betrayal of the office with which he has been entrusted. Liberalism was the philosophical mechanism which granted America authority to separate from Great Britain and govern itself. No country is more indebted to and dependent on the liberal system — and more allergic to monarchy, or any kind of absolute and unquestioned authority, which was what liberalism sought to dissolve — than ours. And so Jewish success in America is a testament to the success of the American project, as is true of all minority groups. Insofar as the Jews are distinct from other vulnerable Americans, we are distinct because we have more empirical evidence, more lived experience, that no other system is as reliable and as ethical as the liberal system is. The alleviation of our vulnerability is proof of America’s promise. America absorbed us — however incompletely, but then America does not demand complete absorption — and we are both, America and its Jewish inhabitants, stronger for it.

    The Trumpian flirtation with monarchism titillates his followers. It winks at a bloody dispensation in which monarchs enjoyed absolute power — executive orders galore. Memory is strong, long, and alive in our bones: people respond automatically to Trump’s invocation of monarchism by reverting to pre-modern genuflections. The tech bros trooping in and out of Mar-a-Lago, and then in and out of the White House, to kiss the ring distinguish themselves in this regard, but they are not alone in their atavistic behavior. Jews have also had to appease all-powerful leaders who had no respect for their rights or wellbeing. By the time liberal society had been invented, Jews had spent many centuries developing tools for securing protections. Trumpian illiberalism seems to have awakened in much of the American Jewish population a slumbering but deeply held faith, a kind of buried collective memory, that in an illiberal system a different relationship to power must hold. And so American Jewish leaders such as Rabbi Ari Berman of Yeshiva University morphed like clockwork into their medieval ancestors.

    It is enlightening to situate Jewish hospitality to Trump in the political history of the Jews in the exile. Throughout the exile Jews used to develop formal alliances — and, where possible, informal relations — with the ruler of the country. These alliances with the supreme central power were preferred to the other option for establishing a measure of security for themselves — an alliance with the population among which they lived, or with local secular and ecclesiastical authorities whose antipathy to and fear of Jews was reliably more powerful than whatever benefit Jews could offer them. The sovereign, by contrast, could be tempted into a mutually beneficial relationship in which privileges — as opposed to “rights” — were granted in exchange for certain goods. The Jewish historian Yosef Hayim Yerushalmi called this strategy — by which Jews circumvented local communities and leaders and went straight to the highest authority in the land — “vertical alliances,” and the rejected option of dependence on their neighbors “horizontal alliances.”

    Beginning with the Babylonian exile following the destruction of the first temple in 586 BCE, the Jews in exile replicated the governing structure they had maintained when they held sovereignty in the Holy Land. It was a sophisticated society. Jews were among the first nations in history to hold common property (the land of Israel belonged to the people, not to the king of Israel — a startling fact), to provide social safety nets, and to criminalize exacting interest from fellow Jews. This meticulously structured and tightly run society ensured that Jews were mutually dependent — that they could find political and economic security by depending on each other. The Jewish community in exile functioned, if not exactly like a state within a state, which became one of the classical canards of modern European anti-Semitism, then certainly as a corporate entity, with a greater degree of organizational independence and self-government than would ever be permitted in a modern European nation-state.

    Under Persian, Macedonian, Ptolemaic, and Seleucid rulers, Jews were granted a great deal of autonomy by gentile leaders. The basic structure of the internal Jewish community, and its relationship to its rulers, was essentially fixed from the start of exile through the time of the French Revolution and the advent of the modern era. And throughout that time this independence was secured, often, by paying for it. Jews paid special taxes in exchange for the freedom to govern themselves, and this also spared the monarch responsibility for coercing compliance with the legal and the tax system. This system held the longest, and was the most comprehensive and mutually beneficial, in the exile in Spain. Even after Ferdinand and Isabella forced the Jews into exile in 1492, the Spanish Jewish exile Solomon ibn Verga wrote with pride and nostalgia about the halcyon days of his destroyed community, and of the vertical alliance system throughout Spain and France: “in general, the kings of Spain and France, the nobles, the men of knowledge, and all the distinguished men of the land used to love the Jews, and hatred obtained only among the masses who were jealous of the Jews.”

    During the Middle Ages, the relationships between Jewish communities and their political leaders were formalized in charters negotiated by prominent Jews on behalf of their communities and made directly with the highest authority in the land. The earliest extant examples of the medieval charters are Carolingian, issued by Louis the Pious (778–840 CE), and likely modeled on similar charters issued by Charlemagne himself. All of these charters share the same structure and differ only slightly in details and application (from individuals to communities and ultimately, in the thirteenth century, to the Jews of an entire country — Emperor Frederick II issued a charter for all German Jews in 1236). All have in common the basic commitment to protect the Jewish communities and to honor their right to practice their law, in exchange for a direct feudal relationship sworn on the part of the Jewish representative to the King. Jews were called Juifs du Roi, the king’s Jews, or servi camerae regis, serfs of the royal chamber. Frederick Barbarossa wrote that the Jews “belong to our treasury,” and the Jews appreciated this. It was the highest form of protection.

    Similar dealings were conducted with the Papal authority, whose contempt for Jewish apostasy was tempered by Christianity’s canonical dependence on the Jewish tradition. Popes could be by turns vicious and solicitous about their Jews. Thus Pope Innocent III — who presided over the Fourth Lateran Council, which in 1215 was the first to compel Jews (and Muslims) to wear badges — also issued the Edict in Favor of the Jews, which begins, “Although the Jewish perfidy is in every way worthy of condemnation, nevertheless, because through them the truth of our faith is proved, they are not to be severely oppressed by the faithful” and warns Christians to refrain from, among other indignities, desecrating Jewish cemeteries; killing, robbing, or wounding Jews; forcing Jews to convert; and disrupting Jewish holidays. At the same time the Crusades were viciously violating these papal prohibitions. Similarly, Gregory IX, the first Pope to order that the Talmud be burned, a cultural disaster that occurred in Paris in 1242, also urged Louis IX of France to stop the crusaders and their mobs from massacring Jewish communities. Jews learned to forge alliances with actors who they knew harbored ill-will against them. They had no choice. Their neighbors and local leaders reviled them, too. Jews preferred an imperfectly reliable but powerful ally bound by utility to perfectly unreliable and weak allies sour with jealousy and fear (though there were quiet times and places in medieval Europe in which Jews and Christians lived in coexistence with each other).

    Until the modern period, Jewish separateness was necessary for securing safety. Separateness — contained autonomy — is what the charters protected. Assimilation was both impossible and undesirable. Cohesion and insulation were protective mechanisms. Mixing with gentiles and weakening the bonds that delineated the Jews from gentiles would have confused the ruler from whom security had been so carefully extracted. In the seventeenth and eighteenth centuries, the Jewish bankers and businessmen who handled the finances of and lent money to the nobility, and who represented the Jewish communities in their dealings with the monarchy, became known as “Court Jews.” These Jews enjoyed privileges withheld not only from their Jewish coreligionists, but often also from the gentiles. This political arrangement, the system of the Court Jews, was also premised on traditional Jewish separateness. The promise of equality, remember, was never on the horizon.

    Until the eighteenth century. The French Revolution brusquely thrust France into the modern period, and the French Jews were the first to be fed the strange fruit of emancipation. Liberalism not only guaranteed rights to everyone, it also made demands of everyone. The possibility of full citizenship came at the cost of an unmixed loyalty to France and a significant degree of self-erasure. Insofar as the organized Jewish community was a quasi-political structure, it had to be dissolved, along with the special access enjoyed by the unelected wealthy Jewish leadership. As Mirabeau, one of the leaders of the revolution, put it, “son pays deviendra sa patrie” — his country must become his homeland. No dualities were allowed.

    The old protectionist system of Jewish security vanished with the monarchy. Suddenly the French Jewish community was called to trust a new system which seemed discomfitingly dependent on the goodwill of their political and social leaders as well as the enlightenment of their neighbors — something Jews, like all minorities of liberal countries, recognized was too often honored in the breach. (Yerushalmi described “the royal alliance” as a “myth.”) And so Jews for the first time had to ask ourselves the same question vulnerable minorities of liberal countries have been asking ever since: how big is the breach? How far from the ideal of itself was the reality of the liberal state?

    In France, granting rights to Jewish citizens certainly marked something of a departure from the nativist identity of the state. But anti-Semitism burbled in the cauldron of French prejudice as a reminder that liberalism exists always in tension with the uglier impulses of every population which aspires towards it. The burble boiled over first with the Dreyfus Affair at the end of the nineteenth century, and then with the establishment of the Vichy government in the middle of the twentieth.

    America was always different. For the American Jew, the distance between liberal promise and liberal fulfillment has always been smaller than anywhere else. Even before total enfranchisement, Jewish Americans recognized that when we were deprived of our rights we were deprived of something that belonged to us, something that the country owed us. We could fight the abuse of our rights with the country’s own principles.

    Isaac Leeser was a Prussian Jew who joined his uncle in Richmond, Virginia, in 1824 at the age of seventeen, and became one of the most important leaders of early American Jewry. He published the first English language translation of the Jewish Bible in America (the twenty-four books arranged according to Jewish custom, as opposed to the thirty-nine books which make up the Christian Old Testament), and he was the founder and publisher of the Occident, the first general Jewish newspaper in the United States. In the pages of that paper Leeser published, in 1845, an article about the Maryland Bill, known as the “Jew Bill,” which had been passed in the Maryland legislature nineteen years earlier, and which gave the Jews of that state the right to hold public office. Leeser pointed out that

    the few highly respectable Israelites who then lived in Maryland took a noble stand in defense of the good cause; they made no concession; they did not explain away the features of their faith, which might appear harsh and unpalatable to the Christians; but asked for their rights, and obtained them, as becomes freemen, unconditionally and without any trammels whatever. And at the present day, the Jews in Maryland are free, like all the other citizens. At the time when this act of justice was awarded, there was not even a Synagogue in all the state; hence it must be considered as the abstract triumph of liberality over bigotry and prejudice; and we therefore rejoiced the more at the passage of the bill in question, since it proved that Americans will be just whenever they are properly enlightened . . .

    Leeser was contrasting the Jewish American strength with the self-conscious weakness of British Jewry’s contemporary Reform movement, which, Leeser insisted, was a systematic attempt to alter Jewish practice in order to make the religion more palatable to British gentiles.

    As Leeser makes plain, American citizenship granted the Jews something they had never had before and could never have anywhere else: a rights-based relationship that was not a deviation from the essential character of the country in which they lived, but was a full expression of the state’s promise. In America, the possibility of Jewish success was and is proof of the health of American identity, because it is proof that a weak minority can prosper if its members are treated as full and equal members of a pluralistic democratic society. There was anti-Semitism in America, of course, but it lacked political legitimacy: it was a contradiction of the country’s founding values. In this way, America offered an epochal deviation, as a matter of principle and practice, from what Salo Wittmayer Baron famously called the “lachrymose conception of Jewish history.” And it accomplishes this in a way that not even the Jewish state does.

    The Jewish state was founded as a democracy, but its democratic character has always been in tension with its Jewishness. This tension was inevitable, but the task of mediating between these two elements is a balancing act made nearly impossible by several factors. First, Israel has no constitution and no Bill of Rights — powerful tools with which Americans are immeasurably helped in safeguarding our democratic character. Israel does have a Declaration of Independence but, unlike America, Israel, which is similarly a multiethnic society, did not derive its right to exist from the equality of all its citizens, even though it guaranteed all its citizens the same rights. America grants Jews rights on the basis of equality. In practice, in 2025, Israel grants Jews rights on the basis of an increasingly popular theory of Jewish supremacy, which flies in the face of its Declaration of Independence. It did not have to be this way, but this is the way it is.

    There are tragic reasons for this undemocratic devolution. Unlike America, Israel is straddled and partially populated by sworn enemies who have no state, no sovereignty, and no means of either distinguishing their own authority from Israel in a manner that Israel considers legitimate or assimilating as full members into the Jewish state. The Jewishness of Israel’s majority is a bludgeon against the Palestinian members of Israeli society. In Israel, Jewish citizens enjoy full rights and Palestinian citizens do not. The Palestinian citizens of Israel account for twenty-one percent of the Israeli population, and they should enjoy the full rights which citizens of a democracy are owed; and this is true also, and more desperately, of the Palestinians in East Jerusalem, the West Bank, and Gaza, where the Israeli police, military, and other Israeli officials openly consider Palestinians their enemies.

    Israel, under Netanyahu’s leadership, makes the grotesque case on the global stage that non-Jews are not equal to Jews, and that Jews are entitled to rights in Israel because Israel is a Jewish State. This means by implication that in every other state — including America — Jews are not entitled to full rights. Israeli illiberalism undermines American Jewish equality, and it undermines the liberal project in general. This is more the case today than it ever has been since Israel’s establishment.

    Israel’s example, Israel’s brutality, complements Donald Trump’s, which is why Netanyahu settles so naturally into Trump’s menagerie. No Jew, in America or anywhere else, has dishonored his Jewishness by using it as a fig leaf for fascist and even neo-Nazi sympathies as visibly as Benjamin Netanyahu. Something unprecedented in Jewish history has happened: A Jewish state has determined that it is in its own interests to benefit from and accelerate the erosion of American democracy, thereby jeopardizing America’s Jewish citizens in the name of its own tribal interests. Netanyahu’s illiberalism, Netanyahu’s nativism, gives Jews like Ari Berman and Stephen Miller a calling card for membership in Trump’s thuggish troop.

    It seems likely that the anti-Semitism that this administration tolerates and inflames will not be directly caused by the government, but the government will encourage, downplay, and forgive it. The bizarre prominence of Elon Musk and his raised right arm is evidence enough. Black, Latino, Native, and trans people were the first to bear the brunt of Trump’s earliest executive orders, which were only the beginning. But Jews will pay a price for Trump’s fascism, too.

    And even if we don’t, the policies Trump is already enforcing should be interpreted as a direct attack on the legitimacy of our membership in this country, since not too long ago it was our ancestors who were clamoring at the gates to be let in. It was our grandparents — rather than Mexican or Arab migrants — who were slandered as burdens, job-stealers, spies, and thieves. Between 1820 and 1880, the American Jewish population ballooned from three thousand to three hundred thousand. In the half-century that followed, more than two and a half million Jews joined the slim community already here. This increase was despite the passage of the Johnson–Reed Act in 1924, which was intended to “protect” the country’s “racial stock” by staunching the immigration of “undesirable” immigrants — Jews among them. The United States had no refugee policy, and the quotas set on Jews by the Johnson–Reed Act were not adjusted at all between 1933 and 1941.

    But perhaps even more disturbing than the quota system is the fact that immigration officials perceived that they would be rewarded for letting fewer people in, and so they routinely failed to fill the quotas. The Johnson–Reed Act had capped the number of Germans permitted to enter the United States at 25,957, but in 1933 the State Department issued only 1,241 visas to German citizens, while 82,787 German citizens languished on the waiting list for visas that they could not afford. Through 1943, “Hebrew” was a racial category in American immigration law. Between 1939 and 1940, over half of all immigrants to America identified themselves as Jewish, and that number is likely an undercount, since many of the people fleeing the Nazis did not consider themselves Jewish even if the Nuremberg Laws did.

    Two weeks after Kristallnacht, on November 24–25, 1938, a Gallup poll asked American citizens, “Should we allow a large number of Jewish exiles from Germany to come to the United States to live?” Seventy-two percent of the respondents said no. The following January, while Congress was considering the passage of the Wagner–Rogers Bill, which would allow special entry for twenty thousand refugee children, Gallup sent out another poll. This time the question read: “It has been proposed that the government permit ten thousand refugee children to be brought into this country and taken into American homes. Do you approve of this plan?” And sixty-seven percent said no. Over the next few years, as war spread across Europe, Americans began to suspect that Germany and the Soviet Union were secreting spies into America among the hordes of Jewish immigrants. Perhaps they thought the children were spies, too. This heartlessness was a failure of the American people to live up to the American promise. A hundred years ago, it was our ancestors who suffered for the American inability to rise to its own ideals.

    One Jew’s contribution to the Trump administration represents, more than any other, a betrayal of American Jewish history. Consider the case of Stephen Miller, the man responsible for shaping Trump’s draconian immigration policies. Miller’s radicalism began when he was in high school, a century after his mother’s family arrived in America. Wolf Lieb Glosser and his wife Bessie were the first of Miller’s ancestors to arrive in this country. The couple had fled vicious pogroms in Antopol, Belarus, and set sail on the German ship S.S. Moltke, which docked in New York on January 7, 1903, twenty-one years before the Johnson–Reed Act instituted its quotas. Glosser spoke Yiddish, Russian, and Polish but not a word of English. As Miller’s uncle, David Glosser, wrote in Politico in 2018,

    I have watched with dismay and increasing horror as my nephew, an educated man who is well aware of his heritage, has become the architect of immigration policies that repudiate the very foundations of our family’s life in this country. I shudder at the thought of what would have become of the Glossers had the same policies Stephen so coolly espouses — the travel ban, the radical decreases in refugees, the separation of children from their parents, and even talk of limiting citizenship for legal immigrants — been in effect when Wolf Lieb made his desperate bid for freedom. The Glossers came to the U.S. just a few years before the fear and prejudice of the “America First” nativists of the day closed U.S. borders to Jewish refugees. Had Wolf Lieb waited, his family likely would have been murdered by the Nazis along with all but seven of the 2,000 Jews who remained in Antopol. I would encourage Stephen to ask himself if the chanting, torch-bearing Nazis of Charlottesville, whose support his boss seems to court so cavalierly, do not envision a similar fate for him.

    American Jews should take it personally when our president describes immigrants as criminals and poisons. It should scare and enrage us that the first bill which Trump signed into law in his second term was the Laken Riley Act, which will result in the deportation of migrants who are merely accused of a crime, and that at the ceremony for that occasion Trump remarked that “today’s signing is bringing us one step closer to eradicating the scourge of migrant crime in our communities once and for all.” Make no mistake, he was raising the specter of our grandparents, no matter the color of the people who will be turned away from our borders this time.

    The ancestors of ours who made it here were not merely lucky. Luck was part of it but not all of it: America granted us what is by right our due as human beings. And as American citizens we have a right and a duty to insist upon that same dispensation for us and for others, which is why American Jews have also contributed significantly to the establishment of institutions in this country which work within the law to protect the vulnerable. It is true, we Jews have a tradition which teaches us to pacify brutes like Trump and to create and fetishize thugs like Netanyahu, but we have other traditions, too. Like every great nation we contain contradictions. We are not fated to emulate our ugliest examples. In Exodus, God told us, “You shall not wrong or oppress a stranger, for you were strangers in the land of Egypt.” It is not moral and it is not wise to align ourselves with the merciless bigots who drew the blood that stains our history.

    The first chapter of the Book of Job ends: “And Job said, ‘Naked did I leave the belly of my mother, and naked shall I return there. God gives and God takes away; may the name of God be blessed.’” Job’s faith is freighted with lyrical resignation. He makes no demands. God’s power is a force which acts over and through him. We have no right to repeat his concessions, not here, not in our country. America is ruled by no God and Donald Trump is no one’s Lord, despite all his invocations of Jesus. America is an artifact, man-made, a product of human will and wisdom. It is not for us to bow and accept the caprice of its leaders. They work for us. America giveth because we designed her to give. When her goods are revoked out of turn it is because we have been poor stewards of our own inheritance. We must fight — not least in the name of our ancestors, the huddled masses yearning for the breath of freedom which is the only air that our spoiled lungs exchange — we must fight for what belongs to us.

    Impotent Musings

    For Mario Vargas Llosa, a prince of our liberalism.

    With one foot on the platform

    and the other foot on the train

    For many decades I have participated like a good soldier in the war of ideas, by which I mean the application of philosophical notions to public affairs for the purpose of persuading readers for or against certain political and cultural outcomes. It is a private-public activity: you cannot be a “public intellectual” unless you are also a private intellectual. Otherwise you are merely a polysyllabic sloganeer waiting for CNN to call, a college education looking for a buyer. Without philosophy, politics is just a contest for power, and without politics, philosophy is just a pastime for professors. Who in their right mind would abandon power to interests devoid of ideas? The search for justice inexorably leads back to concepts. Don’t be cruel! But why should I not be cruel? Because it is wrong! But why is it wrong? Because it hurts people! But do you really believe in a hurtless world? Well, how would you like it if they hurt you? And just like that we are in the severe and magical kingdom of philosophy, because we are, even those of us who will not be reminded by that last question of the Groundwork of the Metaphysics of Morals, even the most obtuse among us, self-interpreting beings. Sooner or later everybody wants reasons.

    One evening a group of Syrian friends, some of them refugees, all of them activists, asked if I could meet them for a conversation in the lobby of a local hotel. It was the Obama years, and we all despised him for his sanctimonious refusal to lift a finger against the savagery in Syria, but my friends did not want to discuss foreign policy. They wanted to talk about God — or more precisely, God and the chemical attack in Ghouta. A young man spoke first: “I was in Ghouta and I held dead babies and I resolved that I could never believe in Allah again.” Then a young woman in a lavender hijab remarked: “I was also in Ghouta and I also held dead babies and I never needed Allah more.” They asked me if I could help them make sense of the contradiction. The gravity of the request was not lost on me. I felt honored but mainly I felt humbled, since nothing like Ghouta had ever happened to me, though something was done to my mother in Nazi-occupied Poland that may have qualified me slightly to assist them in confronting their perplexity. (I did not mention it.) For almost three hours I discoursed on the multiplicity of God-concepts and the variety of theories of historical causality and the diversity of spiritual temperaments, and they responded with probing questions and more recollections of the horrors. The hotel might have exploded from the intensity. At one o’clock in the morning we embraced and said goodnight. They expressed their gratitude, but the gratitude was mine. I was drenched in sweat — the sweat of a non-recreational and non-journalistic exploration of ultimate meanings. As I walked out into the deserted street I was surprised by the wonderful thought that the entirety of my education, every single Penguin paperback I had ever read, was all for this night, so that I might be in some way helpful to these people, and leave them with some improvement in their understanding of the philosophical torment to which their brutal experience had sentenced them. The utility of the humanities!

    The intellectual agitations that I made in all those decades, and those that I assisted, sufficed for a defensible, even a justified life, as a thinking individual and a dissatisfied citizen. I am one of the lucky ones who has had not a career but a calling. “A life of significant contention,” as Diana Trilling famously described it. The significance to which she was referring was owed to the confidence that the disputations were not trivial, that the stakes for society were considerable. No doubt this led to a certain exaggeration of their — our — own importance (which for a while developed a small sub-genre of its own, the memoir of a visit to the White House); but the vanity is a small price to pay for the possibility that we really are clarifying our society and our culture to themselves. I recall many moments at my various desks when striking the keys felt like a form of national service. An open society seemed designed for precisely what I was doing, for how I was enthusiastically earning my living. It began to seem plausible that I might leave behind what a friend of mine called a scar on the map.

    It was in my time of service, which is not yet over, that the war of ideas spread from the “little magazines” and the medium-sized ones to the editorial pages of newspapers, when Aristotle and Mill were suddenly staining your fingers with fresh ink. One momentous morning William Safire decided that a column of seven hundred words should be named “Essay.” George Will helped to inaugurate what became a culture-wide addiction to quotation — a vernacular version of the old institution of prooftexts and the reverence for authorities that in the religious traditions frequently substitute for reflection. Now the editorial pages are replete with little middlebrow citation-ridden sermons on exceedingly profound themes: who knew that the form of the op-ed piece is adequate for the adjudication of the question of the existence of God, or the definition of a good life? Or that a column should be a regular report on the columnist’s ostentatiously serious reading? (As I write, David French has discovered Carl Schmitt.) The shallowness is deep. The war of ideas has become the board-game of ideas, which has achieved its apotheosis in the festival of ideas, in which analysis and erudition are reduced to entertainment for the affluent and training for their dinner parties. Intellectual life should not be this affable.

    And then, of course, the internet blew it all up. I speak not as a disgruntled gatekeeper, though I believe fervently in gatekeeping as an indispensable cultural position for which one must be qualified by more than merely landing the job. One should hardly be churlish about the digital democratization of intellectual activity: isn’t this the dream of an open society? And so the tumbrils are filling with elites. (Never mind that there never before existed gatekeepers with the power of Zuckerberg and Bezos and Brin and Musk and Altman.) Unfortunately, the more intellectual activity has increased in volume, the more it has decreased in intellectuality. If only the speed and the scale had left the discourse alone! Instead the new bottles molested the old wine. On the internet Nietzsche and Moldbug are equals. Together they float in a contextless entropy as they compete for the attention of untold numbers of passive and gullible consumers who are eager to feel transgressive and to take them at their word. One of the strangest characteristics of mental life on the internet is that as it becomes more hysterical it feels less urgent. (I know, I know: there are praiseworthy islands of serious thought online, but I insist that they are noteworthy precisely for being the sites least affected by their marvelous medium.)

    This has created the vexing problem of how to achieve intellectual impact on the internet. How fleeting can something be and still create something lasting? How evanescent and still leave a mark? Naturally the preferred measurement of influence became quantitative, because the technology generates nothing so abundantly as data, which is to say, numbers, and because this is how businesspeople think. Influence has been re-interpreted as virality. (In the old days we used to inflate our influence to deceive advertisers by including in our circulation reports the large quantities of issues that we would dump at airport terminals and the like.) I will tell a story. One day at my old magazine I ran into its owner — a young man pathetically ill-equipped for the stewardship of the great American institution with which his money had allowed him to play — in the corridors of our office, and he seemed glum. He said that he was actually on his way to see me because he wondered if we had made a terrible mistake. The mistake was that I had devoted too many of my pages — my colleagues had also devoted the cover of the issue — to a lengthy (and, in the view of its editor, unimpeachable) critique of Obama’s foreign policy by Robert Kagan. “The numbers aren’t great,” lamented the boss, another metrics lad. His timing was impeccable: I told him that I had just gotten off the phone with Kagan, who had called to tell me that Obama had just invited him to lunch to discuss the essay. “Does the president of the United States count as one click?” I asked. He disliked the question. Quants do not know what to do with qualts. All I wanted him to understand is that influence — which is, after all, the objective of serious journals of thought — works in many and diffuse ways, even when there are iPhones in our pockets. If we had disturbed the Oval Office, surely it was worth every penny. (Kagan’s analysis has been dismally vindicated in the intervening years.)

    I imagined us in the late eighteenth century having just published “Common Sense” as a special issue of the magazine, and the billionaire in stockings and sticker shock mumbling, “You know, the Paine piece isn’t doing well.” Never mind that it was doing good. Like almost all the owners of “legacy” publications, he had mistaken a sterling act of citizenship and patronage for an investment. (It came as no surprise when he destroyed the institution almost completely.) “This is not a charity!” another of the owners of the magazine, who has in recent years established himself as one of America’s most renowned bullies, once barked at me. I barked back at his disrespect, and at his odd assumption that the writers and the editors were somehow responsible for what he and the business staff failed to achieve. But frankly I can think of many worse charities. Making one’s society less stupid is a noble cause. If I had the money, I would pay for it.

    But who do we toil for now? We must have intellectuals, I mean genuine ones, learned ones, honest ones, inconvenient ones, because we are a polity that was founded on ideas, and because the search for human purposes is not an elite pursuit, and because we are too deeply wounded to carry on thoughtlessly. The work of intellectuals, however underappreciated (and underpaid) by society, can have a strong cumulative effect, which is why I construe it as an enterprise in climate change, the sedulous creation of a better moral and cultural weather, without inflection points or tipping points, without media sensations, just the diligent and careful refinement and promulgation of what one believes is right and true, as clearly and as persistently as one can do it, in the hope, and even the expectation, that it will eventually enlighten an individual and a society. Whatever the intrinsic satisfactions of argument, it is essentially extrinsic. Contrary to the pervasive sentimentality, ideas are not at all the antithesis of power. Without power no idea can be tested, no idea can succeed or fail. When it fails, the blame, and in some instances the shame, will properly be laid at its author’s doorstep. If what matters to you most is that your hands always be clean, you should write poems.

    But who do we write for now? If, as the economists teach, value is to be found in scarcity, then our work has never been more valuable; and yet I am uneasy. My unease about the condition of intellectual life, and about its prospects, is of a piece with what I feel every day about everything public: a sickening powerlessness. Every morning I wake up nauseous and brace myself for the nausea that the afternoon will bring. My deepest convictions about the sanctity of struggle, about the unsinkable force of hope, about the incontrovertible reality of truth, about the resilience of the human heart (and of the miracle that Madison wrought) — all this now sounds too often like happy talk, like Jon Meacham–style uplift. The transports of eloquence are not cutting it. Forgive me, but we are rotting. Solidarity has collapsed and I do not see how we will get it back. The ignorant and the malevolent are winning, here and abroad, and all I can do about it is vote and watch and grieve. I am not the sort who attacks the Capitol.

    What I am experiencing, in this regular sensation of impotence, is not the melancholy of the loser. Sometimes you lose. I like losers; they know more. And there is dignity in losing if you are prepared to continue the fight. A political defeat is not an intellectual defeat, not at all. I will tell another story. In 2017, not long after Trump began his first term, I attended one of those ludicrous high-level international conferences that bring out the Robespierre even in me. (Even at this stratospheric event there were VIP tables! The thirst for hierarchy is never quenched. I was reminded of what an Indian friend once told me about his country’s greatest contribution to the vocabulary of status anxiety: the concept of the VVIP.) Trump had just retaliated militarily against Bashar al-Assad’s chemical attack on the village of Khan Sheikhoun: it was, therefore, for me at least, an exhilarating evening, but at dinner I angered H. R. McMaster, the National Security Advisor, by asking whether this was the blessed beginning of a new policy or a tweet in fifty-nine cruise missiles. I had been invited to the gathering to present my thoughts on populism, a subject that was much in demand after Trump’s victory but hilariously ironic in that setting. Tell us, Mr. Wieseltier, who are all those real people whom we hear about? After I finished my talk, in which I discussed the populist myth of the people and the populist myth of the leader, an elegant French woman, a renowned international civil servant, raised her hand with a question. “Now that Trump has won,” she asked, “how should we revise our views?” I was a bit shocked by the question and its unembarrassed expedience about beliefs. I explained to her that a defeat at the polls did not require any revision of what we believe, which in this case was trans-Atlantic liberalism, though it certainly demanded a withering look at our empirical assessment of the electorate. Trump’s victory did not prove that our views are wrong. It proved that our views are unpopular. The popularity of a view has no bearing upon its truth. And, intellectually speaking, success proves nothing. Who would want to win as what they are not?

    Who would want to win as what they are not? Quite a few people, apparently. Authoritarian governments would be nothing without accommodationists. The toadies are the grease.

    I am a man rich in loyalties. I love two countries. I love them for the good that they have done and for the values that they have proclaimed. I have known all my life that they are far from perfect, and that they have committed crimes and abuses, but a state that has not committed crimes and abuses has never existed, and the political systems that my countries established for themselves made them corrigible. I have also known that, official declarations and founding documents notwithstanding, no society is morally and philosophically uniform, so that both my countries have always been characterized by bitter debates about their first principles — debates that were possible because of the freedom of speech to which they were committed, and often took the form of political parties and movements. Yet one must love not blindly, but with eyes wide open, so that one is not afraid of the full truth about the object of one’s love and can therefore participate in ameliorations and reforms. I have always admired dissent, not because one must always “speak truth to power” — sometimes power has a point — or because it requires valor — in my two countries it usually requires no valor at all — but because it is evidence of the rarest human attribute in the world, independence of mind. Neither of my countries trembles at the prospect of exigent citizens, even if the success of such citizens in holding their country to a high standard has often been thwarted by the baser regions of its society and its culture.

    Even where courage is not necessary, cowardice flourishes.

    Is it treasonous to love a country conditionally? We know from personal life that it is possible to love a flawed or culpable person. (What other kind of person is there?) Unconditional love, which is the signature of parental love, has been broadened to encompass many bonds in which we would like to avoid the discomfort of judgement. But judgement is one of the obligations of membership and citizenship, so the question of the limits of love in the event of evil recurs. What would my country have to do to forfeit my love? There is honor in not rushing to the exits, in staying to fight; and there is little substance to a love that is renounced at the first sign of disappointment and even disgrace. Such a devotion was never reliable in the first place. But neither is absolute love, because it will go along with anything, which is not how one properly serves a country or a friend.

    I will do everything I can to never take back my love. Such a pledge is an obligation of my Americanism and my Zionism. It is also selfish; a life without these memberships and their treasures would be desolate. And so, I am embarrassed to admit, I have sometimes averted my gaze. There is nothing shameful about the impulse to defend what one loves, but there are times, such as now, when we should emphatically prefer the full unedifying truth.

    Lincoln held that even the perpetration of human bondage, even the mass genocidal murders that attended it over the centuries, did not justify the destruction of his country. The proposition that slavery did not make the country worthless is morally jarring. But he adamantly did not support the erasure of his own country, even though he was a Biblical man who studied the ancient accounts of national destruction as a divine punishment for sin. This saving complexity of purpose was his retort to John Brown, the absolutist and the terrorist. (And our contemporary.) I do not recall that in the post-Auschwitz period anybody called for the erasure of Germany. Decent societies are not built by absolutists. And indecent societies always contain decent people.

    But now my loyalties have been distressed and darkened. Both my countries, the United States and Israel, have chosen to confront the crises that beset them by means of cruelty. They are pursuing solutions to their problems that inflict misery on the already miserable, that proudly elevate heartlessness into state policy, that make a despicable farce of the humanist values about which their leaders used to boast. They despise the weak and they scorn the unfamiliar. These countries should have been havens from the poisons of ethnonationalism, which anyway flies in the face of their multiethnic realities. They should have pointed away from the retrograde direction of the world. Instead they have joined the malevolent herd in the post-liberal stampede. They are complicit in the creation of the lowest and most debased era in a hundred years.

    The second Trump administration could not be more candid about its appetite for destruction, and about its indifference to the human costs of its policies. Its controlling values are retribution and profit. Consider a few examples. While there are precincts of American life in which DEI has become a protection racket and an exercise in thought-policing, it is impossible to avoid the impression that the government’s frenzied campaign against it is animated mainly by racism. They are against DEI because they are against diversity, equity, and inclusion — the values themselves, regardless of the manner of their implementation. And while there is no doubt that the American immigration system is a shambles with bipartisan origins, and that the deportation of criminals who are in the country illegally is not an abuse of power but a valid exercise in law-enforcement, the virulence with which the government speaks about all immigrants leaves no doubt that what really lies behind its panic about the border is a rank nativism and a fear of the ethnically mottled society that we are. We have for too long flattered ourselves with the legend about our hospitality to immigrants — that uplift again; the truth is that we have struggled against our own callousness toward the refugee and the stranger since our origins. Immigration should be legal, of course, and a country should control its own borders, but if the government really intends to fulfill the spirit and the substance of its immigration program, we are about to witness a nightmare of federally authorized brutality. The bastards are even talking about reviving family separation.

    And while it is undeniable that the size of the federal government is unimaginably vast, the administration’s approach in its own bizarre experiment in “downsizing government” seems to relish the human damage, at home and abroad, that its bureaucratic “reform” can accomplish. Yet the MAGA mandarins despise government at their peril. The despised technocrats serve society. One of the reasons that the American government is so huge is that American life is so complex: specialized and arcane knowledge is a condition of effective and humane government in an era when fiscal policy and health care policy and climate policy and regulatory policy and military policy are byzantine in their details. And anti-elitism does not qualify as the knowledge of anything. Quite the opposite. It is usually a prejudice against knowledge. (One of the worst episodes in the history of American nativism was called Know-Nothingism.) The age of Trump will be known for the sudden respectability of the animus against higher education, as if a democracy can function without a knowledgeable citizenry, as if it is an insult against populism and “the people” to want them to study the world and strain their minds. (Among the most comic figures of our unfunny time is the Ivy League–educated skeptic about higher education. It’s Yale or nothing, I guess.)

    The Trump administration will immiserate the people who elected him and bow and scrape before the people who paid for his election. The main product of this government purportedly consecrated to American security will be the relentless expansion of American insecurity. It will preside over a new age of social Darwinism and uncontrolled oligarchy. The most obscene sight in America now is the smile on Elon Musk’s face. In sum, we are entering a new age of American pain.

    Israel, too, has become renowned for its pitilessness. This is Benjamin Netanyahu’s contribution to Israel’s reputation, and to the education of young Jews everywhere. I understand that it has its strategic uses (“the restoration of deterrence,” though never for long). But the scale of what was done to Gaza cannot be explained or scanted in strategic terms, as the acceptable price of a national security policy. There are many ways of establishing national security, of diminishing — because eliminating it by military means is simply impossible — the danger in which Israelis live. But first a few heresies must be defended. The Hamas attack of October 7 was one of the worst atrocities of our time. It was motivated by a bloodthirsty anti-Semitic hatred and by a fanatical interpretation of the Islamic faith. Israel, which will one day wrestle with the astonishing failures of its political and military establishments on that hideous day, was perfectly justified in retaliating, and ferociously, for the slaughter. No self-respecting state would have done otherwise.

    Moreover, the moral and strategic outcome of the Hezbollah-Israeli war in the north has been decidedly positive. Innocent civilians were killed, but why are Israel’s wars the only wars that are measured against the fantastical standard of no non-combatant deaths? The destruction of Hezbollah’s leadership and of most of Hezbollah’s arsenal was an incontrovertible victory over an evil enemy, and also a historic opportunity for Lebanon to regain its sovereignty. It represented also a dramatic weakening of Iran, the mastermind and the operational enabler of all this jihadi violence, which had been already humbled by the haplessness of its missile attacks against the Zionist Satan; and this in turn resulted in the spectacular overthrow of Bashar al-Assad, the highly accomplished butcher of his own people who somehow failed to provoke the ire of Hamasniks and progressives and undergraduates in the West. (These same justice warriors are not to be found on streets or in encampments to protest the murder of a hundred and fifty thousand Sudanese civilians and the use of chemical weapons against them.)

    Yet forty-seven thousand Palestinians were killed in Gaza. Forty-seven thousand! Even if a third of them were Hamas soldiers, that amounts to a wanton slaughter of innocents on a scale that must torment every Jewish conscience. I should immediately note, since the contrary notion is being bruited about, that this requisite outrage has nothing to do with Judaism. Israel is still a secular state and the bombings of Gaza were not carried out by the Netanyahu regime for religious purposes. (The same cannot be said of the anti-Palestinian violence perpetrated in the West Bank by the settlers and Itamar Ben-Gvir’s malicious vigilantes, in which almost a thousand Palestinians — not Hamas, not Islamic Jihad, just men and women and children living on their land — have so far perished. During this period twenty-five Israeli residents of the West Bank were killed by Palestinians.) There are many sources in the Jewish tradition that warn against bigotry and cruelty, but there are many more that endorse bigotry and cruelty, as is the case with every religion that has ever tasted power or embraced exclusiveness, especially in its beginnings. Classical Judaism contains no concept of equality, and along with its Scriptural insistence upon human dignity it is riddled with contempt for the other, as the religious Israeli right will be happy to show you. My advice to Jewish progressives is not to play the prooftexts game, because it is not a game that they will win. And how is their appeal to midrashim any different from Smotrich’s and Ben-Gvir’s appeal to midrashim? I advise them also to search their hearts to verify that the politics that they claim to draw from their religion is not really the religion that they draw from their politics. Whether or not Israel is guilty of war crimes, and I think it is, the Torah is the Torah.

    The attacks of October 7 traumatized every Israeli. Anybody who does not recognize this psychological convulsion holds an imprecise and indecent analysis of what subsequently happened. This collective trauma had, to put it mildly, a basis in reality. It is a commonplace in the literature on trauma that it is experienced as a sudden descent into helplessness and vulnerability — in Freud’s words, as “a breach in an otherwise efficacious barrier.” In the anguished weeks after the attacks, I became possessed by the question of whether post-traumatic rationality is possible. No giant leap of empathy was required for me to grasp the temptation of revenge, but I feared the consequences of a military campaign undertaken in a spirit of angry abandon. It would result in a war fought with moral sloppiness. (Aren’t all wars fought with moral sloppiness?) October 7 inaugurated the most perilous period for Israel in fifty years, and I worried that the catastrophe that had made it essential for Israelis to keep a level head was the same catastrophe that would make a level head impossible.

    To this worry I added my concern, now decades old, about the suppression of the Palestinian question in Israeli life — the cowardly refusal of liberal politicians, most recently Benny Gantz and Yair Lapid, to discuss the matter, so that the subject has become a centrist taboo. (It is not a taboo on the right!) At one of the anti-government demonstrations on behalf of the hostages, a representative of the families’ lobby remarked that it “appealed to Israelis’ hearts instead of their reason, which would dictate that we should occupy Gaza and expel its residents.” Perhaps that is post-traumatic rationality. It is certainly consistent with the growing respectability in Netanyahu’s Israel of the foul dream of population transfer. Will the expelled now expel? Such an eventuality is almost too odious to imagine — but it may take the intervention of the Sunni states, who are on the brink of a grand bargain that will bring the Arab-Israeli conflict to an end, to save Israel from its most inglorious impulses. Not long after October 7, I visited the Israeli ambassador, an acquaintance from shul, and after we discussed various strategic and diplomatic matters I asked him about the Palestinians, and he responded with a dismissive wave of the hand, as if to say that they are not worth talking about. That was all. This dogmatic silence about the Palestinians, which the Palestinians have certainly done their share to inflame, is a prudential blunder and an ethical delinquency.

    For all the differences between the demons of contemporary America and the demons of contemporary Israel, there is another failing that they share: the absence of a vital opposition. A democracy cannot work without the resistances of an opposition. The system is designed for regular challenges of principle and policy, and in the absence of such friction it can degenerate into a kind of de facto one-party state. The point is not that the opposition has to win, though political power is finally the only effective way to reverse a deplorable course; the point is that the opposition must meaningfully exist. The collapse of the Democratic Party in the United States, its damaged sense of reality, its inarticulable identity, its self-immolation in sectarian commotions — all this had the grotesque effect of making Trump increasingly attractive. Kamala Harris’ giddiness now looks vaguely psychotic. Unless there is an about-face in the midterm elections, the unbound authoritarian will proceed unchecked with his chaotic cruelty. Never in our history did a man so deserving of impediments face so few.

    In America, however, the wreckage of the Democratic Party is at least clear about its adversarial stance, even if it cannot settle on a shared sense of its grounds and its methods. There is a healthy disgust at what is happening. And there are sound statistical refutations of despair: in a country of 334.9 million people, 156,202,318 people voted in the last election, in which Trump won 77,284,118 votes, or 49.8 percent, and Harris won 74,999,166 votes, or 48.3 percent. Those are not apocalyptic numbers. Indeed, of all the crises that may afflict a democracy, we may be experiencing the most intractable one of all: a more or less evenly divided society. The same state of affairs is also hobbling my other country. The most recent election in Israel was, in the popular vote, a tie: the Netanyahu bloc received 2,361,739 votes, or 49.57 percent, and the Gantz–Lapid bloc received 2,331,788 votes, or 48.94 percent. It was only his enormous and unscrupulous skill at forming coalitions that returned Netanyahu to power. Israeli society is bitterly divided against itself. It turns out that a house divided can stand, at least long enough to do lasting harm.

    In the beginning of Netanyahu’s new term, it became apparent that he is no longer a traditional conservative who believes in law and equality and the hallowed institutions of democratic governance — a political worldview that he imbibed not only from Israeli conservatism and the example of Menachem Begin, but also from his many years in the United States. But Netanyahu is not, alas, the reincarnation of Begin, who, when he recognized the costs and the deceptions of his war in Lebanon, had the decency to have a nervous breakdown. No, Netanyahu is a repulsive innovation in Israeli politics. After hanging out with Trump and Putin and Modi and Bolsonaro and Orban, he decided that their way — the populist trickery, the, um, unitary executive, the politics of contempt, the cult of the personality — was the better way. He set out, immediately and brazenly, to destroy the Supreme Court, which was the most formidable obstacle to the expansion of his power and the implementation of his policies, especially his Palestinian policies. The months and months of mass demonstrations against his “judicial reform” saw the finest hours of Israeli democracy — the streets were a heaven of peaceful political participation, and the lawyerly coup was defeated. (Still, the anti-Netanyahu protests also illustrated the limitations of Israeli liberal discourse: I noticed at the demonstrations in Jerusalem that placards with the word “Palestinian” displayed on them were shunted to the side.)

    The problem with Russian ethnonationalism and Hungarian ethnonationalism and Indian ethnonationalism is not that it is Russian or Hungarian or Indian, but that it is ethnonationalism. An ethnonationalism is not virtuous because it is ours.

    Then came the disaster of October 7, over which Netanyahu, the self-styled invincible shield of the Jewish people, presided, and then came the war, and suddenly the streets were silent. There were understandable reasons for the disappearance of dissent: the country was collectively plunged into mourning, and everybody’s sons and daughters were in uniform in Gaza. As it became clear — this was another of the many shocks that he has administered to his society — that Netanyahu felt no urgency about the Israeli hostages in Gaza, even if the ransoming of captives had always been the most immediate priority for Jewish communities throughout Jewish history — when this became clear, the families of the prisoners lost their wartime inhibition about criticizing the government and the streets began to fill again. But the hostages were the only socially permissible subject for protest. Which is to say, there was no significant Israeli opposition to the conduct of the war, not even when its drastic nature became plentifully clear. The solidarity, again, was natural, but even when the numbers of the Palestinian dead in Gaza attained terrifying proportions, and the IDF’s tactics obviously made massacres inevitable, all the targeting technologies and ethics manuals notwithstanding, the leaders of the opposition had nothing to say. I do not know of a single political speech that would have outraged the consensus, even if it would have secured a place in the moral history of the Jewish state. (There were a few military resignations, but they had to do with the professional failures of October 7.)

    Is the Israeli public in agreement and at peace about the human costs of the Gaza war? Is it generally believed in Israel that the lives of Palestinians and the rights of Palestinians are worthless? (The other day I came across this in a Talmudic passage: “What do you think, that your blood is redder than his? Maybe his blood is redder.” But never mind the Talmud!) Does the scruple about means and ends vanish when it is we — I am not an Israeli but I am a Jew — who are threatened? All those decades of indifference and apathy and complacence and illusion about the seven million people with whom the Israelis live have done their work. Yet even Israelis who cling to their own callousness about the Palestinians and are weary of the moralism of the left (if they can find the left) must eventually acknowledge that reconciliation with the Palestinians, some kind of political and diplomatic settlement, is a condition of Israeli security. The Zionist idea was that the Jews be not only free but also safe.

    And the American Jews: how long will we applaud our brothers and sisters in their short-sightedness? I would have thought that the telling of truth, however unpleasant, is also an expression of solidarity. Surely the American Jewish community can rouse itself from its all-too-comfortable posture of endless mourning to attend to the actual needs of Israeli security, which include more than the very latest iteration of AI. Kaddish is not a strategy. But the American Jewish community prefers commemoration to candor. Does the magnitude of the destruction in Gaza really not trouble the sleep of the majority of American Jews? I think I know the answer. And so I wish to reassure my brethren that humaneness is not a surrender to Hamas. After all the fortifying sermons about the justness of Israel’s cause, a rattling sermon about moral culpability in a just cause would seem incumbent upon an ethically wakeful community. Disturb only a single Shabbat, but disturb it. We need to know more than our own righteousness. We have many obligations to the Jewish state, and one of them is to keep our consciences intact.

    It is one of the characteristics of an atrocious time that Yeats’ poem about the best and the worst is ubiquitous, but in this instance we really are slouching toward Bethlehem.

    After the depredations of Trump and the depredations of Netanyahu, I live with the bitter feeling that as an American and as a Jew I will never be young again. (At my age, it is probably just as well.)

    A few weeks ago a dear friend, an Iranian dissident, a happy warrior, sent me a wish: “May we always smile in the pursuit of justice!” This put me in mind of a conversation long ago with a brilliant friend who had just written a devastating essay about Judith Butler (God’s work!) for my pages. She concluded her long and meticulous analysis with a remark about “the joy of justice.” I called her to say that the phrase seemed not quite right to me. The achievement of justice, I suggested, cannot be attended only by joy, because justice is always preceded by injustice, it never comes first and it always comes too late. The joy of justice is a fantasy of light without shadow. I have never seen such an unalloyed sight. A just outcome deserves to be celebrated, but when you dance with justice you dance with ghosts. I do not mean to recommend morbidity, which is sorrow as kitsch, sorrow become mechanical, sorrow performed for others. But I recall that conversation now because of the almost stupefying sobriety into which these harsh days have cast me.

    In Nicholas Jenkins’ magnificent study of Auden in the 1920s and 1930s, I learned that decades later the poet wrote: “Ah! Those twenties before I was twenty / When the news never gave one the glooms.” Now the news gives nothing else. Impotently one lives in the glooms.

    Sorrow does not cease when suffering ceases: what it knows can never be unlearned. 

    The problem is that a grieving spirit is not a fighting spirit. The line between wise acceptance and unwise resignation is difficult to draw. The exhaustion of the opposition in both my countries frightens me, because it suggests a wide diminution of liberal energy. How can the heirs of Milton and Jefferson and Mendelssohn and Mill and Constant and Lincoln and Douglass and Mazzini and Herzen and Weber and Roosevelt and Brandeis and Niebuhr and Camus and Popper and King and Berlin and Havel and Sakharov be so wan, so lifeless, so unexcited by their own patrimony? The path out of our predicament must include a vigorous refreshment of liberalism. What was once the most encouraging transformation in the history of political belief, the one with the likeliest chance of advancing justice, has become a cliché, a banality, without sufficient strength to respond effectively to the slanders against it upon which a new order is being built. For too long liberals have received trouble without making trouble. The old pieties are true, but they need to be made strong, and lustrous, and perhaps less pious.

    Consider the question of patience in politics. Patience is often lauded as the quintessential liberal virtue, which has effectively protected liberals from the ruinous fictions of revolution for centuries. We have been taught to restrain ourselves and to contain ourselves, with the aim of preventing rash mistakes and hasty excesses, so that respect may be established in political society and deliberation may become a meaningful political activity. But sometimes patience has the lamentable effect of turning a player into an umpire, and umpires have no sides. It is easy to confuse the patient man with the man who has risen above the fray. (This was the case with Obama, except when he was running. Then the arc of history had to bend fast.) Sometimes patience signifies that your threshold for political and philosophical pain is too high.

    Don’t just defend the rights of others. Use your own.

    Impatience may be dangerous, but human suffering must not be regarded under the aspect of eternity, at least if one is to view it as a problem in need of a solution — that is, politically. Nothing could be more anti-political than the cosmic view of human suffering, which is a justification for quietism.

    John Lewis, then twenty-three years old, was chosen to be one of the speakers at the March on Washington in the summer of 1963. When he completed a draft of his speech, he showed it to Bayard Rustin, a man who was uncannily able to combine the most uncompromising idealism with the most uncompromising practicality. After Rustin read Lewis’s text, he phoned his hotel room. “We have a problem,” he told the young activist. What follows is taken from David Greenberg’s superb biography of Lewis.

    “A problem? What problem?” Lewis asked.

    “It’s your speech. Some people are very concerned about some of the things you’re going to say in your speech. You need to get down here. We need to talk.”

    Lewis went to Rustin’s room. Several others were there. They explained to Lewis that an associate of Archbishop Patrick O’Boyle, who was to deliver the invocation the next day, had shown the clergyman the speech. O’Boyle was refusing to participate if Lewis delivered the speech as written.

    As Rustin and Lewis spoke, Rustin said he thought O’Boyle might be appeased with one fix. A line in Lewis’ speech spurned counsels of “patience,” calling it “a dirty and nasty word.”

    “This is offensive to the Catholic Church,” Rustin said.

    “Why?” Lewis asked.

    “Payyy . . . tience.” Rustin dramatically articulated the word. “Catholics believe in the word ‘patience’.” It was a theological tenet, cited throughout the Bible and by thinkers like St. Augustine.

    Lewis relented.

    But not completely. In his address from the Lincoln Memorial, Lewis declared, to enormous applause: “To those who have said, ‘Be patient and wait,’ we have long said that we cannot be patient. We do not want our freedom gradually, we want it now.” And he concluded: “For we cannot stop, and we will not and cannot be patient.”

    I wonder if the archbishop recognized that before a quarter of a million people Lewis had repudiated a quietist conception of time in favor of an activist conception of time. The impatience of the liberal! This conception was in keeping with Dr. King’s thinking. A year later he published Why We Can’t Wait, the most complete statement of his philosophy of political action. He began the book by explaining the historical origin of the alleged long-sufferingness of American blacks. The civil rights revolution, he wrote, “is not indicative of a sudden loss of patience within the Negro. The Negro had never really been patient in the pure sense of the word. The posture of silent waiting was forced upon him psychologically because he was shackled physically.” Later, in the “Letter from a Birmingham Jail,” which was included in the volume, King declared that “for years now I have heard the word ‘Wait!’ It rings in the ear of every Negro with piercing familiarity. This ‘Wait’ has almost always meant ‘Never.’ We must come to see, with one of our distinguished jurists, that ‘justice too long delayed is justice denied’.” (It was actually William Gladstone who coined the adage, but it now belongs to King.) This was followed by one of the most eloquent pages that King ever wrote, a rending inventory of concrete acts of oppression perpetrated against individual blacks, with its grim peroration: “There comes a time when the cup of endurance runs over.”

    Liberal cups sometimes fill too slowly. All the sage lessons about moderation notwithstanding, dare one suggest that this is a time when liberals should grow a little impatient with patience?

    Patience should not make us fools, or pawns. The enemies of liberalism have been spoiled by our high-minded forbearance, by the liberalism of old men. And the illiberals, the party of Trump and the party of Netanyahu, have also made good use of patience. (The overturning of Roe v. Wade was a triumph of conservative patience.) “How poor are they that have not patience? What wound did ever heal but by degrees?” Those fine meliorist words were spoken by . . . Iago! He was praising the utility of “dilatory time” for plots and conspiracies.

    A suspicion of impatience can disguise an allergy to alacrity. What are Netanyahu’s adversaries waiting for? Or can the snail’s pace of political outrage be explained by the deeply demoralizing fact that 72 percent of Israelis immediately supported the American plan to expel two million Palestinians from Gaza?

    Alienation can drive one toward the fight or away from the fight. It is not only an emotion, it is also a decision.

    Patience is a friend of impotence.

    And yet there is one thing for which, alas, we must always be patient. It is courage.

    The Job Poet and the Order of Things

    The writer responsible for Job is the greatest of all biblical poets and one of the most remarkable poets who flourished in any language in the ancient Mediterranean world. He is a technical virtuoso, deftly marshaling sound and rhythm for expressive effects, at times deploying brilliant word-play — as when he writes, “My days are swifter than a weaver’s shuttle, / they snap off without any hope,” the word for “hope,” tiqwah, punning on a homonym that means “thread” — and utilizing a vocabulary that is the most extensive of any biblical poet, with borrowings from Aramaic, the enlisting of rare words, and even the introduction of words that seem to be his own invention. His range of metaphors is inventive and often dazzling, drawing on cheese-making, weaving, horticulture, and much more. Had there been bicycles in ancient Israel, I suspect we would find a bicycle simile somewhere in his poem. He exhibits an interest in nature quite untypical of biblical poets. And no other poet of his time and place possessed his ability to link together different passages with recurrent terms and images, even over long stretches of text.

    We know nothing about this anonymous heterodox genius except that he probably lived in the fifth century B.C.E., and even that has been disputed. In the fluidity of forms that characterized the Late Biblical Period, it would certainly have been possible for him to frame his argument as prose, but poetry was an inevitable choice for him. The power of poetic expression gave him the means to articulate the full measure of Job’s anguish and of his outrage at having been severely mistreated by God, as well as conveying the dizzying span of God’s vision of the created world in the Voice from the Whirlwind at the end. And one should also say that he surely knew he had a mastery of the poetic medium and relished its deployment in the great work he produced.

    The outlook of the Job poet is a radical dissent from the mainstream biblical consensus, and in this regard, too, poetry was a powerful vehicle for him to express his dissent. In what follows here, I will be examining two rather long passages, the first a complete poem, in order to show how the resources of poetry enabled him to say what he wanted to say.

    The Job poet strategically frames his poetic argument by beginning with a harrowing death-wish poem that communicates 
Job’s acute sense that his existence has become so unbearable — all his children dead, his property in flocks destroyed, 
his body afflicted with an excruciating burning rash — that he wishes he had never been born. Here is the poem that takes up all of chapter three. (This and all the excerpts that follow are my translation.)

    Annul the day that I was born
    And the night that said, “A man is conceived.”
    That day, let it be darkness.
    Let God above not seek it out,
    nor brightness shine upon it.
    Let a cloud-mass rest upon it,
    Let day-gloom dismay it.
    That night, let murk overtake it.
    Let it not join in the days of the year,
    let it not enter the number of months.
    Oh, let that night be barren,
    let it have no song of joy.
    Let the day-cursers hex it,
    those ready to rouse Leviathan.
    Let its twilight stars go dark.
    Let it hope for day in vain,
    and let it not see the eyelids of dawn.
    For it did not shut the belly’s doors
    to hide wretchedness from my eyes.
    Why did I not die from the womb,
    from the belly come out, breathe my last?
    Why did knees welcome me,
    and why breasts, that I should suck?
    For now I would lie and be still,
    and would sleep and know repose
    with kings and the councilors of earth,
    who build ruins for themselves,
    or with princes, possessors of gold,
    who fill their houses with silver.
    Or like a buried stillborn I’d be,
    like babes who never saw light.
    There the wicked cease their troubling,
    and there the weary repose.
    All together the prisoners are tranquil,
    they hear not the taskmaster’s voice.
    The small and the great are there,
    and the slave is free of his master.
    Why give light to the wretched
    and life to the deeply embittered,
    who wait for death in vain,
    dig for it more than treasure,
    who rejoice at the tomb,
    are glad when they find the grave?
    To a man whose way is hidden,
    and God has hedged him about.
    For before my bread my moaning comes,
    and my roar pours out like water.
    For I feared a thing — it befell me,
    what I dreaded came upon me.
    I was not quiet, I was not still,
    I had no repose, and trouble came.

    The first word of the poem, yo’vad, literally means “perish,” but unfortunately “perish the day” is no longer a viable English equivalent because the locution in our era has slid into prissiness (“perish the thought”). The effort of a couple of modern translators to give the expression punch in English (“damn the day”) inserts an inappropriate tone or implication because there is nothing about damning, in either the invective or theological sense, in the Hebrew. The transitive verb “annul” has the justification that the poem is all about expunging the day from the calendar. The two versets of this line exhibit an altogether original use of the dynamic of intensification from the first verset to the second. Job wishes not merely never to have been born but, moving back nine months, never to have been conceived. Thus the conventional poetic word-pair, “day” and then “night,” is given a startling new force.

    The poet then picks up “night” from the second half of this line and launches on a rich orchestration of synonyms for darkness. After the primary term “darkness,” he enlists “cloud-mass,” “day-gloom,” “murk.” (“Day-gloom,” kimrirei yom, seems to be his coinage, probably derived from an Aramaic root that indicates darkness, with the expression here possibly referring to a solar eclipse.) The poet’s tapping of the Hebrew lexicon for synonyms is evident throughout the book. (There are five different terms for “lion” in biblical Hebrew, and at one point he uses all five in two consecutive lines.) With elegant appropriateness, Job in this poem wants the night of his conception to have been “barren,” of course the opposite of conception. 

    In the process of intensification as the poem continues, he then moves up to a mythological register: “Let the day-cursers hex it, / those ready to rouse Leviathan.” A minor emendation to the Masoretic text yields “Yamm,” the primordial sea-god who is also Leviathan, instead of yom, “day.” (At this point, the King James Version commits one of its most lamentable errors, rendering the Hebrew for Leviathan, lewayatan, as “their mourning,” which is grammatically wrong and imagines that the noun is lewayah, a term for “funeral” in rabbinic Hebrew that is not biblical.) The second line in this verse makes the wish for darkness cosmic, still another aspect of intensification — no stars, a night without dawn, without morning stars. The concluding metaphor of this line demonstrates how utterly original the Job poet is in his deployment of figurative language: “let it not see the eyelids of dawn.”

    This is a daring, and beautiful, metaphor: the first crack of light on the eastern horizon is likened to the opening eyelids of the sleeper looking out to the east. Modern translators, deluded in thinking that readers can no longer understand metaphors, substitute for the metaphor what may be its referent, as in the Jewish Publication Society’s “the glimmerings of the dawn.” The Job poet knew that this was a remarkable metaphor, for he did not hesitate to use it again. This occurs much later, in a radically different context, in representing the fierce appearance of the daunting Leviathan, thus locating beauty at the heart of terror: “His sneezes shoot out light, / and his eyes are like the eyelids of the dawn” (41:10).

    The initial movement of the death-wish poem heads toward a conclusion by Job’s saying of the day he was born that it “did not shut the belly’s doors / to hide wretchedness from my eyes.” The prominent noun in the last phrase is a strong instance of poetic efficiency. We might expect here “life” or “the light,” but for this sufferer life itself has become nothing but wretchedness. In the next few lines, following the characteristic movement of biblical poetry from the general to the specific or concrete, we get an evocation of the physicality of birth: womb, belly, knees (presumably parted in birthing), and breasts giving suck.

    Job’s wish never to have been born joins with a panorama of human life, and it is a bleak panorama. Kings “build ruins for themselves,” imposing structures that inevitably crumble to dust (one thinks of Shelley’s “Ozymandias”), and princes store up gold and silver, futilely, for they will part from it in death. Verse 18 shows the poet’s firm sense of integrated structure, of which we will see a more spectacular instance in the Voice from the Whirlwind, for “babes who never saw light” takes us back to the early lines expressing the wish to be a stillborn and blot out the light. Everyone, in this despairing vision, finds repose only in death, the great equalizer. And what has life been for humankind? The wicked have troubled others, all are weary, there are prisoners and slaves and taskmasters. Existence is so universally miserable that everyone longs for death. In this way, Job invites us to see his wretchedness not as a special case but merely as a particular instance of the fate of misery shared by all. The large resonance of Job’s inveighing against God as he proceeds in his poetic argument derives from his seeing unwarranted suffering not as his alone but as the common plight of humankind.

    A single word toward the end of this poem exemplifies how this writer creates connective links in the overall structure of his work. In the frame-story, the Adversary says to God, “Have you not hedged him about and his household and all that he has all around?” (1:10). The verb here obviously has the sense of “protected.” But the same word in Job’s mouth here, “and God has hedged him about,” means the opposite: Job is complaining that God has blocked him on all sides and left him no way out of his terrible plight. This sharp play on two opposed meanings of the same word might suggest that the Job poet did not transcribe verbatim the old folktale that had come down to him, probably orally, but felt free, at least at one point and perhaps at others, to modify its wording. Alternatively, the folktale may have included “hedge,” which the poet then played against. That same verb will occur twice more in the Voice from the Whirlwind, with still a third meaning, but I will postpone commenting on its use there until we have a long look at the God speech that concludes the poetic body of the book.

    The final lines of the death-wish poem aptly round off its argument. The third line before the end exhibits the expected biblical intensification from verset to verset: “For before my bread my moaning comes, / and my roar pours out like water,” from moaning to roaring. Job goes on to assert that he had lived in a state of anxiety, perhaps intimated in his offering sacrifices for his children because he was afraid that they had committed some offense. Finally, having longed for the quiet of the grave, he admits the realization of those fears. The last word of the Hebrew text is rogez, etymologically a state of disturbance or unrest, as this same verbal stem is used elsewhere to describe the shaking of earthquakes.

    The devastating extremity of Job’s wish for non-being raises a general issue. It serves the obvious purpose of introducing all that he will say by making clear how utterly unbearable his suffering is. He does not yet indict God, though that will follow, for he never ceases to believe in an all-powerful God, which means that God must be responsible for the chain of disasters inflicted upon him. But what good can poetry do in the face of intolerable and unwarranted suffering? The Job poet is clearly not alone in confronting this dilemma. In English literature, perhaps the most memorable instance is Lear driven out on the heath in the fierce storm, stripped of his possessions, cruelly rejected by two of his three daughters: “I am bound / upon a wheel of fire, that mine own tears / do scald like molten lead.” The Job poet would have appreciated this figurative language, especially the way the wheel of fire is extended in the simile of the scalding tears like molten lead.

    In our own time, the signal instance of poetry conveying unbearable suffering may well be Paul Celan’s celebrated Todesfuge, or “Deathfugue.” It is a formally deft and beautifully crafted poem, but it embodies “a terrible beauty,” in Yeats’ phrase. The first two words, repeated as a kind of refrain, are a shock: “black milk.” The violent transformation of the nurturing substance of life into blackness will then work in tandem with another refrain-phrase, “Death is a master from Deutschland.” Celan exploits the resources of poetry to express the unspeakable outrage of six million of his people murdered in the industrial death-machine of Deutschland, the cherished homeland figured in the poem by the fair Margarete. What, in confronting such appalling realities, can poetry possibly do? The reflexive response to outrage against our moral instincts is, I suppose, a scream. Poetry of this order of greatness, from Job to Celan, transforms the scream into articulated, eloquent expression. I do not believe that it is really cathartic, but it communicates a feeling that suffering has been endowed with a sharp focus in language, the primary human medium — that outrage has been given a voice, a voice that gets across the full awfulness of what has occurred and therefore is, strangely, at once horrifying and satisfying. We somehow cling to our humanity in the face of horror.

    As the great poem of Job unfolds, it emerges that the work incorporates three different orders of poetry. Most prominent, until a new poetic voice appears at the end, is Job’s poetry. The poetry of the three comforters in the debate is by and large inferior to his. It is inferior because they are working with the complacent clichés of traditional wisdom, and so the poetry that they speak in their argument invokes many shopworn formulas. From time to time, to be sure, there are brief passages of strong poetry, because this writer was such a fine poet that he could scarcely refrain from intermittently giving the comforters a few good lines. Finally, when we come to the conclusion of the poetic body of the work, the poet takes a certain risk, one he carries off splendidly.

    God is now, at last, speaking, and because He is God, He must be given poetry that transcends the stunning poetry of anguish spoken by Job. It is a challenge that the Job poet undertakes because he must have been confident in his own mastery. If I may propose a seemingly irreverent analogy, let me cite Molly Bloom’s soliloquy that is the concluding episode of James Joyce’s Ulysses. She has been a looming presence in the novel, especially in her husband’s thoughts, as we are taken through a rich variety of remarkable prose-poetry, some of it stream of consciousness and some of it in other forms. Then, at the end, we enter the living current of her unspoken words as she lies in bed, mulling over the day, her husband, her loves, her life, and this soliloquy proves to be prose-poetry arguably even greater than all that has come before. Like the Voice from the Whirlwind, it is poetic language that embodies a grand epiphany, the resonant affirmation that the work as a whole is meant to pronounce.

    God’s poem begins with a challenge to Job: “Who is this who darkens counsel / in words without knowledge?” 
The initial phrase demonstrates how aptly this poet chooses his words. “Darken counsel” is not an idiom that appears elsewhere in the Bible, and so it evidently has been coined 
for the present purpose. Job’s first poem, as we saw, begins with a whole sequence of images that express a longing to blot out the light — daylight, the rising sun, the stars, all things to be engulfed by darkness. In response, God signals in the very phrasing that this is profoundly misguided. After a line in which He tells Job to gird his loins like a man, 
God continues:

    Where were you when I founded earth?
    Tell, if you know understanding.
    In what were its sockets sunk,
    or who laid its cornerstone,
    when the morning stars sang together,
    and all the sons of God shouted for joy?
    Who hedged the sea with double doors,
    when it gushed forth from the womb,
    when I made clouds its clothing,
    and thick mists its swaddling bands?
    I made breakers upon it My limit,
    and set a bolt with double doors.
    And I said, “Thus far come, no farther,
    here halt the surge of your waves.”
    Have you ever commanded the morning,
    appointed dawn to its place,
    to seize the earth’s corners,
    that the wicked be shaken from it?
    It turns like sealing clay,
    takes color like a garment,
    and their light is withdrawn from the wicked,
    and the upraised arm is broken.
    Have you ever come into the springs of the sea,
    in the bottommost deep walked about?
    Have the gates of death been laid bare to you,
    and the gates of death’s shadow have you seen?
    Did you take in the breadth of the earth?
    Tell, if you know it all.
    Where is the way that light dwells,
    and darkness, where is its place,
    that you might take it to its home
    and understand the paths to its house?
    Have you come into the storehouse of snow,
    the storehouse of hail have you seen,
    which I keep for a time of strife,
    for a day of battle and war?
    By what way does the west wind fan out,
    the east wind whip over the earth?
    Who split a channel for the torrent,
    and a way for the thunderstorm,
    to rain on a land without man,
    wilderness bare of humankind,
    to sate the desolate dunes,
    and make the grass sprout there?
    Does the rain have a father,
    or who begot the drops of dew?
    From whose belly did the ice come forth,
    to the frost of the heavens who gave birth?
    Water congeals like stone,
    and the face of the deep locks hard.

    The arresting image of the morning stars singing together in joy singles out these points of light in the night sky, heralding a dawn that Job had wished would never come, and there are more references to light further on. This is an imagining of creation not hinted at in Genesis, though we do not know whether it is the poet’s original invention or reflects a tradition about creation that did not make it into the text of Genesis on which he drew. In the next line, we get the sea “hedged” with double doors. That verb, we recall, occurred in two different senses in the frame story and in the death-wish poem. Now, the idea that it conveys in this moment of cosmogony is a blocking of the surging waters of the sea from flooding the land. That notion is inherited from Canaanite poetry, where it is accompanied by the mythological motif of the conquest and imprisonment of a monstrous sea-god, variously called Yamm, Leviathan, Rahab, Tanin (the last also refers to lesser sea-beasts), effected by the land-god (or weather-god) Baal. This old story was so familiar in the culture that Job could invoke it without explanation to signify God’s holding him captive under relentless surveillance: “Am I Yamm, or the Sea-Beast [tanin], / that You should put a watch upon me?” (7:12).

    The explicitly mythological figure is excluded from the representation here. Instead, we witness the waters gushing forth from the “womb” of the primordial sea, an image for it invented by the Job poet. This strategically chosen metaphor thus represents creation as birth, the very thing Job wanted to cancel in his death-wish. Birth is confirmed in the following line: “when I made clouds its clothing, / and thick mists its swaddling bands.” This is a metaphorical coinage of the same level of originality as Shakespeare’s scalding tears of molten lead. In the ancient world, infants were wrapped snugly in swaddling bands, strips of white linen. These are like what one might see looking out to the west at bands of mist over the water. The strikingly visual image is wholly unusual even as it continues the figuration of creation as birth.

    Verses 17–20 further develop the rejoinder to Job’s initial bleak poem: “Have the gates of death been laid bare to you, / and the gates of death’s shadow have you seen?” Job had fervently wished for death but knew nothing of its looming reality. God alone is the master of death, His all-seeing eyes taking in the full measure of its dark realm. The next two verses underscore the antithesis to chapter three: “Where is the way that light dwells, / and darkness, where is its place, / that you might take it to its home / and understand the paths to its house?” Light appears here in poetic parallelism with darkness, as it does elsewhere in the Bible, but that appearance is diametrically opposed to Job’s desire for light to be swallowed up by darkness. Instead, there is a diurnal rhythm of alternation between light and darkness — and just possibly, by implication, between hope and despair. Light and darkness are part of the harmonious ongoing process of the created world, something the devastated Job has chosen to turn away from.

    What follows in the poem is a manifestation of the powerful, at times even violent, energy that pulses through creation, a theme that will continue through this poem all the way down to Behemoth and Leviathan at the end. This may explain the relevance, at this point, of the mythological reference to God’s setting aside the harsh elements as weapons “for a time of strife, for a day of battle and war.” The face of nature itself is limned with violent action — “Who split a channel for the torrent, / and a way for the thunderstorm.”

    At this juncture, the poet introduces a crucial, and radical, idea: God brings “rain on a land without man, / wilderness bare of humankind, // to sate the desolate dunes / and make the grass sprout there.” The version of cosmogony in Genesis is emphatically anthropocentric. Man is the culmination of creation, enjoined to rule over all things, everything set out for his benefit. Here, by contrast, God causes the rain to fall “on a land without man,” a rainfall that will “sate the desolate dunes.” (The poet’s mastery of sound as well as metaphor is evident in these words: “desolate dunes” represents the Hebrew sho’ah umesho’ah, that alliteration merely approximated in my English phrase.) This dissenting notion that the natural world extends far beyond man and is perhaps indifferent to him surely resonated with Melville in Moby-Dick, perhaps as much as Job’s Leviathan, which the novelist chose to construe as a great whale.

    Finally, in the next lines, the poem returns to the birth imagery prominent at the beginning:

    Does the rain have a father,
    or who begot the drops of dew?
    From whose belly did the ice come forth,
    to the frost of the heavens who gave birth?

    The second line here demonstrates both the originality and the boldness of this poet in his handling of metaphor. Having chosen to figure the origins of ice and dew as a birth, he represents that birth in a way that is almost shocking when he invites us to contemplate chunks of ice emerging from the womb. Poetry can be a means of reorienting or radically shifting perception, especially in conjoining through figurative language totally disparate and surprising ideas or realms. This is clearly what God wants to do with Job — to shake him up, to compel him to see the world in ways that would never have occurred to him.

    After leading Job’s vision to the sky in the next few lines, the Voice from the Whirlwind moves on at the end of the chapter to the animal kingdom, with particular attention to beasts of prey — the lion and the raven. (One should remember that chapter divisions in the Bible were a medieval editorial intervention, and the lines to which I am referring were meant to initiate the tour of zoology that will continue to the end of the poem.) This zoological section, running to the end of chapter 39, then picking up after a few lines of address by God to Job with the climactic Behemoth and Leviathan in chapters 
40 and 41, is too long for a reading here, but we can look at two passages. The first is in 39:1–12:

    Do you know the mountain goats’ birth time,
    do you mark the calving of the gazelles?
    Do you number the months till they come to term
    and know their birthing time?
    They crouch, burst forth with their babes,
    their young they push out to the world.
    Their offspring batten, grow big in the wild,
    they go out and do not return.
    Who set the wild ass free,
    and the onager’s reins who loosed,
    whose home I made in the steppes,
    his dwelling-place flats of salt?
    He scoffs at the bustling city,
    the driver’s shouts he does not hear.
    He roams mountains for his forage,
    and every green thing he seeks.
    Will the wild ox want to serve you,
    pass the night at your feeding trough?
    Bind the wild ox with cord for the furrow,
    will he harrow the valleys behind you?
    Can you rely on him with his great power
    and leave your labor to him?
    Can you trust him to bring back seed,
    gather grain on your threshing floor?

    We are now returned to the theme of birth announced in the figurative language at the beginning of the poem. Birth is universal among animate creatures, and it goes on, far from human observation or human ken, in the mountains and the forests, beyond the grasp of the man who wished for his own birth never to have occurred. The animals of the wild “burst forth” with their little ones. Birth, too, is imagined as a violent process. This is a unique application of this verb to birthing — the general meaning of the verbal stem is “to split apart” — an indication that the poet is imagining procreation in a different way. The lines that follow take up a theme that will become more salient in the representation of Behemoth and Leviathan. The wild ass and the onager, out on the salt-flats and the steppes, live remote from any human control, scoffing at the crowded habitations of men and women, free from the whips and the commands of the driver. 

    As one sees elsewhere in biblical poetry, intensification within the single line is projected forward through a sequence of lines. In the four lines devoted to the wild ox, the poem’s audience is challenged with the question of whether they can ever domesticate the wild ox and subject him to servitude, hitch him to the plow, train him (fantastically) to bring back seed or gather grain from the threshing floor. This theme of the resistance of the beast to human mastery will be elevated to a new level of intensity in Behemoth and Leviathan. All this is a strong expression of the rejection of anthropocentrism: contrary to the assurances of Genesis 1, humankind will never be able to rule over the animal kingdom; there are forms of life simply too powerful for man.

    A few lines down, the poem goes on:

    Do you give might to the horse,
    do you clothe his neck with a mane?
    Do you make him roar like locusts —
    his splendid snort is terror.
    He churns up the valley exulting,
    in power goes out to the clash of arms.
    He scoffs at fear and is undaunted,
    turns not back before the sword.
    Over him rattles the quiver,
    the blade, the javelin, and the spear.
    With clamor and clatter he swallows the ground,
    and ignores the trumpet’s sound.
    At the trumpet he says, “Aha,”
    and from afar he scents the fray,
    the thunder of captains, the shouts.
    Did the hawk soar by your wisdom,
    spread his wings to fly away south?
    By your word does the eagle mount,
    and set his nest on high?
    On the crag he dwells and beds down,
    on the crest of the crag his stronghold.
    From there he seeks out food,
    from afar his eyes look down.
    His chicks lap up blood,
    where the slain are, there he is.

    The passage opens with the poet’s famous description of the war horse (39:19–25), before moving on to the hawk and the eagle.

    Some readers may wonder what it is doing here. One plausible explanation, though it does not immediately justify the description’s inclusion in the Voice from the Whirlwind, is that the poet put it here as a demonstration of literary skill: he was drawn to do it and knew that he could evoke this bellicose equine presence with remarkable vividness. He gets the sound of weaponry around the horse just right, enriching the depiction with an expressive alliteration — “clamor and clatter” in my version emulates the Hebrew ra‘ash werogez, with the accent on the first syllable of each alliterated noun. Sound plays an energizing role in the effect of the passage — the rattle of the quiver and the weapons, the clamor of the pounding hoofbeats, the blast of the martial trumpet. With all this, the fierce battle charger prepares the way for the two daunting beasts yet to come, Behemoth and Leviathan. Like them, he is at once glorious and frightening: “his splendid snort is terror.” Also like them, he embodies power and fearsome beauty that are beyond humanity, that do not relate to humankind: “Do you give might to the horse, / do you clothe his neck with a mane?” The war horse, in contrast to the wild ass and the onager, is surely saddled with a rider holding reins, at the command of the mounted warrior. The main point is that this fearless creature galloping into the midst of battle is imagined — in this one respect, unrealistically — as though he were virtually autonomous, charging into the fray out of the sheer love of armed combat.

    The poet needs this divergence from verisimilitude in order to set the stage for those two creatures, Behemoth and Leviathan, who are impervious to any human action or resistance. They will make their climactic appearance, beginning in verse 15 of the next chapter, after an intervention addressed by God to Job (40:1–14). In brief words God challenges Job to answer all that He has said, to which Job responds with a profession of his own worthlessness. God then resumes His speech, beginning as before with “Gird up your loins as a man.” But before that, in the last six verses of chapter 39, we move from the battlefield to the sky in the depiction of the hawk and the eagle. It is an appropriate place for the naturalistic phase of the zoological parade to end because it is a realm no human being can ever reach. Even the nests of these creatures of the sky are unreachable, placed in the crags of high mountains.

    But what is it that the poet focuses on in the life-cycle of the eagle? That the eagle nurtures his young, an instinct that impels all creatures. The nurturing of the fledglings, however, necessitates killing: “His chicks lap up blood, / where the slain are, there he is.” I have been contending that this writer, virtually unique in biblical poetry, is a poet keenly interested in nature, but that interest is resolutely unsentimental. There is no anthropomorphizing in his vision of the natural world, no pathetic fallacy, no gentle rhapsodizing over the beauties of creation. He understands that nature is red in tooth and claw — the eagle’s chicks “lap up blood.” That is the harsh order of things. He lucidly sees that violence, even lethal violence, is an intrinsic element of the life-cycle in the animal kingdom. This does not really answer Job’s complaint about unjust suffering, but it does suggest that the world around us does not conform to our comforting assumptions about good and evil and that we have to live with a reality that resists our conventional moral calculus.

    I will not consider Behemoth and Leviathan directly, because our scrutiny of the fierce creatures from the lion to the war horse to the eagle has anticipated much of what needs to be said about them. The difference between these two and the preceding creatures in the catalog of animals is that they straddle the border between zoology and mythology, thus culminating the poetic process of intensification. Presumably, they are based, respectively, on the hippopotamus and the crocodile, creatures of the Nile conveniently removed from the direct observation of the poet and his audience, mainly reported to them through travelers’ yarns. There are realistic touches in the depiction of both: Behemoth in the shallows of the river, shaded by lotus and willow, “hedged” — again that strategic verb — by the lotus, and Leviathan with his crocodile’s plated armor, “His back is rows of shields, / locked close with the tightest shield,” and his fearsome teeth, “All around his teeth is terror.” But such naturalistic depiction seamlessly slips into the supernatural. “Could one take him with one’s eyes,” it is said of Behemoth, “with barbs pierce his nose?” (In fact the Egyptians did hunt hippopotami.) And before long, the naturalistic crocodile morphs into a dragon, his mouth shooting firebrands, his nostrils emitting smoke. He is impregnable to all man’s weapons (a note Melville would pick up in associating Leviathan with the Great White Whale): “When he rears up, the gods are frightened, / when he crashes down they cringe.” At this point, Leviathan has merged with the ferocious Canaanite sea-god from whom he takes his name.

    In the logic of the poem, the poet needs these mythologized beasts for his conclusion because they crown his argument that there are things in nature beyond human ken and absolutely beyond any hope of human domination. The Psalmist, in a splendid celebration of man’s supreme place in a cosmic hierarchy, wrote: “You make him rule over the work of Your hands. / All things You set under his feet.” (Psalm 8:7) Tellingly, Job the sufferer quotes another line from this same psalm — “What is man that You should note him?” — but bitterly reverses its meaning to say, what is miserable man that You constantly scrutinize him to persecute him? Here, in the climax of God’s speech to Job, the idea that man exerts dominion over all things is powerfully opposed.

    The writer responsible for this extraordinary book was not only a very bold poet, among other things coining imagery, as we have seen, that would not have occurred to any other poet in ancient Israel, but also a very bold thinker, not hesitating to challenge some of the essential ideas long cherished in the Hebrew tradition. The boldness of the poetry is a necessary vehicle for the boldness of the thought. Poetry of the first order of originality is a way of enabling us to see the world with fresh eyes. It is worth going back to a notion promulgated by the Russian Formalists a century ago, that what literature in general does is to shake us out of what had become complacent, unseeing perception through what they called “defamiliarization,” thereby bringing us back to the realities we had ceased to experience, making us feel anew the stoniness of the stone. 

    The poetry of the Voice from the Whirlwind does this on a philosophical level, serving, I would say, as the Bible’s ultimate defamiliarizer. As countless readers have complained, it does not really provide an answer for the dilemma of unwarranted suffering under a supposedly just God. But that dilemma has no real answer. There is no way of explaining why an innocent child should die of cancer or a benevolent woman perish in a fire with all her family. What the poetry does manage to do is carry us away in its sweep, in the brilliance of its riveting and sometimes startling imagery, occasion us to see the world freshly, prod us to let go of our habitual notions of man as the master of nature and the measure of all things, and realize that contradiction and anomaly and even violence are at the heart of reality — in sum, to accept the limitations of human imagination. We need to take in the power of the poetry in order to have a full sense of the originality of the thought.

    The Wages of Cultural Secularization

    I take my title from the critic and literary scholar Simon During, who coined the phrase “cultural secularization” as a way of understanding the sharp decline in prestige — since the beginning of the twenty-first century and especially in the last decade — of the “high humanities.” The concept will strike many as evasively abstract, and certainly it is as open to skepticism and revision as its predecessor and model, the social-scientific and philosophical account of religious secularization that extends from Nietzsche to Weber to Charles Taylor. But the core of the religious secularization narrative rests on a basically unimpeachable empirical claim — that we have “moved from a condition in 1500 in which it was hard not to believe in God” to a modernity in which unbelief “has become quite easy for many,” as Taylor puts it. During’s parallel claim — “Faith has been lost across two different zones: first, religion; then, high culture . . . The humanities have become merely a (rather eccentric) option for a small fraction of the population” — cannot yet command the same ready assent. But in our universities, where tenure tracks in the humanities are swiftly disappearing, where majors and enrollments in fields such as English and art history are plummeting, some such notion as “cultural secularization” seems necessary — even, once you get past a first recoil at its conceptual hubris, obvious.

    Cultural secularization, During writes, is a “second secularization,” meaning both that it came about after religious secularization and that, to a degree, it is a variant of it. That is because the ascent of the high humanities was understood, and to an extent engineered, by thinkers for whom, as During writes, “culture was consecrated in religion’s place.” The most important expression of this compensatory substitution in the nineteenth century is by Matthew Arnold, for whom “poetry” might preserve what is true in religion from the depredations of scientific positivism: “Our religion has materialized itself in the fact, in the supposed fact; it has attached its emotion to the fact, and now the fact is failing it.” But “poetry attaches its emotion to the idea; the idea is the fact. The strongest part of our religion today is unconscious poetry.” By the 1930s, I. A. Richards could insist that “the fact” was failing or had failed much more than just religion. As he writes in Science and Poetry, “Countless pseudo-statements — about God, about the universe, about human nature, about the soul, its rank and destiny — pseudo-statements which are pivotal points in the organization of the mind, vital to its well-being, have suddenly become, for sincere, honest, and informed minds, impossible to believe as for centuries they have been believed.” Yet there is a “remedy,” Richards declared: “to cut our pseudo-statements free from that kind of belief which is appropriate to verified statements,” from any dependence on facts. 
“This is not a desperate remedy,” he insisted, “for as poetry conclusively shows, even the most important among our attitudes can be aroused and maintained without any believing of a factual or verifiable order entering in at all.” Not just religious emotions, but also the whole complex of scientifically invalid but existentially inescapable intimations of significance, are what poetry, or more broadly the sublimated religion of the high humanities, might save. 

    This sublimation was institutionalized in the twentieth-
century university’s commitment to the study of art and 
literature, where the humanities secured a repository of post-Christian meaning precisely for the educated classes that had fallen furthest away from faith. As During observes, an important distinction between cultural secularization and religious secularization is that “unlike religion, the humanities have always been classed. In their formalized modes especially, they have belonged mainly to a fraction of the elite.” That historical reality has led, too hastily, to a prevailing diagnosis of the crisis of the humanities as essentially one of status-signaling. One commonly hears humanities professors lament, in a sociological vein, that classes in literature or art history are under-enrolled now because knowledge about those topics no longer confers “cultural capital” — no longer impresses, or even interests, one’s bourgeois dinner party guests. This is true, but question-begging. The loss of cultural prestige follows upon a more primary loss of felt significance. 

    Artistic modernism bears a special relationship to this history, because its emergence coincides with, is indeed an effect of, the decisive acceleration of secularization between the mid-nineteenth and the early-twentieth centuries. W. B. Yeats’ “The Second Coming” might stand as an emblem of what Helen Vendler called, in these pages, this “single historical fact: the exhaustion of Christian cultural authority after its ‘twenty centuries’ of rule.” As the scholar of modernism Matthew Mutter, from whom I borrowed the conjunction of Arnold and Richards, puts it, modernism “is the first sustained moment in the long secularization of Western intellectual culture where writers begin to imagine a comprehensively post-Christian future, and where secularism becomes, out of necessity, an object of reflection.” This does not mean that there were no religious or spiritualist modernists, or that modernists were the first to imagine a religion of art as a compensation for religion as such. They inherited that substitution from the Romantics, but with a difference: they have “become critically aware,” Mutter writes, of earlier “methods of naturalizing and adapting religious concepts,” and with this awareness came critical scrutiny. When 
T. E. Hulme accused Romanticism of being “spilt religion,” he meant to suggest that it didn’t quite know what it 
was doing. The art of his own time, he implied, ought to be more self-transparent about its operations. Modernism lives by a spirit of critical demystification supposedly more reflexive, more thoroughgoing, than its predecessors. This is what Mutter calls “modernist secularism” or, alternatively, “restless secularism.” 

    The result is that secularism itself, and especially its compensations, became an object of acute critical concern. Romanticism came to look too tethered to the religious models it was, only half-wittingly, adapting; that kind of self-delusion would no longer do. The technophilic futurism of F. T. Marinetti and company was one response, the machine being the appropriate god for a secular age. T. S. Eliot’s neo-orthodoxy offered another solution, D. H. Lawrence’s erotic neo-paganism yet another. What all of these strategies have in common is a distressed sense of the vulnerability of the model of culture as sublimated religion — perhaps an adequate compromise when things weren’t quite so far gone but hopelessly naïve after the aeroplane, Freud, and the Great War. 

    Modernism was therefore constituted by its impassioned attempts at resolving contradictions that remain with us. For a while, and with the collaboration of a system of higher education that promoted the study of interpreted aesthetic traditions perceived to climax in modernism, its particular modes of high seriousness held the field. (In some cultural arenas, they still do: the tendency of the Nobel Prize in literature, for instance, has up to the present been to reward writers recognizably working in the wake of a modernism become traditional, an acknowledged source of the major aesthetic possibilities.) But in others — American publishing by and large, as well as in American university curricula — modernism’s rigors have passed into a kind of obsolescence. Its demands made sense only so long as art was felt to matter in something like the same way religion once had. With the faltering of that faith, the study of modernism first, and then of the “high humanities” in general, is contracting almost out of existence. 

    Ivy Compton-Burnett was born in England in 1884. Her first mature novel (she had disowned an earlier work of juvenilia), Pastors and Masters, was published in 1925; her second, 
Brothers and Sisters, in 1929. By that time, as Compton-
Burnett’s biographer Hilary Spurling puts it, there was “no doubt that she represented the last word in modernity.” In what did that modernity consist — and what has happened to it? Today, her nineteen novels are almost entirely out of print (although New York Review Books Classics offers handsome paperback editions of two of them, A House and Its Head and Manservant and Maidservant, published in the United States as Bullivant and the Lambs). One would be hard-pressed to discover an undergraduate or even a graduate seminar in which her works are taught.

    After a series of losses in her early years, Compton-Burnett led an almost eventless life, save for the event of writing and publishing. Her adolescence and young adulthood are a record of family deaths, one sibling after another — some to the routine devastations of a pre-antibiotic age (a beloved brother died of pneumonia), others to psychopathological tangles specific to her rather odd family (two sisters died by poison, 
in a suicide pact). Another brother, Noel, died in the Great War, after which his widow tried to kill herself. Ivy nursed her to health. 

    Noel, with whom Ivy was close, went to King’s College, London. Her biographer tells us that Ivy absorbed through him “not only the general skepticism prevalent at King’s but even the mannerisms of Cambridge conversation.” Compton-
Burnett evolved a stylish atheism ballasted by knowledge of the tragic waste of the war. Here is some representative dialogue from Pastors and Masters:

    “I think I have found myself at last,” said Herrick. “I think that, God willing, I shall have done my little bit for my generation, done what every man ought to do before he dies.” . . .

    “Assuming God, you wouldn’t do much if he wasn’t willing,” said Masson. 

The influence of Cambridge was decisive; the most distinctive feature of all of Compton-Burnett’s novels is that her characters talk to one another about their melodramatic problems (involving wills, inheritances, bigamy, false parentage — things like that) with a high degree of linguistic self-consciousness (“Words pass from mouth to mouth. It is the only way you can become conversant with things,” as one character says, describing the circulation of a rumor). Told largely in dialogue, the novels read as if Samuel Beckett were writing Freudian soap operas parodying the way Henry James’ people talk. Oedipal rage is routine in Compton-Burnett’s books — she rings grim changes on all the ways that children can hate their parents and that parents can destroy their children. Incest is a motif. Murder and theft are not unheard of. These sordid plots play out in dialogue that is arch, hyper-formalized, elliptical, and extremely precise. Unlike the Oscar Wilde comedies with which they are clearly affiliated, Compton-Burnett’s novels have something cold and forbidding about them, something deliberately, nastily airless, even when they are quite funny, as they often are. They are “glacially witty,” in the apt phrase a reviewer applied to them in The Atlantic in 1951. The subjection of extreme emotional intensities, drawn from highly melodramatic plots, to disciplined grids of language and syntax is one face of Compton-Burnett’s modernity.

    In the milieux of decaying gentry that Compton-
Burnett depicted in novel after novel, men of the cloth are not unknown, but religion as such is almost never Compton-
Burnett’s real topic. Religiosity, rather, is a signal of a personal deficit. As Spurling observes, “an active faith is generally a sign of mental or moral obtuseness.” Compton-Burnett’s vicars “are usually odious, fools and toadies or worse.” This cool contempt for religion and the religious, so different from the painfully achieved unbelief of her parents’ generation — “Ivy and her brothers seem to have reached their position of humorous incredulity easily and early . . . apparently without any of the torments suffered by an older generation of conscientious Victorians,” as Spurling puts it — is another face of Compton-Burnett’s modernity.

    But it is not, for the most part, thematized or presented as an artistic problem. Compton-Burnett’s unanguished atheism may be modern, but it is rarely modernist, in the sense that it does not submit the secularization process that it reflects to self-conscious scrutiny. Secularism is its condition but not its theme. “Religion, or the lack of it,” as Spurling says, “plays no great part” in Compton-Burnett’s books, “except as a convenient indication of social and intellectual standing.”

    That is almost true. There is one major exception: Elders and Betters, from 1944, perhaps the greatest of Compton-
Burnett’s mid-career novels, the plot of which can be summarized in a few sentences. The Donnes — the widower Benjamin and his children, Anna, Bernard, Esmond, and Reuben — have recently moved into the neighborhood of Benjamin’s sister, Jessica Calderon, her husband Thomas, and their children Terence, Tullia, Theodora, and Julius. Also living with the Calderons is Benjamin and Jessica’s invalid sister Sukey, possessor of a failing heart and a large fortune. Sukey soon dies and Anna, thirty and therefore approaching old maidhood, contrives by deception to inherit Sukey’s fortune. Jessica, aghast not so much at her failure to inherit as at the discovery, so she thinks, that her sister did not love her, commits suicide. Anna uses her fortune to marry Terence, love of whom seems to have been her motive. 

    This is the only novel in Compton-Burnett’s oeuvre in which a central family, the Donnes, is described in ethno-
religious terms: “The family had a faintly Jewish look, and biblical names had a way of recurring amongst them, but they neither claimed nor admitted any strain of Jewish blood. The truth was that there had been none in the last generations, and that they had no earlier record of their history.” In Compton-Burnett’s world, this is an unprecedented, and unrepeated, instance of genealogical specificity. And both family names, the Donnes and the Calderons, carry associations with Catholicism (John Donne was born to a recusant family; Pedro Calderón, the great seventeenth-century Spanish dramatist, became a Catholic priest). Calderon, moreover, is a not uncommon name among Jews of Iberian origin. It might be true that Jessica “held the accepted faith” — English Anglicanism — “and lived according to it,” but through their Jewish looks and their Catholic names, the Donnes and Calderons might be open to other admixtures.

    That possibility is realized in the private religion of the youngest characters in the book, Julius and Theodora (“Gift of God”), who worship in secret an Asiatic deity called Chung, whom they address in the cadences of the King James Old Testament: “‘O great and good and powerful god Chung,’ said Theodora Calderon, on her knees before a rock in the garden, ‘protect us, we beseech thee, in the new life that is upon us. For strangers threaten our peace, and the hordes of the alien draw nigh. Keep us in thy sight, and save us from the dangers that beset our path. For Sung Li’s sake, amen.’” 

    Theodora and Julius’ charming syncretic faith represents the only sustained treatment of religion in all of Compton-
Burnett’s novels — almost the only treatment at all. Touchingly precocious, the two children work through the vast problems of post-Christian belief as they offer a sort of anthropological commentary on their own invention. “‘Sung Li is a good name,’ said Julius, as they rose from their knees. ‘Enough like Son and yet not too much like it. It would not do to have them the same.’” His sister responds, “in a tone of supporting him,” that “blasphemy is no help in establishing a deity.” They go on to speculate about whether the power of Chung will persist into adulthood, or whether he is merely a children’s god. And they stumble upon hard questions about their own moral character:

    “After the age of fourteen his influence fades,” said Julius, in a tone of suggestion.
    “Then people have to turn to the accepted faith. Their time of choice is past. But the power of the young gods is real for those who are innocent. That would be the test.”
    “But we are not innocent,” said Julius.
    “Yes, I think we are. Children’s sins are light in the eyes of the gods.”
    “We steal things that are not ours, Dora.”
    “Yes, but not jewels or money or anything recognised as theft.”
    “A sixpence would be thought to be money.”
    “But it is not gold or notes or anything that counts 
to a god.”
    But the steps of the pair faltered, and they turned with one accord back to the rock.
    “O great and good and powerful god, Chung,” said Dora, as they fell on their knees, “forgive us any sins that go beyond the weakness of youth. Pardon any faults that are grievous in thy sight, for temptations lie in wait. For Sung Li’s sake, amen.”
    “Temptation does beset us,” said Julius, gaining his feet.
    “It is a pity that so much of the pleasure of life depends on sin,” said his sister. “We could not be expected to live quite without joy. No god of childhood would wish it.”

Later they wonder whether the exotic names that they have given Chung and Sung Li, “purloined from a book,” are “fitting,” and decide that they are. Dora points out that “a name with a Chinese sound is more reverent than an English one.” “We could not call a god John or Thomas,” Julius says. “Or Judas,” says Theodora.

    For Julius and Theodora, religion might be a fiction, but the sin it polices is a fact. They share this conviction with other exemplary moderns, such as Hulme, who writes of the “sane classical dogma of original sin,” and, most exemplarily, the anthropological Freud of Totem and Taboo, for whom the primordial Oedipal murder is original sin’s original scene. It is their recognition that they are not immune to — that indeed they are already corrupted by — the same cruelty and aggression that infect their elders and “betters” that makes the children in Elders and Betters so poignant and interesting. 
“‘O powerful god, Chung,’ said Julius, in a rapid gabble, turning and inclining his knee, ‘be merciful to any weakness that approaches real transgression.’” Julius and Theodora 
have no illusions about the innocence of childhood, their 
own included. 

    Compton-Burnett is nowhere more modernist than in suffering a lively sense of the reality of sin, a sense which should be incompatible with her atheism. This tension, or contradiction, was typical. It was partially for what he took to be their implausibly anodyne vision of human benevolence that Hulme railed against the Romantics; and in this he was sometimes joined by Yeats, for whom, as he put it in one of his autobiographies, Romantics such as Emerson and Whitman “have begun to seem superficial because they lack the Vision of Evil.” Elsewhere, Yeats averred that “the strength and weight of Shakespeare, of Villon, of Dante, even of Cervantes, come from their preoccupation with evil,” whereas in “Shelley, in Ruskin, in Wordsworth . . . there is a constant resolution to dwell upon good only.” What Mutter calls Yeats’ “sense of recalcitrant evil” compelled Yeats to embrace what he himself called “original sin,” a concept that he thought was compatible with heathenism and paganism. That was one solution to 
the problem. 

    T. S. Eliot’s conversion to Anglicanism, in which his reluctance to dispense with the idea of original sin figured prominently, offered another solution, albeit an incomprehensible one to more resolutely secular modernists such as Ezra Pound and Virginia Woolf. “I have had a most shameful and distressing interview with poor dear Tom Eliot,” Woolf wrote to her sister, Vanessa Bell, in 1928, “who may be called dead to us all from this day forward. He has become an Anglo-Catholic, believes in God and immortality, and goes to church. I was really shocked.” Nine years earlier Pound had been similarly struck by Eliot’s assertion, during a trip to the south of France, that he thought he might believe in life after death. In language he could not have known mirrored Woolf’s to Bell, Pound in Canto XXIX, published in 1930, had Eliot express a certain satisfaction at Pound’s surprise: “‘I am afraid of the life after death.’ / and after a pause: / ’Now, at last, I have shocked him.’” Writing in his own voice, Pound was capable of moralizing anti-religiosity: “All religions are evil.” 

    Pound’s own solution to the felt thinness of secularity, at least before he was seduced into grandiose fascistic politico-
economic visions, was the old nineteenth-century one, the effete “religion of culture” that Matthew Arnold’s critics accused him of desiring. (“It is said to be a religion proposing parmaceti, or some scented salve or other, as a cure for human miseries,” as Arnold paraphrased the charge.) Eliot felt its attractions, too, but decided, in what seemed like deliberate perversity to many of his peers, that the answer was instead to merge an avant-garde poetics with a resuscitated religious orthodoxy, even as Pound was merging an avant-garde poetics with a pseudo-scientific theory of political economy. For Eliot, the critic Matthew Hollis writes, “the weakness of the condition of literature was an effect of the weakness in the condition of religion.” 

    For four months in 1934, Pound and Eliot argued about these issues in the pages of the New English Weekly. Their dispute began with Pound’s aggressively negative review of Eliot’s After Strange Gods, the text of a lecture given at the University of Virginia arguing, among other things, for the importance of “orthodox” religion to culture in general and literary culture specifically. Pound took a programmatic secular line: “‘Religion’ [has] long since resigned,” he wrote. In the old days, “religion was real,” but today, for most people, it is “either a left-over or an irrelevance.” (In later decades Eliot himself suppressed the text of this lecture, which had become notorious for its anti-Semitism — “reasons of race and religion combine to make any large number of free-thinking Jews undesirable.” It is one of the more tangled ironies of the history of modernism that Pound, who would become a systematic anti-Semite and a traitorous fascist during the Second World War, should have objected to After Strange Gods, not indeed out of any love for the Jews but out of distaste for the Christian parochialism of Eliot’s anti-Semitism.) 

    Like Woolf, Pound finds Eliot’s profession of faith essentially incomprehensible — so much so that he claims not even to know what Eliot means by “religion” in the first place. This, despite the fact that there is nothing really obscure in what Eliot intended. As the literary critic Christina C. Stough summarizes the Pound–Eliot controversy, religion for Eliot comprised “a full sense of Heaven and Hell — a religion of sacraments, rituals, orthodoxy, and above all, an acceptance of original sin.” For Woolf, the notion that Eliot could believe in all this was absurd: “A corpse would seem to me more believable than he is. I mean, there’s something obscene in a living person sitting by the fire and believing in God.” For Pound, such belief was perhaps even pathological; he refers, in a letter of 1936, to “Eliot’s crazed and pseudo-religious brain.” 

    Compton-Burnett’s personal convictions on the matter were surely closer to Woolf’s and Pound’s than to Eliot’s. Yet she shared with Eliot, as with Yeats and Hulme, a vivid sense of human sinfulness, and so she was compelled, in Elders and Betters, to take religion seriously, as a poetic system with moral effects. Or at least half-seriously: religion in this novel is after all a children’s game. But is it merely that? 

    Throughout Elders and Betters, the adults use a secularized and ironized religious language to refer to their own actions and emotions. When Anna, whose secret bullying dishonesty triggered Jessica’s suicide, tells her father that Jessica “raised the devil within me,” she is speaking accurately, although dissembling in context. When, later, Anna rebukes her cousin Terence for what she suggests is his hyperbolically gloomy portrayal of human nature — “Oh, we are not such sinks of iniquity. We are most of us well-intentioned, everyday sort of creatures” — her irony of course conceals the truth about herself. Closer to Compton-Burnett’s own vision is Terence’s rejoinder: “The part of us” — where “us” means not just the Calderons and the Donnes but all people — “that we have in common would shock anyone.” Terence’s cousin Bernard deflates Terence’s pessimistic articulation of something like original sin by reducing it to mere shallow moralizing: “You sound as if you would make a resolution on New Year’s Day.” 

    But for Dora and Julius, the problem of “the part of us that we have in common” cannot be so easily contained or dismissed. Small lies trouble them, like their having told their governess that their mother had given them a holiday from instruction. “‘O great and good and powerful god, Chung,’ prayed Dora, ‘forgive us, we beseech thee, the lie that has passed our lips. For we have uttered to thy handmaid, our governess, the thing that is false, yea and even to our mother. And this we did to gain respite from our daily task.’” Their sensitivity even leads them to a sort of scrupulosity, a constant condition of moral auditing; they are always able to discover further subtleties to a misdeed. For instance, after the above prayer for forgiveness:

    “I should think it is especially wicked to take advantage of [Mother’s] being absent-minded, when it is a sort of illness,” said Dora.
    The pair met each other’s eyes and in a moment were back at the rock.
    “O great god, Chung, pardon any wickedness we showed in putting our mother’s weakness to our wrongful purposes. For Sung Li’s sake, amen.”

    The heightened language of Julius and Theodora’s worship, laced with the rhythms of the King James Bible and the secondhand exoticism of Orientalist adventure books, is one of the funniest and most touching inventions in all of Compton-Burnett’s work — touching because, for the children themselves, its magic is absolutely real. The deployment of ritual language really can render atonement for the “wrongful purposes” one is always discovering within oneself. This despite the fact that they know perfectly well that their sacred idiom is merely their own invention, that it lacks any priestly or scriptural warrant. In their games with Chung, Dora and Julius are very serious ironists. Irony is one way of handling the problem of sin in a secular age. 

    Serious irony is modernism’s master mode. It is a way of approaching not just the sacred in a secular age but also the authority of the past in a technological age. One of its basic forms is the quotational — the deployment of a rhetorically self-conscious language larded with quotations or near-quotations or shadowed by the penumbra of quotation. This is what Louis Menand, writing of T. S. Eliot’s verse, calls “the literary quotation marks of imitation and allusion.” (Most infamously, there is Walter Benjamin’s fantasy of producing a book that consists of nothing but quotations.) Eliot’s facility with a kind of virtuosic imitation became clear in his second book of poems, Ara Vos Prec, modeled stylistically and metrically on the French symbolists. That book was a disappointment to many reviewers, who sensed in Eliot’s ironic recourse to an earlier generation of ironists the last redoubt of a minor satirist. But in “The Waste Land,” Eliot performed a kind of alchemy on an allusive poetic idiom that had otherwise come to seem insincere. Eliot understood that what Menand calls “the aura of insincerity” associated with literary quotation was in fact where its modernizing potential most clearly lay. Irony wedded to a sufficiently capacious grasp of tradition could save one from insincerity by estranging the source material — thereby smuggling intensities of emotion through the cracks between the juxtaposed inheritances, the shored fragments, of the poetic past. Irony and collage enable an extreme and sincere intensity. That was “The Waste Land”’s achievement, its instruction to the culture. 

    In their own way, Dora and Julius are similarly masters of a sincerity that issues, paradoxically, magically, from cobbled-together pieces of past language, a King James cadence made newly powerful in the mouths of these precocious, self-aware, vulnerable children. Where Eliot wrung a renewed sense of sacred meaning, however fractured and tenuous, from sources drawn from anthropological texts such as The Golden Bough, so Dora and Julius are attracted to pagan rites of sacrifice (just flowers, no animals) and even to witchcraft. (The two children, atoning for a scuffle over a wishbone, drop some of Dora’s hair into the fireplace. Dora: “Say an incantation over the witch’s cauldron.” Julius: “We ought to have the finger of a dead child, not the hair of a live one.”) Where Eliot’s interest in non-Western religion, in particular the Vedas, supplied a much-needed mystical charge to his exhausted Unitarianism (and supplied some of “The Waste Land”’s most famous and estranging lines, “Datta. Dayadhvam. Damyata. / Shantih shantih shantih”), so the fictional Asiatic divinities “Chung” and “Sung Li” caress the children’s Biblical idiom with an exotic wind from the East, amplifying its authority by suggesting that its truths are transcultural and transhistorical, with rhymes across time and place in the great book of comparative religion. And where Eliot’s own conviction, confessed to Pound before “The Waste Land” was even begun, that he believed — needed to believe — in life after death must have been, in 1919 at least, a component of a generalized mysticism rather than an item of orthodoxy in an Anglican creed he had yet to embrace, so Dora and Julius’ own anxious envisioning of an afterlife flows less from Anglican doctrine than from the multifarious speculations of their uncannily sophisticated childish imaginations. “‘Of course Mother can look down [from Heaven] and see,’ said Dora. ‘It almost seems a pity that people can do that. 
It might prevent them from having perfect bliss.’” 

    When Julius and Dora get going, their rituals take on a life of their own — they become possessed, transported by their own serious game. Here, in one of the novel’s most striking scenes, Julius encourages Dora, who is reluctant at first, into anathematizing their father, Thomas, for what they rightly perceive as his condescending failure to grasp the emotional complexity of their attitude toward their mother’s death. After scolding them for fighting, Thomas encourages them to mourn in what strikes Julius as a false and sentimental fashion. “If people can’t talk about their dead in a natural way,” Julius says, “they had better be silent.” 

    “Of course we did fight,” said Dora.
    “Well, and why not?” said her brother, with increasing violence. “Are we children or are we not? Are we likely to have the ways of a man and woman, or are we not? Had we been through an impossible day through no fault of our own, or had we not? Is it our fault that Mother is dead? I should like to hear Father answer those questions.”
    “You did not ask them,” said Dora.
    “The time was not ripe. The moment is not yet. But I hold them in store. And then let Father rue the day.”
    “I don’t suppose you would dare to ask them. And it wouldn’t be any good to make him hate you.”
    “There is such a thing as wholesome respect,” said Julius.
    “We are in his power,” said Dora. “I suppose he could starve us if he liked.”
    “Whatever base and dastardly thing he contemplates,” said Julius, striking an attitude, and losing sight as readily as his sister of Thomas’s having no inhuman tendencies, “whatever dark meditations have a place in his heart, there is no easy way for him towards them; there is no royal road. So let him keep the truth in his heart and ponder it.”
    “He gives us food and clothes and has us taught,” said Dora, in a dubious tone, uncertain if mere fulfillment of duty should operate in her father’s favour.
    “The minimum that a man could do,” said Julius. “The least amount of expense and thought, that would save him from the contempt of all mankind. Would you have him cast us forth, as if no tie bound us?”
    “As if we were not his kith and kin,” said Dora, falling into her brother’s tone. “As if we were penniless orphans, driven to seek a moment’s shelter within his doors. As if no sacred tie of blood bound us, hand and heart to heart.”
    “Let him take thought for the dark retribution that is gathering,” said Julius, with a deep frown. “Let him take counsel with himself. That is all I have to say.”
    “The bread he has cast upon the waters, will return after many days,” said Dora.
    “Then he will repent the grudging spirit that stayed his hand.” 

    Unlike all the other scenes of Dora and Julius’ ritual language use, this one does not take place at the temple of Chung, and it is not addressed to that deity. Instead, as Julius seeks to vent his anger at his father and to encourage Dora to join him in that feeling, his speech becomes stained by the ritual idiom — at first, it seems, almost involuntarily, and then with mounting deliberateness. Dora, hesitant at first (“He gives us food and has us taught”), is at length compelled by the rhythm and the sacred diction of Julius’ lines to join him: “‘As if we were not his kith and kin,’ said Dora, falling into her brother’s tone.” The prosody of the sacred here offers the children a sort of therapeutic pressure valve for their pent-up feelings — and affords them one of religion’s less savory uses, the cursing of an enemy. 

    The scene ends with our learning that the children’s older brother, Terence, has been listening in on their strange conversation. “Terence rose and left the room, disturbed by the activities of his brother and sister, whom he believed to be acting some kind of play, a view in which he was right.” “Some kind of play” — rooted in an ever-expanding aura of allusion and quotation, in a pseudo-Biblical speech that riffs but does not mock — is how, for Julius and Dora, the highest devotions are enacted, and the sternest moral judgments passed. Julius and Dora’s private ritual language enacts modernism’s serious irony, in a childish key. 

The ritual, even magical force of language was a prominent aspect of the modernists’ interest in religion. If modernist literature is the literature of the first historical period in which secularity itself was an object of reflection, its difficult style was the new language in which that reflection could unfold. But its newness — in Joyce, in Pound, in Eliot, in Yeats, in Stevens, and indeed in Compton-Burnett — is compounded of older strains, including, saliently, of idioms drawn from the religious traditions felt to have been superseded. In other words, the “restlessness” of what Mutter calls “modernist secularism” inheres in its uneasy — alternately skeptical and re-enchanted — repurposing of ritual language. Nor is this restricted to the Anglophone world. As Pericles Lewis observes, both Kafka and Proust became the transnational influences they did in part because, “fascinated by the limits of secularization,” each relied on “the frequent use of religious language.” Their very modernity is a consequence of their post-religious navigation of the magical formulas of religion. 

    Matthew Arnold wrote in 1880 that “most of what now passes with us for religion and philosophy will be replaced by poetry.” If that was true in 1880, it was even truer in 1909, in 1922, in 1944. It doesn’t seem to be true anymore. One way of understanding cultural secularization, then, is as the process whereby the ritual emanations of literary language no longer arouse a response among a large enough body of readers to sustain literary culture as a satisfying substitution for religious feeling. Or, more accurately, to sustain it at the scale at which it had become entrenched over the course of the twentieth century. Literary culture persists, of course, but interest in any literature not of the immediate present is increasingly the mark of the antiquarian. 

    The analogy between religious secularization and cultural secularization should not be pressed too far. The most important difference is that the formal humanities have always been the province of the elite. But religious secularization, too, was an elite, rather than a popular, phenomenon in the period encompassing Arnold, Eliot, and Compton-Burnett. (In fact, Arnold’s era saw fairly high rates of church attendance, and although attendance declined at the end of the nineteenth century, it was fairly stable in the first several decades of the twentieth century.) The shock that Woolf and Pound felt at Eliot’s conversion was a distinctly rarified response. The average Englishman would have seen nothing surprising. 

    In the West, irreligion as measured by church attendance as well as by reported self-identification is much more widespread today than it has ever been — even in the United States, long thought to be an outlier in this regard. If “cultural secularization” is a useful concept (as obviously I think it is), its relationship to the uneven distribution and recent acceleration of religious secularization will need to be worked out. It seems fair to suspect that the weakening claims of religion, the increasing unfamiliarity of its ways of thinking and seeing — of its poiesis — have also vitiated the study of literature specifically and the humanities in general. Will a completely secular society create and preserve and transmit high literature? Must supporters of the humanities hope for an incomplete secularization? In the absence of religious forms, how will we honor and express religious feelings? Are the humanities, at least in many of their forms, at bottom the study of the same eternal themes that preoccupied religion — the same questions with different answers? 

    When, in a conversation with her father, Dora lets some of her heightened ritual idiom slip into her speech, she is rebuked. “Suppose we stop quoting other people, and say the things that come into our own little head,” he says. Then, “there was a silence, while Julius and Dora exchanged a glance, and with it a resolution to submit to fate.” That fate increasingly appears to be one of an obsolescent tradition. But the contours of cultural obsolescence were first fully comprehended 
by the modernists themselves. In the coming years, the most vital cultural institutions, whether in the academy or outside of it, will need to reckon with the terrible responsibility of preserving kinds of value and forms of meaning to which the wider culture is inhospitable. 

    Yehuda Halevi

    Nine Poems

    Yehuda Halevi (c. 1075–1141) was the Hebrew poet who crowned the startling period of Andalusian cultural production that Jewish history calls the Golden Age. In a moment of symbiosis in Islamic Spain in the eleventh and twelfth centuries, Hebrew poetry flowered as it had not since the Bible and would not again until the modern era. A courtier-rabbi class arose serving Muslim rulers, steeped in their Arabic language and culture. The men of this new Jewish elite adapted Arabic poetics to Hebrew, including quantitative meters and a purist approach to the lexicon, while adopting Arab poetry’s embrace of secular alongside religious verse.

    Halevi was born on the frontier between al-Andalus and Castile, possibly in Tudela, while it was still under Muslim control. As a teenager he went south to Granada, where he rose to prominence as one of the finest Hebrew poets of the age. In addition to over eight hundred extant religious and secular poems, he wrote (in Arabic) The Kuzari, an important work of Jewish philosophy that challenged the rationalism that was ascendant in the Jewish thought of the period. He was also a successful physician and merchant who moved, in the increasing political instability of the period, between a number of communities, including Lucena, Seville, and Christian Toledo.

    Late in his life Halevi repudiated Andalusian cosmopolitanism and prepared for a rare and dangerous pilgrimage to Palestine, which he undertook shortly before he died. His poems from this period are known as his Shirei Tzion (Songs of Zion), and they reflect the “mystical geography” (in Hillel Halkin’s phrase) laid out in The Kuzari, which holds the Land of Israel as the site of utmost Jewish holiness. While many modern thinkers have claimed this work of Halevi’s as proto-Zionism, it resists such retroactive categorization. The scholar Raymond Scheindlin has argued persuasively that Halevi was following the period’s Islamic pattern of the mutawakkil, an ascetic withdrawal from society and dramatic altering of one’s life.

    In keeping with their Arabic models, the Golden Age poets wrote for the most part a highly, almost extravagantly, ornamental verse. Their work is sonically lush with alliteration, assonance, and interwoven consonants and vowels; and syntactically dense with double and triple puns, homonyms, and other wordplay. It is also written within elaborate formal constraints in a Biblical Hebrew which layers it with intertextual references to that canon (as well as the contemporary Arabic poetry on which it is modeled). It thus presents an extreme case of the inadequacy of translation in general. 

    My particular focus as a translator is on the sound, or what poets like to call the music, of this poetry. As a poet reading these poems I experience above all an utter reveling in the materiality of language. My goal is to create versions that approach some of this sonic richness. In this light I have chosen the music over form and precision of content. I aim to render this music as immediate as possible, and so I sometimes adapt archaic images and terms to ones with more resonance in contemporary language.

    Dan Alter

     

    [Beloved did you forget . . . ]

    Beloved did you forget how you slept between my breasts
    & why have you sold me forever into chains

    Didn’t I chase after you once in an untamed land
    Mountains & sand, Dead Sea & Sinai my witnesses

    You had my love & I was your desire, so how
    Can you share out my riches without me?

    Pushed into Seir, forced toward Kedar,
    Turned in the furnace of Greece, abused in the bond of Iran

    Who besides you could set me free
    Who but me, so caged by hope?

    Lend me your strength,
    I will give you my tenderness

    [Oh homeland don’t you wonder]

    Oh homeland don’t you wonder on your captives,
    who call to you, the ones left of your pastures

    From seaward to sunrise, tree-line to barrens,
    calling from far or close by from all sides 

    Call of a captive of desire, who sheds tears like dew
    on Hermon’s peaks, & longs to let them fall on your mountains

    When I wail out your torment I’m a hound, & when I dream
    your homecoming has come, I’m strung with your songs

    My heart beats to Beth-El & louder at P’niel
    & Makhanayim & every point your pure ones touched

    Where the holy nearness filled your winds, your maker
    opened your windows to the windows of skies

    & only God’s glow for your light, no
    sun moon or stars to illuminate you

    I would let my last breath spill out right where
    the divine spirit overflowed your chosen ones

    You’re the royal home & the holy seat & how
    have servants sat down on your heroes’ thrones?

    If only I could wander in the places
    God was shown to your envoys & seers

    & who will make me wings & I will range
    with my heart in pieces between your ragged peaks

    I would fall to my face on your ground & thrill
    to your stones & feel your sweet dust with my fingers

    & then standing on my ancestors’ gravestones,
    I’d be stunned in Hevron how your finest are buried there

    I would cross your forests & terraces & pause
    in awe at the Gilead ridgeline where Moses was buried

    His & Aaron’s burial mountains, those two huge
    lights shining on you, showing you the way

    Living breath — the air of your soil, & myrrh fragrance
    the grains your earth, & honey-flow for your rivers

    It would soothe my mind to go shoeless, naked
    in the waste & ruins that were your shrines

    Where your ark was hidden away, where cherubim
    stayed in your innermost chambers

    I’d shear off my hair grown in devotion, curse the years
    in unclean lands that fouled your most devoted

    How fine can the food on my plate taste when I see
    your young lions dragged along by dogs?

    Or how will daylight sweeten my eyes, while
    I watch crows carry away their kill of your eagles 

    Slow now, cup of suffering, ease up, my belly
    & soul are swollen with your bitterness

    When I remember Jerusalem straying, I drink,
    & her fallen sister Samaria, & I drain it

    Zion loveliest crown, how you weave love & grace
    as of old & your friends’ souls woven through you

    Those who smile if you’re at peace & ache
    at your desolation & weep for your shattered pieces

    From a captive’s pit yearning for you, each
    in his place bowing toward the arches of your gates

    Your numberless herds driven out & scattered
    from mountain to hilltop but your fence-lines not forgotten

    Who cling to your fringes & fight to climb
    & grasp onto your date palms’ canopies

    Could Babylon & Egypt ever match you, their hollow
    prayers compare to the gemstones you told truths with?

    Who will compare to your nobles & holy men,
    your chanters in the temple, singers in the choir?

    False-god kingdoms will fall & be gone — your force
    is forever, through the generations your jewels

    God sought you for a home, & joy to a human who
    has chosen, draws closer, to dwell in your courtyards

    Joy to one who awaited, arrives, lays eyes
    on your light dawning & your sunrises burst upon him

    To see your chosen thriving, to thrill in your joy
    as you come back to your long-ago bloom

     

    [West, this is your wind, wings]

    West, this is your wind, wings      perfumed with balm & apple
    You come from the trader’s warehouses,       not the storehouses of wind
    You lift the swallow’s wing calling me to freedom      like the fragrance of just-picked aloes
    How we all long for you, by whom      we ride a few boards over the sea
    Please don’t ease off your touch on the ship      when day catches up or settles down
    Level the depths, spread the sea open,       don’t stop until you reach the holy mountains
    Rebuff the easterly that thrashes up the seas      until the waves seethe like a pot boiling over
    But what can the wind do, bound in God’s hand,       now restrained, now he sends it?
    My only wish: in the hand of the One on high      who raises high mountains & makes the wind

     

    [In distances a dove]

    In distances a dove strayed into a forest
    lost, not knowing how to recover

    Hovering, wings swinging spun
    wheeling circles around her beloved

    Counting down the millennia to redemption
    but her figuring leaves her dumbfounded

    Having suffered for her lover long years
    of wandering, soul exposed to the grave

    Saying I will no more remember his name
    but it lights in her heart like flame:

    Why would you be her enemy, her beak
    parted for your salving spring rain

    She believes to her depths, turning from despair
    whether she’s raised up or suffers by him

    Let our God come, no longer silent
    fire rising on all sides of him

     

    [Gently, with your soft waist]

    Gently, with your soft waist but heart      hard, gently with me, I surrender.

    Straying only with my eyes, my heart      pure but my eyes drunk on you.

    Let those eyes gather roses      & lilies where they grow together

    in your cheeks, from which I rake fire      to fight fire & if I’m thirsty find water there.

    I would sip the red lips glowing      like coals, & my mouth tongs. My life

    suspends between those crimson lines      but with sundown my death comes. Then

    I find only nights with no end where once      was no midnight for days, time

    was in my hands like clay & constellations      spun like the potter’s wheel.

     

    [Cry out, forest, for a cedar]

    Cry out, forest, for a cedar, for one
    who waited for daylight, but night fell. 

    No, I didn’t know the Seven Sisters
    could die with him, & the morning star.

     

    [Lately with winter rains]

    Lately with winter rains      the land has nursed on a cloud like a baby
    or like a bride shut in by the cold      longing for days of love,
    hours of touching, until spring      comes to ease her aching heart.
    Dressed in flowerbeds of gold & brocade      the way a girl thrills
    at her clothes as she changes      & shares them with everyone.
    Day by day the hues go      amber to ruby to pearl,
    turning pale or green & then      reddening as if kissing.
    So beautiful I wonder      if they’re stars stolen from heaven.
    At dawn we come where the trees lean      with wine & our hearts flaring,
    wine snow-cool to the fingers      that lights fire inside us
    rising from its jug like a sun      to flow into our fine glasses.
    As we stroll under the garden’s shade      it laughs at a rain shower weeping,
    smiles when a cloud cries shaking off       droplets like a burst necklace,
    savoring the swallow-call like liquor      or when the dove behind leaves
    tells a secret like a singer       swinging her body behind a screen.
    How I miss the breeze of those dawns      where my friends’ scent lingers,
    wind which sways the myrtle, wafts      its fragrance to them far away
    lifting & lowering the branches      as palm fronds clap along to bird song.

     

    [She, washing dresses]

    She, washing dresses in the rain
    of my tears, then spreading them in her sunlight

    to dry, has no need of springs with my
    two eyes, nor sun, given how she shines. 

     

    [My heart is in the east]

    My heart is in the east, while I’m in the far west
    how can I savor the food in my mouth?

    How make good my pledges & vows, with Zion
    tied down by Edom & I’m in Arab bonds?

    As easy to leave all the fineness of Spain
    as it would be sweet to see the ruins of the Shrine

    The Adults in the Room

    I was a liberal before I knew what the word meant, before I had read a word of Locke, Mill, Berlin, and Rawls, before, in fact, I knew anything about the world at all. Liberalism was not a political idea; it was a family loyalty, born in the blood, and it became a way of life. We liberals commonly tell ourselves that, unlike the far right and the far left, we reach our beliefs through a rational inspection of the world as it is, but I didn’t get my ideas that way. I didn’t form my convictions through a critical evaluation of evidence about life as it actually was. I was born a liberal. 

    My parents were liberals, their friends were liberals, and my father worked for thirty years for liberal governments in Canada. Some of my earliest memories are political: at the age of five, in 1952, watching the Republican convention with my parents on the first fuzzy black-and-white TV we ever owned. My parents were Canadian diplomats in Washington, and they were for Adlai, not Ike, and like their American friends they were horrified by McCarthy, the scowling Republican bully who presided over the Senate Army hearings. So before I knew anything at all, pretty much as soon as I could stand up and put on my own clothes, the label had been sewn into the shirt on my back.

    While other kids had baseball or hockey stars for heroes, mine were Jack and Bobby Kennedy, Bayard Rustin, Martin Luther King, Jr., and Rosa Parks. By the age of twelve, I was copying Jack Kennedy’s mannerisms. I couldn’t do the Boston Brahmin accent, but I could put my hand in my blazer pocket, with my thumb down the front seam, the way he did. By the time I was twenty-one I knew by heart Bobby Kennedy’s improvised speech in Indianapolis on the night of King’s assassination, to comfort a shocked and grieving black crowd, quoting Aeschylus about the “awful grace of God.” In those terrible months of spring and early summer in 1968, when both King and Kennedy were murdered, I campaigned for Pierre Trudeau, bringing delegates over to our side in the tumultuous five-ballot struggle at the convention that elected him leader of the Liberal Party and then traveling with him on the cross-country campaign that elected him Prime Minister in June 1968. I was twenty-one years old. Bliss it was in that dawn. It was the only political campaign I have ever been part of where we knew we were going to win; the only question was by how much. It was also the only political campaign where I saw what winning meant. Two nights after his victory I was invited out to Harrington Lake, the Prime Minister’s country residence, to dine with him and one of his then-current girlfriends. Instead of exhilaration, there was exhaustion in Trudeau’s eyes, and I thought I saw fear too, in his dawning realization of what it meant to hold power. 

    My heroes may have been Americans, but my liberalism was Canadian all the way down. Liberalism prides itself on its cosmopolitanism, but in truth all liberalisms are local, since, as the man said, all politics is local. Canadian liberalism had all the self-congratulatory earnestness particular to a small official elite, to whom my parents belonged. It was a managerial doctrine of moderation appropriate for a small country, with no imperial destiny like its neighbor next door, but instead trying to muddle through, holding together a continental nation-state the size of America but with a tenth of its population in a harsh but beautiful landscape where, as Margaret Atwood said a long time ago, the name of the game was survival. 

    Yet, in its muddling way, Canada did more than survive. In the surge of postwar prosperity Canadian liberalism did some great things: a new national flag, a new constitution and charter of rights, a new immigration policy, a national pension, and a national health-care program. The canard that liberalism never dares to take on big enemies is false. To make all this happen, liberal governments had to take on provincial governments, resurgent Quebec nationalism, and vested interests coast to coast, chief among them the pharmaceutical companies and the doctors’ lobbies. So I grew up with a liberalism that knew how to fight. It was unafraid to tame capitalism and to “socialize” medicine and pensions in order to take the fear of catastrophic illness and poverty in old age out of people’s lives. Liberalism’s victories in the 1950s and 1960s laid the foundations of a welfare state not just in Canada, but also in Europe and America. Lyndon Johnson’s administration secured Medicare for elderly Americans and Head Start for poor children. 

    We liberals of the 1960s thought we had laid the granite of basic security under everyone’s feet. Sixty years later, the granite is cracking, the liberal state is frayed, contested, underfunded, straining at the seams, and we are defending our achievement, and none too successfully, against populists and authoritarians who want to take it apart. They have mobilized resentment at the price of social solidarity, but they offer no solutions, or solutions so drastic, such as the forcible deportation of millions of migrants, that they would tear society to pieces. A politics that stokes anger without proposing solutions is not a politics. It is only manipulation, and we like to think that we are in the solution business. 

    We are right about that, but we keep on defending achievements of long ago instead of raising our sights and finding a way to fund and reinvent social solidarity for the twenty-first century. For liberalism’s heyday — 1945 to 1975, what the French call les trente glorieuses, the glorious thirty years of robust growth and relative equality — has gone forever. Beginning with the oil crisis of the 1970s, an abyss slowly opened up between a credentialed elite and an uncredentialed working class whose steady union jobs were stripped out and shipped overseas. Those of us who got the credentials to enter the professional classes did well, but plenty of our fellow citizens fell behind. We didn’t notice this in time, and our failure opened up a chasm between who we were, what we believed, and the people we represented. We kept offering “equality of opportunity,” a chance for the credentialed few to enter the professional elite, without tackling capitalism’s remorseless distribution of economic disadvantage itself. 

    By the late 1990s, the conservatives began to gain power by playing to the resentments of the ignored. The authoritarian right, especially, understood that they could build an entire politics on mocking the blindness of the liberal elite. They didn’t need solutions; stoking the rage was enough. We are now the embattled object of that rage. What will it take to earn the trust of those whose discontent we ignored? Liberalism in the next generation will need to save social solidarity from the “creative destruction” of the market, by rebuilding the fiscal capacity of the liberal state and investing in the public goods that underpin a common life for all. Saying this, at a high level of generality, is easy enough: the tougher part will be finding the language and the cunning to convert a radical liberalism into a politics that wins elections and a governing strategy that pushes change through 
the veto-rich thicket of interests waiting to derail our best-laid plans.

    In the meantime we lament the “identity politics” of our populist and authoritarian competitors, when it would be more honest to admit that identity is where all political belief actually comes from, including our own. My identity — charter member of the white professional classes of Canada — defined my liberalism. What the liberal critique of identity politics does get right, though, we owe to our much-maligned individualism. Identity is not destiny. Every formative confrontation with reality presents each of us with political choices. We can either make up our own minds or borrow someone else’s beliefs. The convictions that stick are the ones that we decide for ourselves. The beliefs that we hold onto are the ones that first required a primal Yea or Nay to the allegiances we started life with. In the 1960s, I could have rebelled against my parents’ liberalism. Many of my generation 
did. Instead I said yes to the world I was born into and to the parents I was lucky enough to have. 

    Black friends of my generation also said yes to their parents’ allegiances, and they remain committed to delivering the still-withheld promise of American equality. But this liberal inflection isn’t a racial obligation. The black entertainment superstars of succeeding generations, with their bling and their Bentleys and their “attitude,” appear to have emancipated themselves from the entirety of their civil rights inheritance and its liberal conscience. So no, identity does not give us our politics. I was born a liberal, but I stayed one for life because I chose the liberal tribe as my own.

    How tribes shape you depends on the times that shape the tribe. My liberalism’s primal beginning was World War II. My parents were in their twenties when the war picked them up by the scruff of their necks and changed them forever. They found each other in London in the midst of the Blitz and the V-2s. They came to maturity during the most dramatic expansion of state power in history. In the space of five short years, Canada, like the United States, became an arsenal of democracy and fielded an army that landed in Normandy and liberated Europe. Their generation discovered the power of government, and the idea that government could be the problem and not the solution was inconceivable to them. There seemed to be nothing that a democratic government under arms couldn’t do, even defeat absolute evil. 

    Because they had watched their world burn down, theirs was a liberalism with internationalism at its heart. Human rights, the United Nations Charter, and the spider’s web now known as the “rules-based international order” were not the vapid bromides that they have become, but my father’s life-calling. He was part of an international generation of public servants who believed that the United Nations system, with its rules and its treaties, would tie down the predators in the international system and keep the small fry safe. 

    Their generation also knew what it was to hunker down in a bomb shelter with strangers, trying to keep the talk in the darkness light while the ground shook. They had lived the cross-class solidarity of those wartime shelters, and they came home from the war believing that liberal government could bind the classes together in peacetime. My left-leaning generation was just as sentimental about working people, except that we didn’t know any actual workers. As a student journalist in the 1960s, I once spent a morning on a picket line with printing workers who had been locked out of the plant that printed our university paper. It was a cold autumn morning, and we walked up and down in front of the plant, carrying picket signs, and what I remember best was not the warm glow of solidarity but a red-faced feeling that I had nothing but good intentions to share with the big men who knew in their bones that they were marching not to victory but to the unemployment line. 

    Looking back now, liberals of my generation didn’t realize that the welfare state we grew up in did not unite classes. It interposed a state bureaucracy between classes, and its programs divided those who earn salaries from those who claim benefits. The rising costs of social solidarity divided citizens into warring camps. Exiting from liberal arrogance means finding a way back to a liberal politics of cross-class solidarity. 

    Exiting from arrogance also means, even if this sounds contradictory, recovering what that inheritance actually believed, before liberalism slipped into the suave managerial discourse it became in the Clinton and Blair years. For my parents’ wartime generation, it was a fighting creed. They knew what they were fighting against: fascism’s cult of death, its loathing of Jews, its national and racial hatred, its lust for conquest and domination. Against these forces of darkness, there was no place for compromise, moderation, splitting the difference, all the liberal virtues. This was a fight to the death that had to be won. 

    We can still learn from this intransigence. The Nazi marches in East Germany, the re-packaged Vichyite racism in the National Rally of Marine Le Pen, and the jeering anti-Semitism in Charlottesville, Virginia, show us that malignity never rests. Liberalism today would do well to be less self-deceiving about its opponents. I used to believe that liberalism only faces adversaries who could be allies tomorrow. I have had to learn there are some enemies in the house, dangerous to democracy, fatal to every liberal achievement, who simply have to be defeated, over and over again. 

    With the onset of the Cold War, my parents’ generation’s anti-fascism turned into anti-communism. By lining up against the Soviet threat, recent revisionists have argued, their generation abandoned their progressive New Deal beliefs and became apologists for American hegemony. According to Samuel Moyn, liberal thinkers such as Isaiah Berlin and Judith Shklar — who happened to have been my teachers — let their anti-communism blind them to the ugly violence of America in its imperial heyday. This progressive critique is meant to chide contemporary liberalism into learning from its mistakes, but it has the opposite effect. It severs the liberalism of today from potential sources of renewal. For if there is anything that Cold War liberalism can teach the next generation, it would be its unflinching opposition to authoritarian tyrannies and a determination to contain and deter their expansionist march. 

    Canadians, like Mexicans, do not need progressive Americans to tell them that America is an imperial power with blunt unilateralist instincts when it comes to defending vital interests. But we never forgot that America had fought fascism, stationed troops and weapons in Europe to deter the Soviets, and ensured that Western Europe stayed free, and we didn’t care overmuch that their motives, like any great power’s, were bound to be mixed. We also did not forget how long it took for America to enter World War II or to let in the refugees — too long, on both counts. So yes, the liberalism that became mine at adulthood was human rights universalist, militantly anti-communist, strongly internationalist, and pro-American at its core. 

    Besides, America also shared with the rest of the world an exuberant popular culture created by artists of genius. My parents’ heroes were Louis Armstrong, Fats Waller, and Ella Fitzgerald. Mine were Buddy Holly, Bob Dylan, Marvin Gaye, Sam Cooke, Wilson Pickett, and the Four Tops, names which still conjure up, across sixty years, what it felt like to be sixteen. It would be comical to call this music liberal, but it was certainly liberating, and it was profoundly American. This was a music of freedom and soulfulness, and tenderness too, and the fusion of black and white music promised something at once exciting, terrifying, and new: a truly inter-racial society. A young white teenager such as myself didn’t know a single black person well, but in our high school dances we danced to black music. We went to basement clubs and listened to grizzled old bluesmen, up from the deep South, and we knew by heart all the haunting and apocalyptic lyrics of Robert Johnson. This was still an innocent time when whites and blacks could learn what they wanted from each other’s culture, before the ban on “cultural appropriation” forced us all back into the false authenticity of our exclusive tribes. 

    We embraced black music, but we had no real idea about what was at stake in the struggles to the south of us. In the summer of 1963, President Kennedy gave his first television address on racial justice. I was sixteen, at my aunt’s house, having dinner with her sister’s husband, Clark Foreman, a New York lawyer, who did pro bono work for the National Association for the Advancement of Colored People. I proclaimed how impressed I had been by Kennedy’s speech. Foreman lowered his bifocals, stared me down across the table, and shook his head. I insisted Kennedy was going as fast as he could. Fast enough? Bull Connor’s dogs were tearing the clothes off black demonstrators in Birmingham parks; Governor Wallace was standing at the door of his state university, barring entry to a qualified black man who wanted to study. Black churches were being dynamited, and young children were dying. 

    My inter-racial enthusiasms were too cautious by half, but the civil rights struggle gripped me. It offered a vision of inter-racial harmony as well as a politics of how to get there: through non-violent civil disobedience, mass rallies, legal challenges in the courts. In the space of a decade, black men and women perfected the whole repertoire of liberal politics for my generation, from the sit-ins and freedom marches in the American south to Rustin’s organizational effort that produced the March on Washington. It was a transformative experience to see that liberalism could be a fighting creed again, as it had been in my parents’ time — to see a young man named John Lewis, no older than I was, daring to cross Selma Bridge and being beaten bloody by the troopers, an image so resonant in memory that when, sixty years later, on Capitol Hill, I was introduced to Lewis, all I could do was clasp his hands and thank him, inarticulately, for the lesson in courage that he had taught us all. It was impossible for someone of my generation to hate an America that produced such a man. 

    When we entered college, we marched against the war in Vietnam, and borrowed the entire repertoire of black civil disobedience, not to denounce America, as our leftist friends wanted us to do, but to redeem it. Looking back now, I see that it was precisely then that a pro-American liberal like me made the emotional commitment that decades later led to a mistake that haunts me to this day: support for the war in Iraq. We had opposed the war in Vietnam to call America back to its better angels, and fifty years later the same instinctive belief in America, despite Bush, Cheney, Rumsfeld and all the avatars of doom I might have noticed, led me to support an operation that has become synonymous with imperial folly. 

    When I entered graduate school in history at Harvard in 1969, I continued marching against the Vietnam war, often with people far to the left of me, but my chief political commitment of the time was prison visiting at the Massachusetts Correctional Institution, Norfolk. My doctoral thesis was on the punitive side of Enlightenment liberalism, how intellectuals and philosophers such as Jeremy Bentham, Benjamin Rush, and others sought to replace the arbitrary violence of ancien régime punishment with the new carceral technology of the penitentiary. By day, I read historical documents in Langdell Law Library, while by night I sat in a poorly lit room, sixty miles south of Boston, behind bars with a dozen young black men doing time for murder, rape, and a host of other crimes.

    For four years, I came out every Tuesday night and sat across the table listening to them arguing, joking, and just being themselves with profane exuberance, often at my expense since I was the only white man in the room. As I got to know some of them, I helped them to get parole, or to get jobs, only to see them skip town or skip parole as soon as they tasted freedom. Some of what I learned was shocking, such as discovering that the handsome and articulate young man across the table was doing life for having thrown a skillet of boiling fat all over 
his girlfriend, blinding and scarring her forever. I had no conception of where such life-devastating rage could come from, or how a man could ever repent or repair such lethal harm. My years at Norfolk were the first moment in my life when I could see, plain as day, that a liberal upbringing was insufficient for me to understand the world I had entered.

    In 1973, in the wake of the Attica prison riots in upstate New York that claimed forty-three lives, a similar riot devastated Walpole, the maximum-security prison next to Norfolk, and I volunteered to go in and mediate the stand-off between police and inmates that followed the uprising. When I did get inside and toured the cell blocks, the anger that had consumed the place seemed elemental in its force. In the weeks that followed, I played out a classic liberal role — mediating between the prisoners barricaded inside the ruined prison and the state police, waiting outside, tear gas and truncheons and shotguns at the ready, to retake the institution. I did a night shift on the segregation ward, reserved for prisoners too dangerous to be in general circulation, and I remember sitting on a chair in near darkness at one end of the cell block while black men on lockdown poked mirrors out between the bars of their cell doors to keep me under observation. I was twenty-five years old.

    As a young graduate student in the dark and turbulent 1970s, the education that changed me most was not at Harvard but at Norfolk. It led me out of innocence. Here was rage beyond understanding, directed obliquely at me because of my race. I had never experienced the impersonality of racial hatred, its fixation on your skin, its indifference to who you actually are. After the prison riot of 1973, I retreated back to academic work and wrote my dissertation on the origins of the penitentiary in more privileged precincts, in the stacks and cushioned reading rooms of the Harvard library system. 

    I earned my living teaching the first generation of young students to benefit from Harvard’s venture into affirmative action. Now, fifty years later, affirmative action and race-based admissions have been outlawed by the Supreme Court, to loud expressions of liberal despair, and so it is worth recalling that the first students to benefit from affirmative action were often miserable. I remember a young female student from South Carolina, the first in her family to attend college, saying between choked-back tears that liberal good intentions had gifted her a place in an elite institution, but not the belief that she had a right to be there. 

    Without realizing it, my generation of young white liberals was witnessing the problematic unfolding of a multi-dimensional and all-encompassing revolution. Affirmative action for black students was followed by the feminist upsurge. In the space of a generation, women went from being a minority in university classrooms to the majority. The girls we had dated in high school, with their gardenia corsages and “good girl” proprieties, now became the young women discovering their sexuality and challenging our own. Outside the university, in the wider society, a little-noticed change in immigration law opened the door to Asian, Caribbean, and African immigrants who had been barred since the 1920s. In the same period, liberal democracies began decriminalizing homosexuality. As Pierre Trudeau said in 1967, “the state has no place in the bedrooms of the nation.”

    The Canada I grew up in had been white and aggressively heterosexual. By 1980, I was living in a multiracial and sexually pluralistic society, teeming with new citizens from every corner of the globe. The contrast is captured in a comparison of my University of Toronto graduation class photo of 1969 — mostly male, at least professedly straight, all white — and the graduation photo of the same age group, at the same college, in 2024 — majority female and every color of the rainbow, turbans, hijabs, and skullcaps all expressive of a new diversity, which we liberals quickly turned into a religion of its own.

    This still-unfolding multi-dimensional revolution turned out to be the cardinal liberal achievement of my era, but it enormously complicated the liberal task of finding the middle way between the Scylla and Charybdis of extremisms. We were naïve about the nature of this problem, preferring to believe that all reasonable human beings would embrace a revolution of inclusion, when the reality was that our generation had upended the entire social order, and even our own place in it. Diversity — of gender, sexual orientation, race, religion, and class — was a virtue in comparison to the dire cantonment of peoples in silos of exclusion, but liberals turned diversity into an ideology. Once an ideology, it quickly became a coercive program of invigilation of speech and behavior in the name of dignity and respect. 

    Credentialed whites of my generation welcomed the revolution because we could invite new recruits of color into our ranks without ever feeling that our own elite status was being challenged. We didn’t seem to notice that non-elite whites were threatened, even betrayed, by the new multiracial order. Faced with what we thought was white racism and sexism, when it was mostly fear, we began promulgating codes of speech and conduct to impose diversity as a new cultural norm. New bureaucracies in universities, corporate headquarters, 
and government offices enforced diversity at the price of freedom, the freedom to defend unpopular loyalties, to freely dislike others, to be funny at other people’s expense, to be critical of the pieties of others but especially our own. A liberalism whose defining value should have been liberty invented a diversity and inclusion industry, whose guiding principle may have been justice, but whose means of enforcement included coercion, public disgrace, and exclusion. 

    Worst of all, we censored ourselves, willingly turning off our bullshit detectors, and stilling the inner doubts that might have made us confront our mistakes. We abandoned the truism that arguments are true or false, irrespective of the race or the origins of the person who makes them. We began promoting arguments as true based on the gender, race, class, origins, or backstory (oppression, discrimination, history of family violence) of the person uttering them. The value that we placed on diversity and inclusion led us by stages to jettison a care for truth itself. We ended up compromising the very epistemological privilege that had provided us with such unending self-satisfaction. 

    In failing to pay heed to the fears of displacement that the liberal revolution created, we ended up creating a vital political opening for every strand of extreme opinion queuing up to speak on behalf of everyone whom liberals had stopped listening to. By the 2020s most liberals were walking back, at first nervously, and then with increasing speed, from our own self-righteous politics of virtue. First we made everyone else sick of our virtue-signaling and then we became sick of 
it ourselves.

    The irony was that the liberal revolution destabilized liberals as much as it upset those who were resisting it outright. For it was the liberal revolution of inclusion that fragmented the centrist consensus that had made the liberal revolution possible in the first place. Once each group — black, female, gay, and trans — achieved emancipation, many of them began to identify with their own group to the exclusion of wider civic-sized political aggregations of interest. The old political parties — Liberal in Canada, Democratic in the United States, social democratic in Europe — that had presided over the liberal revolution now saw their white working-class base heading for the exits, and their multicultural support splintering into autonomous groups, each beginning to make a strange new epistemological claim: you can only understand me if you are like me. Only black people can understand the black experience of racism and police violence. Only women can understand the tyranny of patriarchy and the fear of male sexual violence. Only gays can understand what same-sex love truly means.

    The old liberal epistemology at least rested on egalitarian and universal premises. We believed that everyone was capable of entering to some degree into the mental worlds and lived experiences of others, because all of us, regardless of race, creed, ethnicity, or sexual orientation, were rational human creatures. This rationalist universalism disintegrated in the 1980s and 1990s, attacked by a new generation of “progressive” scholars as masculinist, colonialist, racist, and fundamentally condescending. This assault was supposed to awaken us to “intersectionality” — the interaction of disadvantages — but instead of drawing hurt constituencies together it fragmented them into highly sectarian and identity-based political groupings that foreclosed on alliances, shared understandings, and common political projects across race, class, and gender. So now liberals denounce the prison house of identity politics, without realizing the degree to which this new self-defeating politics is a consequence of the very revolution that we helped to foment. 

    Needless to say, at the time I understood little or nothing of this, but these were some of the factors — the complacent politics of virtue, the blindness to the new inequality, the conceit that ours was the only rational politics — that began to erode the electoral base that had sustained the center ground of Western liberal politics. The convictions of my youth had survived intact from 1968, sheltering me from any mind-changing encounter with the world that had metamorphosed around me after the end of the Cold War. When, in 2005, I left behind a professorship at Harvard and took the plunge into Liberal party politics in Canada, it didn’t feel like a crazy departure from security, tenure, and privilege, 
but instead as if my feet had been traveling homeward bound all along. 

    I had no idea of what I was letting myself in for. I had no understanding of my own inexperience, and no grasp of how weakened and debilitated the liberalism of my party had become. We were a party that kept winning elections and governing the country, but with a vote-share slowly declining in the small towns and rural districts and piling up in the downtown urban centers where the professional and commercial elites liked to live. When I led the party into an election in 2011, truth be told, the Liberal platform had not much to offer a people still shocked by the financial crash three years earlier. Our message, though we never said so directly, was “trust us, we are the adults in the room.” We even called ourselves “the natural party of government.” On election night in 2011, our party suffered the worst defeat in our history, and I lost my seat in parliament — a verdict that all these years later reads to me like a judgment on me, but also on a liberalism that had allowed itself to be captured by its own self-regard.

    Defeat is a great teacher. It taught me that liberalism endures because it is a way of being and a set of values that tell us who we should try to be. This is what gives liberalism its hidden resilience, its capacity to rebuild after political reversals. If we want to rebuild, we will need to recover what the word used to mean. It once was a synonym for generosity. In the old days, a liberal gentleman was a generous man. We will want to discard these male, elitist associations by marrying generosity to the egalitarian individualism at the core of the liberal creed. The creed tells us that we are no better than anybody else but also no worse. What liberals value should be within everyone’s reach. A liberal person wants to be generous, open, alive to new possibilities, willing to learn from anyone. We want to share whatever wealth and fortune we have, to welcome strangers to our table, to stand up for people when they are in trouble. We know we have to change our minds when someone’s idea is better than ours. We have faith that history rewards those willing to fight for what they believe. Now, none of us is ever as generous as we would like to be and no liberal has a monopoly on generosity, but the largeness of spirit it calls us to does define our horizon of hope. 

    Such values are embattled today, and they need defending because our societies so desperately need largeness of spirit, together with a revived liberal ideal of solidarity. We need to be filling out this vision and bringing our citizens to believe in it. Defeat has taught me that we cannot afford to jettison our values when the tides of politics turn against us. Liberalism’s incorrigible vitality comes from the fact that it tells us who we most deeply want to be, provided that we are willing to fight for it and never surrender to the passing fashions of despair.

    Egalitarian Idealists and Authoritarian Zealots: A Cautionary Memoir

    In 1952, a year after I was born and a decade and a half before I became an active participant on the American left, Daniel Bell published a book called Marxian Socialism in the United States, the first serious scholarly examination of the subject. He considered, among other questions, why the traditional Marxist parties in the United States had by then descended into abject political isolation. The Socialist Party of America (SP), which four decades earlier had enrolled over a hundred thousand members and attracted nearly a million voters in the presidential election of 1912, was reduced to fewer than a thousand aging stalwarts by 1952; its youth affiliate, the Young People’s Socialist League (YPSL), had fewer than a hundred. Further to the left (or to the east, given its affinity for the Soviet Union), the Communist Party, USA (CP) still counted roughly twenty thousand members, but many of its leaders were imprisoned or about to be imprisoned for violation of the Smith Act, a federal law making it a crime to conspire to teach or advocate the desirability of overthrowing the government. According to public opinion polls in the early 1950s, a clear majority of Americans believed that the Party should be outlawed entirely.

    Bell, who had joined YPSL at the age of thirteen in the early 1930s, and who in the first election in which he was eligible to do so voted for Socialist Party presidential candidate Norman Thomas, now concluded that the American Marxists of all persuasions had been destined for failure from the beginning, their fate “rooted in [an] inability to resolve a basic dilemma of ethics and politics”:

    The socialist movement . . . in its rejection of the capitalist order as a whole, could not relate itself to the specific problems of social action in the here-and-now, give-and-take political world. It was trapped by the unhappy problem of living “in but not of the world,” so it could only act, and then inadequately, as the moral, but not political man in immoral society. . . . A religious movement can split its allegiances and live in but not of the world . . . ; a political movement can not.

    Bell continued in later years to describe himself as a socialist in economics if not in political allegiances, and his verdict was delivered in a regretful tone, at least in regard to the Socialist Party’s fate. In any case, from the perspective of the early 1950s, it would have been hard to disagree with his judgment that the Marxist left’s moment as a meaningful player in American political life had come and gone. 

    Fast forward a dozen years to 1964, when I turned thirteen, the age that Daniel Bell was when he joined the socialist movement more than three decades earlier. This was also a coming-of-age moment for me, as I first began paying attention to what was happening in the broader world outside of family, neighborhood, and school. In that year’s presidential election, I was a fierce (if still eight-years-under-the-voting-age) partisan of the candidacy of the Democratic incumbent Lyndon Baines Johnson, and took great satisfaction in his trouncing of his arch-conservative rival Barry Goldwater in November. But something else caught my attention in 1964 that was destined to have a lasting impact on my political trajectory: a sub-drama within the Democratic camp playing out in Mississippi. There, from June through August, about a thousand young civil rights volunteers from around the country were taking part in the Freedom Summer Project directed by a remarkable twenty-nine-year-old activist named Bob Moses, a leader of the Student Non-Violent Coordinating Committee (SNCC). At the risk of their own lives (three would be kidnapped and murdered by the Klan at the very start of the project), the volunteers conducted a voter registration drive among the disenfranchised black population, and helped organize a new political formation called the Mississippi Freedom Democratic Party (MFDP) as an alternative to the regular and staunchly white-supremacist Democratic Party in the state. 

    At summer’s end, the MFDP sent an integrated delegation of Mississippi residents to the Democratic National Convention in Atlantic City to challenge the seating of the all-white regular delegates. Although President Johnson had overseen passage of the Civil Rights Act earlier in the year, fearful of losing the state and perhaps the entire South to Goldwater, he dispatched a crew of established liberal and civil rights leaders to persuade the insurgent Mississippians to give up their challenge. In exchange, they were promised that the MFDP would be awarded two at-large delegates to the convention, along with the assurance that by 1968 all Democratic state delegations would then and thereafter be required to be open to black as well as white delegates. From the perspective of realpolitik, or what Bell would describe as the necessity of acting as “political man in an immoral society,” it was not an unreasonable offer. Dr. Martin Luther King, Jr. was among those initially urging the MFDP delegates to accept it. Yet the MFDP delegates refused to do so. As Bob Moses declared, after King had spoken in favor of compromise, “This reasoning you’ve been giving us here is inaccurate. We’re not here to bring politics into our morality, but to bring morality into our politics.”

    To bring morality into our politics: this was something new in mid-twentieth-century American major party politics, where compromise and consensus, defined as accepting the recognition that you were not going to get everything you initially asked for, were considered the fundamental rules of the game. Some things, the MFDP delegates decided, were not up for compromise, such as their claim to full and equal rights as citizens in the United States. Theirs was an example of the “in but not of the world” stance that, just a dozen years earlier, Bell had described as leading to inevitable political irrelevance. And yet, in this instance, it worked. Less than a year later, building on the uncompromising groundwork laid by Freedom Summer, the MFDP credentials challenge at Atlantic City, and the Selma voting rights campaign of the following spring, Congress enacted a Voting Rights Act signed into law by President Johnson that would transform southern politics. And in 1968 some MFDP alumni would be seated at that year’s Chicago Democratic convention as members of the official Mississippi state delegation.

    Of course, unlike the radicals Bell was analyzing back in 1952, Bob Moses was not a Marxist. Nor, apart from some Red Diaper babies (that is, children of Communists), were most of the Freedom Summer volunteers in 1964. Their political outlook, including their insistence on a morally driven politics that refused to compromise basic principles, came out of a different and older tradition in American radicalism. It stretched back to Anne Hutchinson in the Massachusetts Bay Colony in the seventeenth century insisting on her right to question the authority of the colony’s Puritan ministers based on what she described as direct revelations from God. (For this she was branded an “antinomian” by both the political and religious establishments.) 

    A similar commitment to what in the nineteenth century was called obedience to a “Higher Law” was central to what was seen at the time as the wildly irresponsible demand for the “immediate” end of slavery by the abolitionist movement, as it was shaped by leaders such as William Lloyd Garrison and Frederick Douglass. Later still, in the nineteenth and early twentieth centuries, this same uncompromising, morality-infused radicalism could be found in the women’s suffrage movement, represented by such figures as Elizabeth Cady Stanton and Alice Paul. This American radical tradition also had some influence within the Marxist left, most notably on Eugene Debs, who read his Marx and Engels and believed in class struggle and international working-class solidarity, but whose aspirations for the American cooperative commonwealth were based on the moral principles about democratic citizenship and individual conscience that he had imbibed as a young man growing up in Terre Haute. “Despite his Socialism,” Debs’ biographer Nick Salvatore argued, “a fierce individualism fueled his core vision,” evident in his most famous speech. In 1918, upon being convicted in federal court of speaking out against American entry into the First World War, he declared:

    Your Honor, years ago I recognized my kinship with all living beings, and I made up my mind that I was not one bit better than the meanest on earth . . . While there is a lower class, I am in it, and while there is a criminal element I am of it, and while there is a soul in prison, I am not free.

    Time and again in the history of the United States, outsiders to the political mainstream, raising what were regarded as unreasonable demands and speaking in a moral vernacular that owed much to that antinomian strain of American Protestantism, brought to the fore issues such as slavery, women’s rights, and opposition to war that found few if any supporters in more conventional political circles. The role of this left-wing prophetic minority, which Daniel Bell did not sufficiently appreciate, is part of a vital political tradition that has over the centuries enhanced the freedoms and opportunities of millions of Americans. It is a tradition that I consider worth honoring and emulating — so long as one recognizes that it has also led on occasion to unforeseen and less inspiring outcomes.

    17-year-old Bonnie Raitt, and beside her the then 16-year-old author, American Friends Service Committee project volunteers, Indianapolis, July 1967.

    When I turned sixteen in the spring of 1967, I was a high school junior living in a small town in rural Connecticut, desperately anticipating graduation the following year, and with it the opportunity to move away to a big city to attend college. With a Quaker mother and a Jewish father, coming of age in a community where what passed for ethnic diversity ran the gamut from Yankee Protestant to Irish Catholic, my early desire to fit in with my neighbors and classmates gave way sometime post-puberty to an even stronger desire to escape the whole lot of them, and in doing so lay claim to a new sense of independence. As I identified entirely with the heroic selflessness of students taking part in the Freedom Summer project (some of whom also went on in the fall of 1964 to play leading roles in the Berkeley Free Speech Movement), my otherwise conventional adolescent rebellion took on an increasingly political edge. Of course, those history-making freedom struggles were happening so far away from my tedious isolation in Coventry, Connecticut that they may as well have been set on Middle-earth (it was about this time I also was immersing myself in the paperback edition of The Lord of the Rings).

    But in 1967, with the advent of anti-Vietnam war protests in cities relatively nearby, my life changed in ways that felt most welcome, adventurous, and authentic. In mid-April I took the train to New York City to join in the “Spring Mobilization to End the War in Vietnam,” a demonstration called by a recently assembled coalition of radical, pacifist, and student groups, which proved to be the most massive anti-war gathering to that point in American history. Back home, nobody in my high school or community seemed to be against the bloody and unjust and unwinnable war in Vietnam but me. (My parents had their doubts, but prudently kept them to themselves.) But when I reached Sheep Meadow in Central Park on the morning of April 15, I no longer felt quite so lonely. With a quarter of a million other protesters, I marched downtown to a rally at the United Nations building. There, surrounded by my newfound fellowship (no elves, dwarves, or hobbits involved, but lots of students and hippies, as well as veterans, trade unionists, and other grown-ups), I listened intently to speeches by Dr. Benjamin Spock, Stokely Carmichael, and, most memorably, the Reverend Martin Luther King, Jr. He was not calling for compromise this time. “Let no one claim there is a consensus for this war,” he said as he began his remarks. “No flag-waving, no smug satisfaction with territorial conquest, no denunciation of the enemy can obscure the truth that many millions of Americans repudiate this war and refuse to take moral responsibility for it.” Maybe not King’s greatest speech and certainly not his best-remembered one, but it spoke to me then and continues to do so today, as part of the great American radical tradition that we now call “speaking truth to power,” as exemplified 
by Garrison, Douglass, Stanton, Paul, Debs, Moses, and 
King himself.

    I spent the summer that followed in a down-at-the-heels neighborhood in Indianapolis, Indiana, one of a group of teenage volunteers enrolled in an American Friends Service Committee (AFSC) project. An aspiring seventeen-year-old folksinger named Bonnie Raitt, bound for Radcliffe, and then for greater things, was among our number. Working with a local settlement house, we did various good works in the community, while learning about poverty, race, and the Quaker vision of conscience-driven social change. (Our texts ranged from Michael Harrington’s The Other America to Malcolm X’s Autobiography to Richard Gregg’s The Power of Non-Violence.) Bonnie and I and our compatriots organized a silent vigil in downtown Indianapolis on Hiroshima Day to protest the war, to the discomfort of the adult project leaders who were worried about hostile local reaction. (If they had preferred we stayed home that day, they should not have assigned us Gregg’s book.) By summer’s end, under the influence of Dr. King and the Quakers, I considered myself a committed pacifist, and what’s more — with the success of our little vigil — an organizer.

    That fall, a senior in high school, still short of my seventeenth birthday, I traveled from New York City with my Uncle Abe and Aunt Joan to Washington, D.C., to take part in the March on the Pentagon on October 21, organized by the same coalition of anti-war groups that had staged the Spring Mobilization. Although not a “Red Diaper Baby,” I did grow up knowing that people to whom I was related, and whom I respected, had indeed been members of the Communist Party back in the years of the Great Depression — and some, like my father’s brother Abe, remained so. In fact, he had been one of the lawyers for the eleven Communist leaders convicted in 1949 for violating the Smith Act, and like the defendants he also went to jail, in his case for contempt of court. Despite his continuing allegiance to the Communist cause, he scrupulously refrained from trying to recruit me to his own corner of the organized Left (which, in any case, seemed to me too old and stodgy for serious consideration), but was happy to encourage my growing radical inclinations, wherever they led me.

    I had never been to Washington before, but sightseeing was not on our agenda — it would be years before I had the occasion to travel to the nation’s capital without attending a protest of some kind or another. We stayed in a downtown hotel, and on the morning of the march we joined the mass and legal part of the day’s protest, with about a hundred thousand people attending the rally at the Lincoln Memorial. Afterwards we marched across the Memorial Bridge to the Pentagon, where Secretary of Defense Robert McNamara (who by then had his own private doubts about the war he was so instrumental in launching and escalating) was nervously looking out the window of his upper-story office at the gathering crowd. When we got there, I impulsively decided to part company with my uncle and aunt and ran with a group of other young participants past surprised lines of military police to the very steps of the building. Totally unplanned, this was my first venture into militant but non-violent civil disobedience, which I imagine was the case for most of the rest of the few thousand protesters who now found themselves hemmed in by soldiers and federal marshals. There, at the steps of the Pentagon, we listened to impassioned speeches against the war, chanted anti-war slogans — and did not, contrary to subsequent accusations, spit on anybody. Mostly, astonished at our proximity to the planning center and symbol of an evil and unjust war, we wondered what would happen next. In my case, the answer came shortly before dusk, when a burly federal marshal pulled my feet out from under me and dragged me roughly down the adjacent embankment before depositing me on the pavement of the Pentagon’s north parking lot. I picked myself up and limped back across the bridge connecting Arlington to Washington. All in all, I thought it had been the best day of my life. 

    Among the speakers I remember listening to that afternoon at the Pentagon was a young Swarthmore College graduate named Cathy Wilkerson, then working as a regional organizer for Students for a Democratic Society (SDS) in Washington, D.C. Many years later I read Flying Close to the Sun, her memoir of the years she spent as a radical activist in the New Left. She began the decade of the 1960s as a Quaker and an admirer of Gandhi, and she ended it as a fugitive in the Weather Underground. When I encountered her in 1967, she was half-way through that unfortunate transition, but some part of the earlier Quaker influences apparently lingered. She urged us that afternoon not to take unnecessary risks. “While I had been excited by Debray and Fanon,” she recalled, “here in the heat of confrontation it was the model of the non-violent confrontations of the civil rights movement that seemed most powerful. To the extent we had any power at the Pentagon, which didn’t feel like much, it was the power of a moral witness.” 

    Wilkerson’s life story is, among other things, a cautionary case study of how even the best instincts can sometimes lead to terrible outcomes. The emphasis in the early civil rights movement on “putting your body on the line,” as in the lunch counter sit-ins of 1960 and the Freedom Rides of 1961, took on a darker meaning for some activists by the late 1960s and early 1970s. In Wilkerson’s case, it led to the “townhouse explosion” of 1970, which left three of her bomb-building comrades dead in the rubble of her father’s expensive Greek Revival home in Greenwich Village, while she stumbled out of the wreckage and took flight into the underground.

    Wilkerson’s memoir provides an illuminating glimpse into the fatal trajectory of the largest radical student movement in American history. I find her particularly astute in describing the internal dynamics of SDS in 1968–1969, the year I joined as a freshman at Reed College in Portland, Oregon (having finally realized my ambition of escaping small town life). I was part of a flood of new recruits to SDS, pushing its numbers that academic year to the vicinity of a hundred thousand strong. “This infusion of young people,” Wilkerson wrote in her memoir, “drawn in more by culture than politics, was becoming the norm” in SDS: “They weren’t looking for a complicated discussion about how to bring about change, but for validation, for community, and for a way to express their anger about the war.”

    That seems exactly right in my memory of the moment: validation, community, anger, all understandable reasons for joining a radical movement. Embracing left-wing causes, historically, has not been restricted to “a complicated discussion” about either abstract ideals or political strategy. It also often includes, especially for young people who provide the majority of converts to such causes, the forging of a new personal identity, as Wilkerson — and my own example — suggest. What matters is how the motivation and spirit that go into shaping that self-redefinition are then channeled.

    A similar insight from an earlier generation comes from another memoir, Starting Out in the Thirties by Alfred Kazin, which I read while writing a senior thesis at Reed on the history of Communist-organized literary groups in the Depression decade. “History was going our way,” Kazin wrote of his own youthful attraction to the radical Left (and briefly Communism) in those years:

    Everything in the outside world seemed to be moving towards some final decision, for by now the Spanish Civil War had begun, and every day felt choked with struggle. It was as if the planet had locked in combat . . . There seemed to be no division between my effort at personal liberation, and the apparent effort of humanity to deliver itself. Reading Silone and Malraux, discovering the Beethoven string quartets and having love affairs were part of the great pattern in Spain, in the Valley of the Ebro, in the Salinas Valley in California . . . Wherever I went now I felt the moral contagion of a single idea.

    Substitute Vietnam for Spain, and, perhaps, the White Album for the Beethoven string quartets (we weren’t quite as attuned to high culture as Kazin’s generation of young Jewish intellectuals thirty years earlier), and that pretty much sums up my own sense of historical destiny in my year in SDS. I don’t know if “contagion” is quite the right word, but even though I and most of the Reed SDS chapter avoided the temptation to follow Cathy Wilkerson into the violent and radical Weather Underground, student revolutionary politics in 1968–1969 did, at times, have a feverish quality. 

    Thinking back on those years, the problem was not so much that I had a “single idea” governing my political choices (and certainly not “Communism” as it would have been recognized by a veteran of the 1930s); rather, I had too many contradictory and muddled ideas, which in my youth and inexperience I found impossible to sort out. Like Wilkerson that day at the Pentagon a year earlier, I was still partially under the sway of the Quaker ideal of “moral witness” as espoused in Gregg’s The Power of Non-Violence. But I was also attracted to the in-your-face confrontational politics being pushed in New Left Notes, SDS’s weekly newsletter. The ideal of non-violent “moral witness” was giving way to the vague but not necessarily non-violent ideal of “resistance.” I remember sometime that fall reading The Port Huron Statement, the founding manifesto of SDS, written (largely) by Tom Hayden in 1962, calling for the creation of a left with genuine intellectual skills, and being entirely persuaded. But also, that same fall in 1968, I read his essay in the radical monthly magazine Ramparts called “Two, Three, Many Columbias,” celebrating the strike at Columbia University the previous April, and essentially dismissing higher education in the United States as a wholly owned subsidiary of the war machine, which, he implied, did not deserve to survive. I found that equally persuasive. In sum, I was confused, an intellectual mess. I found wisdom in the works of Lenin and in the works of Lennon (and McCartney, Harrison, and Starr). I somehow believed in all these things simultaneously, and that they all went nicely together. I failed, or refused, to notice any contradictions.

    The author, age 20, at an anti-war protest in Portland, Oregon, 1971 (photo by Michael Kazin).

    As Wilkerson noted, my cohort of young radicals had few fixed political ideas beyond opposing the Vietnam War. And that sentiment, completely justifiable in itself, could lead in any number of directions, some entirely sensible and decent, including marches and vigils, draft resistance and other forms of peaceful civil disobedience — or, in Wilkerson’s case, to her embrace of lethal terrorism. “People make their own history,” as Marx famously observed, “but not under conditions of their own choosing.” I hardly mean to excuse poor choices made by Wilkerson, or myself, or others at the time — but it does suggest why so many individual actors tended to make worse choices at the end of the 1960s than, at an equivalent age, they might have made eight or nine years earlier. 

    SDS’s evolution in the course of the 1960s resembled a streetcar that, depending on the year you climbed aboard, carried you where it would with no fixed route. Had I been old enough to climb aboard in SDS’s early days, from 1962 to 1965, our destination, at least in the short run, would have been what the Port Huron Statement had described as a “participatory democracy,” our chief activity supporting the southern civil rights movement. Climbing aboard in 1968, however, in the midst of an ever-escalating and ever more destructive war in Vietnam, plus domestic warfare in the streets of Newark, Detroit, and Washington, D.C., SDS’s destination was transformed into the revolutionary transformation of, well, everything — the details a little vague, to be achieved through means that were not clear, but in my imagination bore some resemblance to Paris, May ’68, if only American streets were lined with cobblestones. I could have gone either way, and looking back I much prefer the former to the latter destination. But instead, I climbed aboard the streetcar in what proved SDS’s final calamitous year.

    I did learn some valuable lessons in the late 1960s and early 1970s that, when the dust had settled, informed my subsequent career as a historian exploring the fate of twentieth-century American radicalism. Between 1982 and 2000 I published four books on the history of the American left, and then a fifth, Reds: The Tragedy of American Communism, this past year. If I were to summarize the thesis of that latest effort, it boils down to a single sentence in the book’s preface, arguing that the Communist movement “attracted egalitarian idealists, and bred authoritarian zealots.” Although I was never a Communist as such — I was too much immersed in the counterculture of the era to go that route — I have to confess there is at least a little submerged autobiography informing that thesis.

    The lessons I learned in the late 1960s also informed my own subsequent political choices. In 1982, the year my first book came out, I was a founding member of a new left-wing group, Democratic Socialists of America (DSA). DSA’s most influential figure was Michael Harrington, whom I had first encountered as an authority on poverty back in that formative summer of 1967. DSA, in Harrington’s vision, would strive to be “the left wing of the possible.” By that he meant that it should devote itself to building the broadest possible coalition of progressive groups within and alongside the Democratic Party, including the labor, feminist, environmental, and civil rights movements. Along with its socialist aspirations, DSA was thus firmly committed to “relate itself to the specific problems of social action in the here-and-now, give-and-take political world,” to borrow Bell’s formula (although Bell, by then describing his own politics as neo-conservative, had no interest in Harrington’s project). DSAers also laid equal stress on both words in the organization’s name, combining an absolute commitment to democracy as well as socialism, in the United States and internationally. Given my experience with the crack-up of the New Left a decade earlier, which led to SDS’s splintering in 1969 into a host of competing would-be revolutionary vanguards, I found DSA’s pragmatism and its democratic principles reassuring. No more charging down blind adventurous alleys for me, literally or figuratively, thank you very much.

    The author, age 31 in 1982, the year he joined Democratic Socialists of America. (Photo by David Weintraub.)

    Over the next three decades, DSA’s membership hovered between five and ten thousand, making it the largest socialist group on the American left, but one that was unable to attract sizeable numbers of new (and younger) recruits in the years between the Reagan and the George W. Bush administrations, and so played at best a minor role in the nation’s politics. Harrington died in 1989 of cancer at the age of sixty-one, and no one of comparable public stature (and intellectual skill) replaced him as a widely recognized symbol of what it meant to be a democratic socialist.

    And then, unexpectedly as these things tend to happen, new opportunities arose. In 2016, Senator Bernie Sanders, a self-described democratic socialist (although not a DSA member), ran a spirited campaign for the Democratic presidential nomination, and attracted many younger voters to his cause. That was followed two years later by Alexandria Ocasio-Cortez’s election to Congress — a young woman of color with extraordinary political skills, who proudly proclaimed her own DSA affiliation. People, especially young people, began to Google this unfamiliar term, “democratic socialism,” and up popped the DSA website. By the early 2020s, forty-four percent of Americans between the ages of eighteen and twenty-nine reported positive views of socialism, according to polling by the Pew Research Center. (Unfortunately Pew failed to ask what they meant by socialism. Probably not Soviet-era Communism.) As a result, and in some ways comparable to the boost that SDS experienced in the mid-1960s driven by the escalation of the war in Vietnam, DSA underwent an enormous expansion in membership, peaking at over ninety thousand by 2020. 

    Most of those new members were in their twenties, with a lot of enthusiasm, energy, and raw political talent, organizing chapters in cities and states across the country that had not seen an active socialist presence since the Debsian era. Soon scores of DSA members, running as Democrats, had won election to state legislatures, city councils, and other offices. By January 2021 there were four DSA members sitting in Congress, more socialists than had ever served in that body at the same time. Harrington’s “left-wing of the possible” was really happening, at long last. Or so it seemed. 

    But in keeping with the history of the modern American left, DSA’s future trajectory and prospects would prove neither uncomplicated nor untroubled. Two factors began to tug the organization in a very different direction politically from its earlier identity. The first was that, like those of us who came of age in the 1960s, the younger radicals now pouring into the organization, who soon vastly outnumbered DSA veterans, were impatient with their elders. In the early 1960s Michael Harrington had served for a few years as a mentor and role model to Tom Hayden and other SDSers, but by the latter part of the decade was either regarded as a sell-out by my generation of SDSers or forgotten entirely. “Perhaps it is inevitable,” he observed ruefully in 1967, “that young people come to the radical movement with the fervor of catechumens,” that is, converts to the Faith preparing for confirmation in the early Church, “and always believe that the veterans of past struggles are tired and going soft.” So it proved in DSA. By 2019 or so, “Harringtonite” was becoming a term of abuse within the organization (although no one I knew in DSA ever labeled themselves as such — cults of personality were never a particular feature of democratic socialism). 

    DSA’s years of maximum growth were also, significantly, years of despair rather than hope on the left, coinciding with Donald Trump’s election in 2016 and the subsequent four years of his dystopian presidency. Those four years climaxed with the killing of George Floyd in 2020. To many of the young people joining the Black Lives Matter demonstrations that ensued, their nation seemed stained with the sin of absolute and irredeemable racism, not unlike the way it seemed to abolitionists in the 1850s and New Leftists in the later 1960s. 

    It might be useful here to think back on the events that took place in Atlantic City in August 1964, when the Mississippi Freedom Democratic Party rejected the compromise offered them by prominent liberals, and chose instead to be ruled entirely by their principles. They were right to do so. But that moment had other, more protracted, and unintended consequences, especially for young radicals, as Bob Moses’ biographer Eric Burner argued in his perceptive account, And Gently He Shall Lead Them: Robert Parris Moses and Civil Rights in Mississippi: “In giving compromise a bad name the maneuvering of the liberals at Atlantic City . . . contributed to a mentality, increasingly aggressive in the years that followed, that purity is to be measured by how many people the pure refuse to cooperate with . . . It was purity of a sort that has since come to define the later sixties, at once fortifying and destructive.” 

    At once fortifying and destructive. What is fortifying for the already persuaded does not necessarily build bridges to those who still need persuading — quite the opposite. That tendency for “purity . . . to be measured by how many people the pure refuse to cooperate with” can be considered historically the Achilles’ heel of the antinomian tradition of American radicalism. All too often in recent decades, those on the American Left — and this is especially true on campuses, among faculty as well as students — seem to be engaged in a competition among themselves to demonstrate that they have individually arrived at a state of political grace, as measured by the ability to deploy ever more esoteric language and the embrace of ever more marginal niche issues. Failure to attract supporters outside the core group of believers can then be attributed to backsliding within the congregation, a morally satisfying but politically self-defeating habit. And seen in this light, Bell’s point about the political problems of “living in but not of the world” is not without insight. What worked in Atlantic City in 1964 decidedly did not work in the streets of Chicago in 1968, or in the Weather Underground in the years that followed. Or in too many left-wing circles today.

    The other factor complicating DSA’s future was that not all of the organization’s recruits were twenty-somethings new to the organized left. Starting in 2016, hundreds of veterans of left-wing groups that had nothing in common with democratic socialism joined DSA, with the intention of turning this now sizeable but somewhat inchoate group into something very much at odds with its founding vision. These included Trotskyists, Maoists, and others who traced their political inspiration and organizational lineage to the Leninist vision of (and here forgive the parade of venerable and stale clichés) creating a “party of a new type” of “professional revolutionaries” who would devote “the whole of their lives” to the cause. This is a vision of revolutionary change whose highest virtues are discipline and hierarchy, neither of which were much in evidence or much valued in DSA before this. 

    This reverence for authority and structure, for all its ideological variations, at its core differs very little from the model promoted by Joseph Stalin in an ideological primer that he composed in 1924 entitled The Foundations of Leninism, in which he asserted that “the working class without a revolutionary party is an army without a General Staff.” An essentially militarist mentality unites Trotskyist, Maoist, and other Leninist cults operating in the United States, which can be seen in their fondness for terms like “cadre” to describe their hard-core supporters. Accordingly, post-2016, various would-be General Staffs set to work to recruit that wave of younger members pouring into DSA, and in doing so convert the organization into the vanguard party of the American proletariat that they had never succeeded in creating when they were operating under their own banners of deepest red. DSA’s dwindling core of aging veterans who adhered to the traditional left-wing of the possible vision, and who did not think of or conduct themselves as a “cadre,” were not up to the ensuing challenge for control. Once again, as in so many factional wars on the left in the past, power became the real prize. In August 2023, at the organization’s biennial national convention, a coalition of self-described communist factions effectively gained control of DSA’s ruling sixteen-member National Political Committee (NPC).

    Here I will allow the new leaders of DSA’s NPC to introduce themselves. One of the dominant factions, the Red Star Caucus (with three members elected to the NPC, including one who was appointed co-chair of the national DSA), ran an article in its August 2024 on-line newsletter entitled “Communists Belong in DSA,” announcing that as “a Marxist-Leninist DSA caucus” it was calling “on all communists in the United States to join us” in the struggle to capture DSA and “move toward a revolutionary horizon.” Allied (as well as competing) with the Red Star Caucus is the Marxist Unity Group (MUG), which describes itself as “particularly inspired . . . by those that kept [the movement’s] revolutionary spirit alive in the face of political capitulation: Lenin and the Bolsheviks.” MUG’s vision of the transition to socialism in America is one in which “our class” (that is the working class, in whose name it presumes to speak — another trope of radical history) will take advantage of a future crisis of capitalism

    to topple the old order and convene a revolutionary Popular Assembly . . . Under the democratic leadership of a victorious socialist party [led by the Marxist Unity Group or its political heirs], the Popular Assembly will proceed to construct the socialist order. It will dismantle the slaveholder constitution and write the founding documents of the new republic. . . .  All parties that accept the laws of the new revolutionary order will be free to operate. 

    Tough luck for those Americans who will not be on board with discarding the old (and certainly flawed!) U.S. Constitution for that imposed by “the new revolutionary order.” Embracing this vision of an extra-constitutional and almost certainly violent road to power at home, the communist majority on the NPC scrupulously avoids any criticism of anti-democratic actors abroad, at least those that are anti-American. As the Red Star caucus explained in its “points of unity”: “We see no benefit in levying public criticism of states or movements that are opposed to US empire, as such critique in effect serves no purpose except to create consent for empire.”

    Thus no criticism can be heard in DSA’s ruling circle of Vladimir Putin’s Russia, Xi Jinping’s China, Kim Jong Un’s North Korea, Nicolás Maduro’s Venezuela — or Hamas. (It was DSA’s response to Hamas’ October 7, 2023 pogrom in Israel, uncritically celebrating it as an act of legitimate “resistance” to Zionist “settler colonialism,” that led to my own resignation from DSA two days later; I believe in Israel’s right to exist and defend itself, but condemn the humanitarian catastrophe that Israel’s subsequent military response has unleashed on Gaza). 

    Daniel Bell had experienced the likes of the Marxist Unity Group and the Red Star Caucus many decades before these latecomers to the perennial authoritarian strain of ultra-leftism came into existence. In Marxian Socialism in America, he described the equivalent kindergarten Leninist fantasies entertained by left-sectarian groups of his own era as consisting of “the illusions of settling the fate of history, the mimetic combat on the plains of destiny, and the vicarious sense of power in demolishing [other left-wing] opponents.” Since Bell’s day, however, the advent of social media has fostered and simplified the process through which the sectarian left can with a few keystrokes go about settling the fate of history on the plains of destiny. Consider the explanation for Kamala Harris’ decision to choose Tim Walz rather than Josh Shapiro as the vice-presidential candidate offered by DSA’s NPC in a statement posted to X on August 6, 2024:

    Harris choosing Walz as a running mate has shown the world that DSA and our allies on the left are a force that cannot be ignored. Through collective action, DSA and the US left more broadly have made it clear that change is needed. The Uncommitted movement, in which DSA members played crucial roles nationally and in multiple states, pressured the Democratic establishment into choosing a new candidate and backing down from a potential VP [Shapiro] with direct ties to the IDF and who would have ferociously supported the ongoing genocide in Palestine. 

    Most political activists engaged in real-world Democratic Party politics would find the claim that DSA somehow determined Harris’ choice of running mate an absurd example of childish boasting. Fox News, however, which has different priorities, really liked that tweet, amplifying DSA’s claims of exercising such powerful influence over Democratic campaign strategy on “Fox and Friends,” with a chyron on the screen reading “Tim Walz: A Radical, Far Left Ideologue,” and adding on its website, “Walz is not a member of the DSA, but has made favorable comments regarding socialism.” 

    As for the left sectarians’ vicarious sense of power in demolishing opponents within their own camp, consider how the Marxist Unity Group, in a posting to “X” on July 21, 2024, described the two individuals who more than anyone else were responsible for DSA’s growth between 2016 and 2020:

    AOC and Bernie Sanders have abandoned the working class to exploitation and the Palestinian people to genocide. Rather than lead the working class in the battle for democracy, they tried to tail Biden to win bandages for a disintegrating capitalism.

    Once again, in the time-honored radical tradition, the sectarian impulse is to eat their own. 

    While the Republic may well be facing its moment of maximum danger since the election of 1860 and its aftermath (see the events of January 6, 2021 and November 5, 2024), the danger does not come from this quarter. DSA’s new amateurish proprietors have already managed to reduce the organization’s membership from its peak of ninety thousand to seventy thousand. Or perhaps lower: it has been some time since they have offered any authoritative figures.

    That is the bad news, I mean for them. The good news, such as it is, is that the views of the NPC sectarian majority likely do not represent those of the actual majority of rank-and-file DSAers. Bernie Sanders and Alexandria Ocasio-Cortez, safe to say, are not seen as traitors to the cause by most democratic socialists. The communist caucuses that gained control of the NPC probably count no more than a few thousand members, if that, between them. Most of the remaining dues-paying DSAers are not active participants in any of the rival caucuses or in their local chapters.

    There still remain sane caucuses in DSA who plan on challenging the control of the ultras at the national convention in 2025, and I wish them well, but without much expectation that they will succeed. One thing left sectarians are good at is stacking meetings and manipulating votes. And even if the silent majority of DSAers, let’s call them the “democratic socialist caucus,” do regain power, the damage has been done. Thanks to the Red Star Caucus, and others who have succumbed to the Leninist temptation, DSA has become toxic in the eyes of many on the democratic left, for its juvenile ideological posturing and for lending ammunition to the Murdoch media empire. That legacy of would-be Bolsheviks romping across plains of destiny will prove hard to erase.

    David A. Bell, a historian at Princeton University and a contributor to these pages, and the son of Daniel Bell, penned a fine article for Dissent magazine a few years ago marking the centennial of his father’s birth. He noted that, despite his own repeated urgings, he could never persuade his father, who certainly did not suffer writer’s block, to write his memoirs. And he ends with a telling anecdote about his father’s relationship with my generation of young radicals in the late 1960s. His father, he tells us, “worried about the student movement, feared its wildness, looked askance at the hedonism associated with it, but still could not help sympathizing with its political radicalism.” The senior Bell was then teaching at Columbia University and was one of a number of faculty who tried to negotiate an agreement between the students occupying buildings during the Columbia strike of April 1968 and the administration. “But on the night of April 29,” the junior Bell recalled:

    negotiations broke down, and the police moved in with nightsticks and tear gas. Many of the students were badly beaten, and hundreds were arrested. I remember waking up early on the morning of the 30th — I was six years old at the time — and finding my father, fully dressed, on the couch. He had been up all night and he was weeping uncontrollably.

    For whom did Bell weep during that long and long-ago night? The students, beaten and arrested? Possibly. Columbia’s damaged reputation and future as a learning community? Also possible. His son suggests an additional possibility — that he wept out of frustration and confusion about how to understand the terrible and stark choices that he, his colleagues, and his students were forced to confront in the tragic spring of 1968: “He could never quite reconcile the Jewish conservative and the Yiddish radical within him — never quite decide from what perspective to judge and interpret the times he had lived through.” The tension between the two perspectives, one formed in his youth, the other in adulthood, had a positive side, the son wrote, for it kept the father politically “sensitive to the dangers of extremism, but also to the dangers of injustice.” 

    I did not know Daniel Bell personally, and I cannot say whether that is the case or not. But if it were true, it would make me think back on him with considerable sympathy. Remaining alert both to “the dangers of extremism” and to “the dangers of injustice” is a tough balance to maintain. After a lifetime of engagement, sometimes hopeful, sometimes despairing, with the American left, I can but aspire, imperfectly, to achieve the same. Can a disabused idealist still be an idealist? How zealously should one oppose zealotry? From one old radical to another, rest in peace, Daniel Bell.

    The Master of Attention

    It would be silly to call William Wyler underrated — he was one of the most acclaimed and commercially successful movie directors in American history. A staple in every American film canon, he was my favorite director long before I knew his name. Growing up I watched Dodsworth, The Little Foxes, The Heiress, and Jezebel dozens of times without noticing that four of my favorite movies were directed by the same person. His legacy is really remarkable: the same person who directed these close and complex dramas, and who was Lillian Hellman’s favorite collaborator, also made movies such as Funny Girl, Ben-Hur, and Roman Holiday. The man whom we have to thank for the stardoms of Audrey Hepburn and Barbra Streisand also taught Bette Davis how to act and Laurence Olivier how not to overact. And yet he does not inspire the kind of cultish attachment that other directors do. Perhaps that is because his style is almost imperceptibly subtle. It is not clear whether there is such a thing as a “Wyler touch.” We love Wyler movies, but we don’t love Wyler.

    Wyler famously quipped that although he was not an auteur, he was one of the only American directors who could pronounce the word correctly. Critics said that Wyler had an “invisible style.” André Bazin, in Cahiers du Cinéma, called him the “Jansenist of mise-en-scène,” meaning that Wyler was controlled and even self-denying in his visual style, in contrast to the personal and easily identifiable signatures of John Ford, Fritz Lang, and Alfred Hitchcock. Yet there is nothing austere about Wyler. He was no shrinking violet as a director, sacrificing his own artistic individuality for the good of the picture. No, the Wylerian style is not a particular visual brand. It is a special kind of attention. As he himself explained, “I have never been as interested in the externals of presenting a scene as I have been in the inner workings of the people the scene is about.”

    Wyler films are not iconic. They do not dazzle us with spectacular images that stand on their own, that can be borrowed and parodied. Greta Gerwig’s Barbie can “quote” Kubrick, aping the imagery from 2001: A Space Odyssey and Dr. Strangelove for a gag — images so powerful that they transcend tone, context, and even the stupidity of a movie like Barbie. Wyler, by contrast, cannot be quoted. His genius is quiet, specific, unspectacular, and intimate. If Kubrick’s movies are operatic, Wyler’s are novelistic. Films like Kubrick’s are spectacles suspended in time. The composer of an opera manipulates the audience by controlling the music. Viewers are transported into the pace and mood dictated by the composer; they are held steady, made to wait, and then finally allowed the relief of climax: the aria. Just as an aria can be excerpted from an opera and still retain its power, the images from a Kubrick movie can be stolen, mimicked, and interpreted while still recalling the original. Not so Wyler’s. 

    Wyler’s films have the bounded and internally elaborated intimacy of a novel. Characters are shaped supremely in relation to one another. Wyler’s delicate details — slight movements of flesh, diffusions of light, fluid camera motions — reveal the inner worlds of each character. His greatest stylistic flourishes work on the viewer almost imperceptibly. They are meant to be discovered. Picking one up feels like eyeing a stranger on the street — say, watching as they grapple with some inconvenience. You recognize a slight shadow of annoyance cross their face, and for an instant you feel that you know exactly what they are thinking. The ephemeral intuition of other minds is so powerful precisely because it is so gentle and because it reveals our collective interest and investment in each other. Wyler is always counting on our capacity for close attention.

    Put simply, the “Wyler touch” is a prodigious gift for people, for understanding and conveying on film the truthful appearance of inner experience. He makes real the distances between people, the way they bounce off each other and retreat into themselves, the way they work at themselves and forge each other. In this regard he is one of cinema’s supreme humanists.

    Wyler does not fit into any of the familiar categories of American filmmakers, or rather he seemed to have a foot in all of them. He was a European Jew who came to America as a young man, like Ernst Lubitsch, Billy Wilder, and Michael Curtiz. Unlike them, he never worked in European film or theater; he got his start in Hollywood. He was the son of middle-class Jews, born and raised in Alsace-Lorraine, speaking both German and French (but not Yiddish). As a child during World War I, he remembered his family spending nights in the cellar — huddled with their Protestant and Catholic neighbors — waiting for the battle outside to end so they could find out whether the town was French or German that day. When he came to America, he did not come fleeing war or antisemitism. He sailed first-class in 1920, with his violin and his skis. His uncle Carl Laemmle had come over in steerage in 1884 and gone on to found Universal Pictures. Laemmle offered the eighteen-year-old Wyler a job in the mailroom in New York. 

    Wyler’s start in Hollywood looked more like John Ford’s than that of his German Jewish contemporaries. Both he and Ford started out making two-reel quickie Westerns for Laemmle, churning out simple stories starring any guy who could ride a horse, making as many films as possible, and hoping to slip in something creative somewhere to get noticed. Directors were paid less than cameramen, but they were free to change the scripts and try new techniques without asking permission. Like Ford’s, Wyler’s quickies received positive attention, and he was promoted to making full-length features in 1926, mostly more Westerns. His first talkie was the first all-sound outdoor film Universal made, a Western called Hell’s Heroes, shot on location in the Mojave Desert and Death Valley. Wyler insisted on the location for its flat landscape and cloudless sky. The film was Wyler’s first major success, and it was lauded, at home and in Europe, for its bleak realism and for Wyler’s innovative directorial choices. 

    In 1933, Wyler got the opportunity to direct a prestige picture, an adaptation of an important and successful play written by Elmer Rice, who also wrote the screenplay for the film. The protagonist of Counsellor at Law was a well-respected lawyer named George Simon, a Jew from the Lower East Side who first crossed the Atlantic in steerage and then made himself a prominent member of New York society. Naturally, none of the Jewish actors whom Laemmle and Wyler wanted for the film would agree to play it, for fear of being ethnically typecast, so, less naturally, in an early example of non-traditional casting, the Jewish immigrant was played by, of all people, John Barrymore. Barrymore was glad to have Wyler as a director: “Because you’re Jewish,” he told him, “you’ll be able to help me a great deal with the character.” Barrymore was determined to incorporate “Jewish gestures” into his performance, to the point that Wyler had to assure him that there is no Jewish way to pick up the phone.

    However crude Barrymore’s understanding of his character was, Wyler knew how to build George Simon with his camera, using the interplay of fluid, constant motion with sudden, suffocating zooms to lay bare the paradoxes in the man’s life. As Simon hums along, Wyler’s camera sweeps around his office, following his characters as they rush through a dizzying labyrinth of side rooms and halls, waiting rooms and libraries. Clients and colleagues step in and out of Simon’s office, and as his attention settles on each, Wyler’s camera slowly rests on them, before Simon’s mind hurries along and they are sent bustling out into the hall. As long as Simon can keep moving, as long as his energy fills the succession of rooms that make up his world, he is content. 

    This office, and the lateral motion of the camera as it roves its corridors, embodies life as Simon has constructed it for himself. But these jumbled forms distract from the agonizing contradiction which he endeavors to ignore. His life is divided against itself: he is a man of principles but he is striving to strike it rich; he adores his wife and his mother, but his wife is too snobbish to make civil conversation with his mother; he wants to help the kids from his old neighborhood, but this work jeopardizes his career, and they are not always grateful enough to make it worth it; he is surrounded by people who are fanatically loyal to him, but he is also out on a limb all on his own, risen above his station. 

    As the fates close in, Simon’s world becomes suddenly airless, and collapses in on itself. The camera stops roving and instead it rushes at Simon from below, isolating him in the frame. The claustrophobia of these shots forces us inside Simon, deprived of all the trappings of the bustling life that ordinarily keep him safe. Through movement, Wyler creates an effect reminiscent of the moment in Anna Karenina when Anna’s husband realizes that his wife may be in love with another man: “Now he experienced a feeling akin to that of a man who, while calmly crossing a precipice by a bridge, should suddenly discover that the bridge is broken, and that there is a chasm below. That chasm was life itself, the bridge that artificial life in which Alexey Alexandrovitch had lived.”

    As this torment becomes intolerable for Simon, we start to notice that there is one feature of the world that relieves him of isolation. The camera zooms in on the back of his head as he sits framed in a window, looking out at the city. We have heard, a few times, that his plucky receptionist is feeling unwell because she saw a man jump from a building this morning. It slowly occurs to us that a dark impulse has drawn Simon to the open glass. At the climax of the film, Simon looks out the window, and for the first time in the film the camera leaves the office and we look in at Simon from the outside. We hear the sounds of the city, watching the man inside sit alone in the dark. Slowly he starts to move toward us, toward the window, and in turn the camera accelerates toward him from below, exaggerating grotesque shadows in his face. In one swift motion he opens the window and starts to climb out.

    Though Counsellor at Law is based on Elmer Rice’s play, it does not feel like a filmed stage play, even if that shot into the window is the only time we leave the office. Wyler refuses to open it up, taking the audience inside Simon’s world by trapping us in the office, heightening the mania and the claustrophobia. The film is a masterclass in thoughtful adaptation, a Wyler specialty. Authors and playwrights knew they could trust him with their work. Lillian Hellman, his lifelong friend and collaborator, remarked that “it was Wyler who taught me about movies.” Their first collaboration was his adaptation of her play The Children’s Hour in 1936. This was a play that should have been unadaptable, because censorship under the Hays Code made it impossible to discuss, allude to, or otherwise hint at its subject: lesbianism. But Wyler understood that the core of the story was the horror of being at the mercy of a careless lie that hits dangerously close to an unspeakable truth. 

    Wyler’s straight interpretation of the story, These Three, was his first of seven collaborations with the great cinematographer Gregg Toland. In 1936, Toland had not yet perfected the deep focus technique for which he would become famous, but he and Wyler used blocking, the arrangement of actors in the frame, to allow the viewer to watch the unfolding drama of people watching each other. In the heterosexual version of the story, we watch a lifelong friendship fall apart when one woman is falsely accused of having an affair with the other’s fiancé. As much as the three victims of the lie declare that they are determined to stand together, Wyler’s blocking shows us how isolated Martha — the alleged guilty party — truly is, by how she stands and moves, how she peers at her friends and how the world glares at her. When the three confront Mrs. Tilford, the woman who has spread the rumor of Martha’s affair, Martha is positioned in the center and alone, pacing and turning in between two poles: Mrs. Tilford in the foreground at the far-left side of the frame and Joe and Karen, the engaged couple, together on the far-right. At times Martha stands in the background, wobbling slightly, at times she lurches forward as if about to faint. The physical separation between Martha and the others is so stark that when Mrs. Tilford addresses the two women separately, we know who she is speaking to in the close-up shots of her face because we can follow her eyes. She is firm, looking directly ahead when she speaks to Karen and saving contemptuous sidelong glances for Martha. 

    Martha herself cannot look at her friends straight on. Wyler trains his camera on her as she searches silently, furtively, for signs of a coming betrayal. Before they leave Mrs. Tilford’s house, Joe makes one final appeal. He stands positioned in the foreground with Mrs. Tilford, while Karen and Martha stand in the background, out of focus but clearly visible standing by the door. Karen turns her whole body to watch directly, proud and fortified by Joe’s loyalty. Martha, however, turns her head slowly, with a hint of shame, her body still angled toward the door as she watches. We know the lie has a kernel of truth: she is in love with Joe, and so his heroism is bittersweet, corrupted by her guilty desire. Even in the blurry background, it is painful to watch her twist and turn out on her own, believing herself undeserving of loyalty.

    Each of Wyler’s adaptations has this inventive quality, giving life to the essence of the story through the creation of a cinematic language. In The Little Foxes, Wyler translates the world of Hellman’s celebrated stage play into a series of intricately sculpted compositions. In one scene, a father and son test the depths of each other’s depravity, facing away from each other but watching each other carefully, each in their own shaving mirror. Allowing the audience to see both men’s faces clearly through this maze of mirrors was a technical achievement that drew bewildered admiration from no less an artist than Sergei Eisenstein. In The Letter, a genuinely extraordinary film from 1940, Wyler builds the strictly segregated but intimately intertwined worlds of British Malaya entirely out of light and shadow. The cutting, brilliant moon burns through the blinds of the carefully arranged, electrically lit colonial society, calling Bette Davis into its penetrating, almost transfiguring light. Dodsworth, in 1936, is a delicately observed portrait of an American marriage on the rocks, a study of the transformation of a middle-aged man (Walter Huston) in thoughtful pursuit of his own happiness, which deserves the compliment that Virginia Woolf paid to Middlemarch: It is one of the few movies about marriage for grown-ups.

    But perhaps Wyler’s most stylistically daring adaptation is his Wuthering Heights, in 1939, which focuses more on the characters’ emotional states than on the actual plot of the story. He almost entirely ignores the wild natural setting of the moors in which Emily Brontë’s story is set, and places most of the action indoors. But despite this choice — the screenplay was by Ben Hecht and Charles MacArthur — he uncannily recreates the dense and savage feeling of the novel, the gothic idealization of the dark and romantic and the contempt for the false brightness of ordinary life. 

    The camera drifts carefully through the interiors in which Cathy and Heathcliff spend their days: the bright, opulent halls of the society world of the Grange and the chiaroscuro gloom of Wuthering Heights. But it is not that the Grange is light and Wuthering Heights is dark. Things are not so simple. There is light in Wuthering Heights — light that obscures, that blinds. This is not the illuminating light of the sun, nor the soft light of lamplight, but the flashing, obliterating light of lightning. There, fire makes the darkness blacker. 

    Cathy says, “Whatever our souls are made of, his and mine are the same.” The strange material is lit with an obfuscating light. Wyler allows us to discover this secret vibrancy as the camera presses into Cathy’s and Heathcliff’s faces, where the light glints and pools on their lips, their eyes, their teeth. Both Cathy and Heathcliff are often lit from behind, their profiles framed, their features concealed. When Cathy declares, “I am Heathcliff,” lightning strikes behind her, engulfing her, so that for an instant she can barely be seen.

    Carefully and dramatically stylized as Wyler’s Wuthering Heights is, what it delivers is not the director’s artistic signature but his subject. Wyler crafts a visual vocabulary for the world-annihilating passion between two people. When Cathy and Heathcliff think of each other, their faces don’t glow, they shine, as if feverish. Both can move through the ordinary world and appear normal, even beautiful, but in this strange other light they are transformed. When Cathy, played by Merle Oberon, is in the world of the Grange, playing the society lady, her lashes cast soft shadows over dark, intelligent eyes. When Heathcliff calls her out of this comfort, her eyes widen and flash with an unearthly glow. The effect is maniacal. The light is so intense that her eyeballs strike the viewer as horrifically wet and round in their sockets. As Cathy and Heathcliff embrace on her deathbed, one of Cathy’s teeth catches this otherworldly light, and throughout the scene the tooth shines grotesquely in its winking glare. We get to see the animal that Heathcliff loves, sickening and dying inside her expensive cage.

    Laurence Olivier, brought off the stage where he felt he belonged to play Wyler’s Heathcliff, found the director sneering and sarcastic. Wyler, for his part, felt that Olivier was overacting, and after days of frustration brought the issue to a head in front of the whole crew. “Tell me Larry,” he barked, “what dimension do you reckon you got to now?” Olivier shot back, “I suppose this anemic little medium can’t take great acting.” Everyone on set burst out laughing, Wyler especially. Olivier was mortified. Wyler later took Olivier aside to talk earnestly about the potential for great film acting. Later in life Olivier would say, “If any film actor is having trouble with his career, can’t master the medium, and, anyway, wonders whether it is worth it, let him pray to meet a man like Wyler.” 

    Wyler believed that working with actors was one of the most essential jobs of the director, and he could be gentle and encouraging with his actors when he thought it would get results. Audrey Hepburn was a newcomer to acting when she was cast as the lead in Roman Holiday in 1953, and although possessed of a certain kind of natural charisma, she was shy and unsure about how to behave on camera. She was initially terrified of making mistakes, and Wyler set out to calm and secure her with a steady drip of praise. But he had a reputation for harsher treatment. In Jezebel, in 1938, the other — the better — Gone with the Wind, Wyler’s heroine is a vivacious and impudent southern belle named Julie played by a young (that is, exceedingly vivacious and impudent) Bette Davis. Viewers meet Julie arriving late to her own engagement party still dressed in her riding clothes. In her first few seconds on screen, Wyler wanted Julie to hike up the train of her dress with her riding crop and hook it over her shoulder. Easier said than done: it is a nifty bit of choreography and it needed to have the ease and feel of a gesture that she had performed hundreds of times. So, Wyler instructed Davis to repeat the scene over and over again. When an exasperated Davis demanded to know what she was doing wrong, Wyler merely told her, “I’ll know it when I see it.” Years later Davis herself admitted that Wyler’s perfectionism paid off. The gesture is powerfully sexy, tinctured with a masculine vigor and informality and yet elegant and indisputably feminine. In a film that forces viewers to wrestle with this woman on the way to accepting her, this single motion shows us at the start exactly who she is and leaves us as much enthralled to her as every character she bewitches. Julie hooks us with the crop, too.

    There were instances when Wyler used cruder methods to get what he wanted out of an actor. The Heiress was Wyler’s adaptation in 1949 of a play based on Henry James’ novel Washington Square. It stars Olivia de Havilland as Catherine Sloper, the plain single daughter of a wealthy family. Catherine is seduced and then jilted by a handsome but mercenary suitor (a young Montgomery Clift) who had promised to elope with her when he thought her money would be coming with them. He had not been prepared for her to renounce her fortune when her father, one of the cruelest characters in the history of film, played with blood-chilling elegance by Ralph Richardson, threatened to cut her out of his will. She waits by the door for the fiancé who had promised he would come. Hours crawl by. 

    The playwrights, Ruth and Augustus Goetz, who had adapted the James novel for the stage, had written a speech for the moment when Catherine realizes in agony that her suitor is not coming. They considered the speech essential to their interpretation of James’ story, which focuses on the tragedy of Catherine being so cruelly treated — by her father, her aunt, and her unscrupulous suitor — as unlovable. (Cruelty is the tale’s true subject.) The play called for her to say: “I used to think my misfortune was that Mother died. But I don’t think that anymore . . . If she had lived, she too could not have loved me.” The speech remained in Wyler’s script, but on the set he decided to cut it. Instead he instructed de Havilland to silently, laboriously drag up the staircase the suitcase she had packed for her elopement. He made de Havilland repeat the scene again and again, waiting in vain for her frail form to communicate the desperate heartbreak that must have tormented the character. Finally he filled the suitcase with heavy books, such that de Havilland could barely lift it. With unfeigned exhaustion, she lugs the laden suitcase up the stairs, and we realize she is bereft of any hope of comfort or strength or love, that she must carry on living alone in a world that will only ever treat her unkindly. She doesn’t need to tell us; she shows us. We are expected to see it because we are expected to look carefully.

    Wyler’s novelistic films allow us into the internal world of the individual through these delicate moments of discovery. It takes a special talent for collaboration, for other people, to construct a story out of the flesh and movement of others. This was Wyler’s special gift: recognizing, respecting, and at times manipulating the gifts of others. He was very demanding, but his style of work was designed to allow for as much cooperation as possible. The day before shooting, he would hold a full day of rehearsals, beginning with the cast gathering to read and discuss the scene they would film the next morning. Then he would stage the scene, allowing the actors to play it the way that was most comfortable for them. This also gave the cinematographer an opportunity to observe the staging and think through the photography. Then, before shooting, Wyler would have the cast rehearse the scene again, this time giving the actors suggestions on their performances. Shooting would not begin until every person involved had the opportunity to come to their own understanding of how it should work. Film is always a collaborative art, of course, but Wyler’s craft of collaboration was at another level altogether.

    The most important collaboration of Wyler’s career was his partnership with Gregg Toland. They worked together on seven films, including These Three, Wuthering Heights, and The Little Foxes. Toland, who may be the most famous American cinematographer of all time (particularly for his work with Orson Welles on Citizen Kane), worked with Wyler more than any other director. Some critics have declared that all there is to the “Wyler touch” is the Toland touch. In 1955, in an issue of Cahiers du Cinéma devoted to “The Situation of American Cinema” (dedicated to Orson Welles, “without whom the new American cinema would not be what it is”), the French critics offered a simple explanation for what they saw as Wyler’s artistic decline after 1948: “Gregg Toland était mort.” Gregg Toland was dead. Short, ruthless, and to the point: no Toland, no worthwhile Wyler. The American critic Andrew Sarris, who lived under the spell of the French critics, wrote: “Subtract Gregg Toland from Welles and you still have a mountain; subtract Toland from Wyler and you have a molehill.” 

    But compare Citizen Kane with The Best Years of Our Lives, the apotheosis of the partnership between Wyler and Toland, and it is clear that Wyler had a different use for Toland’s powers than Welles did. Both films make extensive use of the deep focus shot, Toland’s signature achievement — and a real challenge considering the limitations of contemporary cameras. This technique allows the background and the foreground of a scene to be clear and uniformly visible simultaneously. In Citizen Kane, Welles uses deep focus shots for dramatic irony, the action in the background giving poignant context to the events in the foreground. The shots are highly stylized, giving voice to the director’s God-like view of the characters. The image is an icon of the story. 

    For Wyler, by contrast, deep focus photography had a more practical, and more poignant, purpose. Having more than one area of focus allowed Wyler, as he wrote in 1947, to have “action and reaction in the same shot, without having to cut back and forth from individual cuts of the characters.” Even before Toland was technically able to render both action and reaction in focus, he and Wyler worked together to achieve this effect, particularly in These Three and The Little Foxes. The viewer feels that we are taking in a scene the way that we take in life, shifting our attention from one person to another. There is no irony, there is no dramatic distancing, no iconography in these shots. Wyler was not creating myths, he was seeking the textures of human feeling. He composed the actors in the frame to encourage the viewer to watch the characters react to each other. A man in the background is placing a difficult phone call. A man in the foreground is watching as a friend plays piano, but he keeps glancing back at the man in the background. We know that our man in the foreground is troubled; he cares for the man in the phone booth, and he worries about him, but he feels that the phone call must be made nonetheless. We stumble into this private moment of internal conflict and watch with him as the man hangs up the phone and leaves. 

    The Best Years of Our Lives is Wyler’s masterpiece. In 1946, filmmakers around the world sought to understand what the recent war had made of their societies. What was left after the wreckage? How could they distinguish between what was essentially Italian, Japanese, German, and therefore unchanged, and what had been transformed forever? The Best Years of Our Lives is the American contribution to this exercise in global introspection. Through the stories of three men returning from war to the families they left behind, Wyler reveals a society weighed down with a new awareness of the horror of the world. His characters struggle with disability, posttraumatic stress disorder, reemerging antisemitism, surging reactionary anticommunism, and pervasive fears of nuclear annihilation, as they try to discover who they — and we — are going to be now that we are on the other side of an apocalypse. And he does it by allowing the audience to watch the characters watch each other. The three men keep looking behind the eyes of the people around them for signs that they are not welcome, that they cannot be understood. As we catch characters regard each other, or pointedly avoid each other’s gaze, we feel that we are discovering for ourselves something that is happening inside them, the surging and subsiding feelings they have about each other.

    One of the most moving scenes in The Best Years of Our Lives comes when Al, the bank vice president turned infantryman turned bank vice president again, is invited to give a speech at a banquet, and he is so miserable about his job denying small loans to veterans like himself that he gets outrageously drunk. Wyler shoots his speech in a deep focus down the head table. As Al staggers through his speech at the center of the table, we see audience members listening with anger and astonishment in front of us, and his wife Milly (Myrna Loy, exquisite as usual), almost at the end of the table, sitting in pained anxiety, occasionally exchanging nervous looks with Al’s boss. But as Al continues, he becomes more eloquent: 

    I wanna tell you that the reason for my success as a sergeant is due primarily to my previous training in the Corn Belt Loan and Trust Company. The knowledge I acquired in the good old bank I applied to my problems in the infantry. For instance, in Okinawa, a major comes to me . . . he says, “Stephenson, you see that hill?” “Yes, sir. I see it.” “All right,” he said. “You and your platoon will attack said hill and take it.”

    So I said to the major, “But, uh, that operation involves considerable risk. We haven’t sufficient collateral.” “I’m aware of that,” said the major, “but the fact remains, that there is the hill and you are the guys who are going to take it.” So I said to him, “I’m sorry, major. No collateral, no hill.” So we didn’t take the hill, and we lost the war. Uh, I think that, uh, little story has considerable significance. But I’ve, uh, forgotten what it is.

    He finishes his speech by professing his belief that the bank will end up granting so many small loans to returning servicemen “that people will think we’re gambling with the depositors’ money.” He concludes: “And we will be. We’ll be gambling on the future of this country.”

    As Al speaks, we watch his boss look up sharply and furrow his brow, and the audience look on in confusion. Milly, however, is transformed, now staring at Al proudly and lovingly. We watch her realize that, unhappy and drunk as he is at this moment, Al is becoming a better man than he was before the war. He had been a selfish man and had not understood sacrifice or fellowship. He had no sense of civic responsibility and he, like his bosses at the bank, was happy to disguise his greed, to himself and the world, by recourse to economic principles and their alleged moral neutrality. Now, through all his pain and confusion, he believes profoundly that Americans do share a collective destiny, and a collective commitment to the betterment of the world. All this Milly understands, as we can see in her eyes, because she is not so unchanged as she appears, and because she loves him. 

    Mrs. Miniver was released in 1942, but Wyler directed it before the United States entered the war. Even Wyler would — and did — acknowledge that the film was propaganda, intended to stir American sympathy for the British under Nazi attack by showing life on the home front as the war becomes increasingly desperate. The film ends with a speech given by the local priest after the town — the fictional village of Belham, near London — has been disfigured by Nazi bombs, when the community comes together to mourn. Wyler himself worked on the speech, which was later translated into French, German, and Italian to be broadcast throughout Europe on the Voice of America, and airdropped in millions of leaflets into German-occupied territories. This time Wyler did want an iconic moment, poignant but clear, to send the audience a forceful message. And that is what he achieved. As characters take a break from burying townspeople whom the audience, over the course of the film, has been taught to love, we are told: “They will inspire us with an unbreakable determination to free ourselves and those who come after us from the tyranny and terror that threaten to strike us down.” When the priest declares that “this is the people’s war, and the people must fight it,” audiences watching today are no less stirred than the many who watched the film when it came out. 

    That tone was profoundly unlike the tone of The Best Years of Our Lives, the keynote of which was not righteousness but goodness. Goodness is not iconic. Goodness cannot be broadcast over radio waves to move men to war. Goodness is even banal. The last scene of Wyler’s great film is a small wedding, in which all these people who were mauled or magnified by history are gathered. Homer, the football star who lost his hands in the Navy — played, unforgettably, not by a professional actor but by a war veteran with prosthetic arms and hands — has finally stopped shutting himself away in shame from the people who love him, and allowed his high school sweetheart Wilma to show him that she is as devoted to him as ever. Fred — a moving and only intermittently tough Dana Andrews — is a heroic fighter pilot who came home to find himself trapped in a bad marriage and a humiliating job, and by the end of the film he is still adrift. The wedding is the first time he has seen Peggy, the girl he really loves, and who truly loves and understands him, since he broke things off with her and was subsequently dumped by his frivolous wife. She is played by Teresa Wright, as uncannily subtle here as she is everywhere. Fred and Peggy stand separated — him closest to the camera in the foreground and her in the background in brilliant white — locked in their own internal struggles on the left side of the frame.

    We, with the rest of the wedding guests, are on edge as the ceremony unfolds, afraid that Homer will not be dexterous enough with his hooks to place the ring on his bride’s finger. The crowd watches as Homer promises to be with Wilma for richer or for poorer, but we start to notice a private moment unfolding. Fred turns slowly to look back at Peggy. Their eyes meet, but they grimly look away. As Wilma returns Homer’s promise, Fred looks back again and we realize Peggy is staring at him with tears in her eyes. Their gaze holds, and we can see clearly in Peggy’s eyes that it is full and confident, echoing the loving look that Wilma is giving Homer on the right side of the frame. As the ceremony ends, and the rest of the wedding party swells to the right to hug Homer and Wilma, Fred and Peggy stand still, in their own world. Fred crosses the room back to Peggy, slowly and deliberately, and they kiss in the background. Even as we share in the relief and joy of the rest of the wedding party, we alone catch Fred and Peggy’s private moment of loving understanding.

    We are utterly in thrall to the powerful emotions of the characters, but we barely notice the technique that is making it possible for us to feel that we have stumbled upon the intimate pains and joys of an entire generation. There is nothing brash and operatic about the blocking; almost nothing is said. The pinnacle of Wyler’s and Toland’s collaboration, the apotheosis of their powers, is this quietly climactic moment, when we can at last watch two good people meet each other’s eyes and see each other.

    Is it ironic or perfectly correct that the great American postwar epic is a simple story, an intimate encounter with ourselves as we find each other — unsure, foolish, hurt — and good? Without affectation or swagger, Wyler builds a new country stumbling into a new world, fat and safe, apparently untouched by man’s greatest atrocities, and yet firm under the weight of a grandiose but clear-eyed sense of responsibility. This delicate idealism would be strangled by spectacular iconography; the brand of the artist would crush it. Here is the gentle profundity of Wyler’s humanist generosity. He gives us to each other to discover, teaching viewers nothing less than how to recognize the human, how to live in community with one another, how to watch, to listen, to pay attention. There is no shorthand for the quality of Wyler’s attention, which is, ultimately, the Wyler Touch. Attention, after all, is not only the condition of cinematic experience, but also the beginning of all our moral and emotional duties.