Saudi Arabia: The Chimera of a Grand Alliance

    Even alliances between countries that share similar cultures and rich, intersecting histories can be acrimonious. France and Israel, for example, provoke vivid and contradictory sentiments for many Americans. Franco-American ties are routinely strained. No one in Washington ever believed that Charles de Gaulle’s nuclear independence, guided by the principles of tous azimuts (shoot in any direction) and dissuasion du faible au fort (deterrence of the strong by the weak), meant that France might try to intimidate the United States. But there were moments when it wasn’t crystal clear whether Paris, free from the North Atlantic Treaty Organization’s integrated military command, might harmfully diverge from Washington in a confrontation with the Soviet Union. Still, even when things have been ugly, quiet and profound military and intelligence cooperation continued with the French, almost on a par with the exchanges between Washington, London, and Canberra. It didn’t hurt that a big swath of affluent Americans has loved Paris and the sunnier parts of France for generations, and that French and American universalism essentially speak the same language. These things matter. 

    The United States has sometimes been furious at Israel — no other ally has, so far as we know, run an American agent deep inside the U.S. government hoovering up truckloads of highly classified information. Israel’s occupation of the West Bank, much disliked and denounced by various American governments, is probably permanent: setting aside the Israeli right’s revanchism, the proliferation of ever-better ballistic weaponry and drones negates any conceivable good faith that might exist in the future between Israeli and Palestinian leaders, who never seem to be able to check their own worst impulses. Geography is destiny: Israel simply lacks the physical (and maybe moral) depth to not intrude obnoxiously into the lives of Palestinians. The Gaza war has likely obliterated any lingering Israeli indulgence towards the Palestinian people, the willingness to take what used to be called, before the Intifadas eviscerated the Israeli left, “risks for peace.” An ever larger slice of the Democratic Party is increasingly uncomfortable with this fate: the rule of (U.S.-subsidized) Westerners over a non-Western, mostly Muslim, people. But the centripetal forces — shared democratic and egalitarian values, intimate personal ties between the American and Israeli political and commercial elites, a broader, decades-old American emotional investment in the Jewish state, a common suspicion of the Muslim Middle East, and a certain Parousian philo-Semitism among American evangelicals — have so far kept in check the sometimes intense official hostility towards Israel and the distaste among so much of the American intelligentsia. 

    None of this amalgam of culture, religion, and history, however, works to reinforce relations between the United States and Islamic lands. Senior American officials, the press, and think tanks often talk about deep relationships with Muslim Middle Eastern countries, the so-called “moderate Arab states,” of which Egypt, Jordan, and Saudi Arabia are the most favored. Presidents, congressmen, diplomats, and spooks have certainly had soft spots for Arab potentates. The Hashemites in Jordan easily win the contest for having the most friends across the Israel divide in Washington: sympathizing with the Palestinian cause, if embraced too ardently, could destroy the Hashemites, who rule over sometimes deeply disgruntled Palestinians. American concern for the Palestinian cause rarely crosses the Jordan River. (Nor does the even more intense European concern for the Palestinians intrude into Europe’s relations with the Hashemite monarchy.) 

    The Hashemites are witty enough, urbane enough, and sufficiently useful to consistently generate sympathy and affection. Even when King Hussein went to war against Israel in 1967 or routinely sided with Saddam Hussein, his style and his manner (and I’m-really-your-friend conversations with CIA station chiefs and American ambassadors) always encouraged Washington to forgive him his sins. The Hashemites, like the Egyptian military junta, have routinely, if not always reliably, done our dirty work when Washington needed some terrorist or other miscreant held and roughly interrogated. Such things matter institutionally, building bonds and debts among officials. 

    But little cultural common ground binds Americans to even the most Westernized Arabs. Arabists, once feared by Israel and many Jewish Americans, always had an impossible task: they had to use realist arguments — shouldn’t American interests prefer an alliance with twenty-two Arab countries rather than with a single Jewish one? — without underlying cultural support. They had to argue dictatorship over democracy or belittle Israel’s democracy enough (“an apartheid state”) to make it seem equally objectionable. Outside of American universities, the far-left side of Congress, the pages of The Nation, Mother Jones, and the New York Review of Books, and oil-company boardrooms, it hasn’t worked — yet. Too many Americans have known Israelis and visited the Holy Land. And too many are viscerally discomfited by turbans and hijabs. Culture, the bond that rivals self-interest, just isn’t that fungible.

    Even the Turks, the most Westernized of Muslims, didn’t have a large fan club in America when the secular Kemalists reigned in Ankara — outside of the Pentagon and the Jewish-American community, which deeply appreciated Turkey’s engagement with Israel. The Turks’ democratic culture never really blossomed under the Kemalists, who couldn’t fully shake their fascist (and Islamic) roots. The American military still retains a soft spot for the Turks — they fought well in Korea, and their military, the southern flank of NATO, has a martial ethic and a level of competence far above any Arab army; and they have continuously allowed the Pentagon to do things along the Turkish littoral, openly and secretly, against the Soviets and the Russians. 

    Yet their American fan club has drastically shrunk as the Turkish government, under the guidance of the philo-Islamist Recep Tayyip Erdoğan, has re-embraced its Ottoman past, enthusiastically used state power against the opposition and the press, and given sympathy and some support to Arab Islamic militants, among them Hamas. The Turks have repeatedly pummeled Washington’s Kurdish allies in Syria (who are affiliated with Ankara’s deadly foe, the terrorism-fond Kurdistan Workers Party). More damning, Erdoğan purchased Russian S-400 surface-to-air missiles, compromising Turkey’s part in developing and purchasing America’s most advanced stealth fighter-bomber, the F-35. The Pentagon, always Turkey’s most reliable ally in Washington, feels less love than it used to feel.

     No great European imperial power ever really integrated Muslim states well into its realm. Great Britain and France did better than Russia and Holland; the Soviet Union did better than Russia. With imperial self-interest illuminating the way, the British did conclude defensive treaties with allied-but-subservient Muslim lands — the Trucial States and Egypt–Sudan in the nineteenth and twentieth centuries — that could greatly benefit the natives. The emirs in the Gulf, once they realized that they couldn’t raid Indian shipping without fierce retribution, accepted, sometimes eagerly, British protection, grafting it onto the age-old Gulf customs of dakhala and zabana — finding powerful foreign patrons. The emirates needed protection from occasionally erupting militant forces from the peninsula’s highlands — the Wahhabi warriors of the Saud family.  

    The British, however, failed to protect their most renowned clients, the Hashemites, in their native land, the Hijaz in Arabia, after wistfully suggesting during World War I that the Hashemites might inherit most of the Near East under Great Britain’s dominion. King Hussein bin Ali had proved obstinate, refusing to accept Jews in Palestine and the French in Syria. Britain switched its patronage to the Nejd’s Abdulaziz bin Abdul Rahman Al Saud. Backed by the formidable Ikhwan, the Brothers, the Wahhabi shock troops who had a penchant for pillaging Sunni Muslims and killing Shiite ones, Ibn Saud conquered the Hijaz, home to Mecca and Medina and the Red Sea port of Jeddah, in 1925. Checked by the Royal Air Force in Iraq and the Royal Navy in the Persian Gulf, Saudi jihadist expansion stopped. In 1929 Ibn Saud gutted the Ikhwan, who had a hard time accepting the post-World-War-I idea of nation-states and borders, and created more conventional military forces to defend his family and realm. In 1932 he proclaimed the Kingdom of Saudi Arabia. The dynasty’s Hanbali jurisprudence remained severe by the standards enforced in most of the Islamic world, but common Sunni praxis and political philosophy held firm: rulers must follow the holy law, but they have discretion in how they interpret and enforce it; in practice, kings and princes could sometimes kick clerics and the sharia to the ground when exigencies required. 

    Since Britain’s initial patronage, the Saudis officially have remained wary of foreigners, even after the 1950s when the royal family started allowing thousands of them in to develop and run the kingdom. Ibn Saud put it succinctly when he said, “England is of Europe, and I am a friend of the Ingliz, their ally. But I will walk with them only as far as my religion and honor will permit.” He might have added that he appreciated the power of the RAF against the Ikhwan on their horses and camels. After 1945, Ibn Saud and his sons sought American protection and investment. They saw that Britain was declining; it was also massively invested in Iran. The modern Middle East has been an incubator of ideologies toxic to monarchies. The three Arab heavyweights of yesteryear — Baathist Iraq, Baathist Syria, and Nasserite Egypt — were all, in Saudi eyes, ambitious predators. The Soviet Union lurked over the horizon, feeding these states and, just as bad, Arab communists and other royalty-hating leftists who then had the intellectual high ground in the Middle East. But there was an alternative. 

    American power was vast, Americans loved oil, and America’s democratic missionary zeal didn’t initially seem to apply to the Muslim Middle East, where American intrusion more often checked, and usually undermined, European imperial powers without egging on the natives towards democracy. (George W. Bush was the only American president to egregiously violate, in Saudi eyes, this commendable disposition.) American oilmen and their families came to Saudi Arabia and happily ghettoized themselves in well-organized, autonomous, well-behaved communities. The American elite hardly produced a soul who went native: no T.E. Lawrence, Gertrude Bell, or Harry St. John Bridger Philby — passionate, linguistically talented, intrepid Englishmen who adopted local causes, sometimes greatly disturbing the natives and their countrymen back home. A nation of middlebrow pragmatic corporations, backed up by a very large navy, Americans seemed ideal partners for the Saudi royals, who were always willing to buy friends and “special relationships.” As Fouad Ajami put it in The Dream Palace of the Arabs, “The purchase of Boeing airliners and AT&T telephones were a wager that the cavalry of the merchant empire would turn up because it was in its interest to do so.”

    But Americans could, simply by the size of their global responsibilities and strategies, be unsettling. In Crosswinds, Ajami’s attempt to peel back the layers of Saudi society, he captures the elite’s omnipresent trepidation, the fear of weak men with vast wealth in a region defined by violence:

    “The Saudis are second-guessers,” former secretary of state George Shultz said to me in a recent discussion of Saudi affairs. He had known their ways well during his stewardship of American diplomacy (1982–1989). This was so accurately on the mark. It was as sure as anything that the Saudis lamenting American passivity in the face of Iran would find fault were America to take on the Iranians…. In a perfect world, powers beyond Saudi Arabia would not disturb the peace of the realm. The Americans would offer protection, but discreetly; they would not want Saudi Arabia to identify itself, out in the open, with major American initiatives in the Persian Gulf or on Arab–Israeli peace. The manner in which Saudi Arabia pushed for a military campaign against Saddam Hussein only to repudiate it when the war grew messy, and its consequences within Iraq unfolding in the way they did, is paradigmatic. This is second-guessing in its purest.

    Saudi Arabia has had only one brief five-year period, from 1973 to 1978, when the Middle East (Lebanon excepted) went more or less the way that the royal family wanted. They weren’t severely threatened, their oil wealth had mushroomed, internal discontent had not metastasized (or at least was not visible to the royal family), and everybody — Arabs, Iranians, Americans, Soviets, and Europeans — listened to them respectfully. In 1979, when the Iranian revolution deposed the Shah, and Sunni religious militancy put on muscle, and the Soviets invaded Afghanistan, the golden moment ended. Enemies multiplied. Since then, as Nadav Safran put it in Saudi Arabia: The Ceaseless Quest for Security, “the Saudis did not dare cast their lot entirely with the United States in defiance of all the parties that opposed it, nor could they afford to rely exclusively on regional alliances and renounce the American connection altogether in the view of the role it might play in various contingencies ... the leadership ... endeavored to muddle its way through on a case-by-case basis. The net result was that the American connection ceased to be a hub of the Kingdom’s strategy and instead became merely one of several problematic relationships requiring constant careful management.” 

     Which brings us to the current Saudi crown prince, Muhammad bin Salman, the de facto ruler of the country — easily the most detested Saudi royal in the West since the kingdom’s birth. With the exception of Iran’s supreme leader, Ali Khamenei, who is the most indefatigable Middle Eastern dictator since World War II, MBS is the most consequential autocrat in the region. And the prince has made a proposal to America, a proposal that may survive the Gaza war, which has reanimated anti-Zionism and constrained the Gulf Arab political elite’s decade-old tendency to deal more openly with the Jewish state. To wit: he is willing to establish an unprecedentedly tight and lucrative relationship with Washington, and let bygones be bygones — forget the murder of Jamal Khashoggi and all the insults by Joe Biden — so long as America is willing to guarantee Saudi Arabia’s security, in ways more reliable than in the past, and provide Riyadh the means to develop its own “civilian” nuclear program. Saudi Arabia would remain a major arms-purchaser and big-ticket commercial shopper and a reliable oil producer (the prince is a bit vague on exactly what Saudi Arabia would do with its oil that it isn’t doing now or, conversely, what it might not do in the future if Riyadh were to grow angry). And the Saudis would establish diplomatic relations with Jerusalem — clearly the pièce de résistance in his entreaty to the United States. With the Gazan debacle, the appeal of MBS’ pitch will increase for Israelis and Americans, who will seek any and all diplomatic means to turn back the anti-American and anti-Zionist tide.

    MBS and the Jews is a fascinating subject. It is not atypical for Muslim rulers, even those who sometimes say unkind things about Jews, to privately solicit American, European, and Israeli Jews. Having Jews on the brain is now fairly common in the Islamic world, even among Muslims who aren’t anti-Semites. Imported European anti-Semitism greatly amped up Islam’s historic suspicions of Judaism: in the Quran, the Prophet Muhammad is clearly disappointed by the Jewish refusal to recognize the legitimacy, the religious continuity, of his calling, which led to the slaughter of an Arabian Jewish tribe, the Banu Qurayza. Dissolve to the Holocaust, the creation of Israel, the wars and their repeated Arab defeats, the centrality of Israel in American and Soviet Middle Eastern foreign policy, the prominence of Jewish success in the West, especially in Hollywood, the constant chatter among Western Christians about Jews — all came together to give Al-Yahud an unprecedented centripetal eminence in the modern Islamic Middle East. 

    When MBS came to the United States in 2018, he and his minions engaged in extensive Jewish outreach. The prince admires Jewish accomplishment. His representatives are similarly philo-Semitic. The head of the World Muslim League, Muhammad bin Abdul Karim Issa, sometimes sounds as if he could work for B’nai B’rith International. Not that long ago, before 9/11, the League, an official organ of the Saudi state, pumped a lot of money into puritanical (Salafi) missionary activity, competing with the “secular” Egyptian establishment and the clerical regime in Tehran as the most ardent and well-funded proliferators of anti-Semitism among Muslims. So it is intriguing that MBS, whose father long had the Palestinian dossier at the royal court, has developed what appears to be a sincere and, at least for now, non-malevolent interest in Jews. 

    One suspects that the prince sees a certain religious and cultural affinity with Jews: Judaism and Islam are juristically and philosophically much closer to each other than Christianity and Islam. MBS is sufficiently well-educated — he has a law degree from King Saud University — to know this; he has now traveled enough, and met enough Jews around the world, to feel it. Nearly half of the Jews in Israel came from the Middle East.  The other half — the Ashkenazi, or as Bernard Lewis more accurately described them, the Jews of Christendom — often saw themselves, before arriving in Zion, as a Middle Eastern people in exile. Here is a decent guess about MBS’ reasoning: if the Jews, a Middle Eastern people now thoroughly saturated with modern (Western) ideas, could become so accomplished, then Saudi Muslims could, too. The Jewish experience — and association with Jews — might hold the keys to success. 

    There is a very long list of Muslim rulers from the eighteenth century forward, who, recognizing the vast chasm in accomplishment between Western (and now Westernized Asian) and Islamic lands, have tried to unlock the “secrets” of foreign power and success. Oil-rich Muslim rulers have tried to buy progress with petroleum profits. MBS certainly isn’t novel in his determination to make his own country “modern.” His audacity, even when measured against that of Shah Mohammad Reza Pahlavi, who aspired to make Iran “the Germany of the Middle East,” is impressive. There is the prospective giant metal tube in the northwest corner of the country, which, according to the prince’s NEOM vision (“neo” from the Greek and “m” from the Arabic mustaqbal, future), will one day hold upwards of nine million people in a verdant paradise where everyone has the Protestant work ethic and the air-conditioning never breaks down. This is the dreamscape of an Arab prince who is not intellectually shackled by the rhythms and the customs of his homeland. His vast resort complex on the Red Sea, already under construction, is likewise being funded by the sovereign wealth fund because Western and Asian bankers remain dubious about its profitability. A dozen five-star luxury resorts, dependent on visiting affluent Europeans, will have to allow topless bathers and a lot of alcohol if they have any chance of making money; thousands of lower-class Saudi men — not imported foreign labor — will in theory keep these resorts running. 

    The prince is searching for the keys to unleash Saudi Arabia’s non-oil potential — using prestigious Western consultancy firms that promise to bring greater productivity and efficiency to the kingdom’s gross national product. He is trying to do what every significant Muslim ruler has done since Ottoman sultans realized they could no longer win on the battlefield against Christians: grow at home greater curiosity, talent, and industry. 

    Unlike the Arab elites in the lands that started seriously Westernizing in the nineteenth century and have since seen their countries racked and fractured by foreign ideologies, brutal authoritarian rulers, rebellions, and civil and sectarian wars, MBS appears to be an optimist. He believes that under his firm guidance Saudi Arabia can leapfrog from being the archetypal orthodox Islamic state to a self-sustaining, innovative, entrepreneurial, tech-savvy, well-educated powerhouse. Ajami, the finest chronicler of the Arab world’s misery, was deeply curious about Saudi Arabia because it was the last frontier, a land with considerable promise that had not yet embraced enough modernity, in all the wrong ways, to cock it up. The Saudi identity has been slow to nationalize — it was decades, perhaps a century, behind the cohering forces that gave Egypt and then Syria some sense of themselves. As Alexis Vassiliev, the great Russian scholar of Saudi Arabia and Wahhabism, put it: 

    The idea of a national territorial state, of a “motherland,” was new to Arabian society. The very concept of a motherland, to which individuals owe their primary loyalty, contradicts the spirit of Islam, which stresses the universal solidarity of believers as against non-Muslims. National consciousness and national feelings in Saudi Arabia were confined to a narrow group working in the modern sector of the economy and in the civil and military bureaucracy. Those who described themselves as nationalists were, rather, reformers and modernists, who wanted to create a more modern society. But their sentiments were so vague that the left wing of the “nationalists” even avoided using the name Saudi Arabia because of their attitude to the Al Saud.

    Saudi Arabia has been rapidly modernizing since the 1960s. Measured by massive concrete buildings, roads, luxury hotels with too much marble, electrification, communications, aviation, urban sprawl, rural decline, and access to higher education, Vassiliev’s observation is undoubtedly correct: “Saudi Arabia has experienced more rapid change than any other Middle Eastern country and the old social balance has been lost forever.” But spiritually, in its willingness to import Western ideas as opposed to Western gadgets, know-how, aesthetics, and organization, the kingdom changed only fitfully. Royal experiments in reform, especially under King Abdullah (2005–2015), could be ended as quickly as they began. 

    Before MBS, Saudi rulers and the vast oil-fed aristocracy were deeply conservative at home (if not in their homes), fearful of the outside world that relentlessly corroded the traditions that gave the kingdom its otherworldly, female-fearing, fun-killing, profoundly hypocritical weirdness. But this conservative status quo also offered a certain moral coherence, political stability (the royal family, thousands strong, were collectively invested), as well as a quirky governing class that was fine for decades, through the worst of the Wahhabi efflorescence that followed the Iranian revolution and the seizure of the Grand Mosque in Mecca, with a gay intelligence chief. Saudis might be haughty, lacking the multilingual grace that came so easily to old-school Arabs, who retained Ottoman propriety with first-rate Western educations, but they were aware of their limitations. They gave the impression that they couldn’t compete — even at the apex of Saudi power in the mid-1970s. Most of the royal family likely didn’t want to try. When Ajami was alive (he died in 2014, a year before MBS began his rise), Saudi Arabia hadn’t taken the giant, irreversible leap forward. It has now. 

     The crown prince has been a one-man wrecking ball, transforming the country’s collective leadership, where princes — uncles, brothers, sons, and cousins — effectively shared power under the king, into a dictatorship. Whatever brakes remain on the system (King Salman is old and ailing, but rumors don’t yet have him non compos mentis) will likely not outlast the king’s death. There has never been any clear demarcation between the nation’s treasury and the royal family’s purse; MBS appears to have greatly narrowed the points of access to the country’s oil wealth, reserving them for himself and his minions. His great shakedown at the Ritz-Carlton in Riyadh in November 2017, when nearly four hundred of the kingdom’s richest and most powerful people were forcibly held and squeezed or stripped of their assets, killed the old order. Some “guests” reportedly were physically tortured, had their families threatened, or both. Such behavior would have been unthinkable before. Traditional kingdoms always have informal rules that buttress the status quo and check arbitrary power. The crown prince’s new-age mindset — his determination to stamp out all possible opposition to his modernist vision with him alone at the helm — was vividly on display at the Ritz. 

    This autocratic thuggery earned little censure in the West, on either the left or right. Some appeared to believe that the rightly guided prince was actually stamping out corruption. Many Saudis, especially among the young, may have sincerely enjoyed the spectacle of the spoiled ancien régime getting its comeuppance. The same unchecked princely temperament, however, reappeared in the Saudi consulate in Istanbul on October 2, 2018, when Jamal Khashoggi crossed the threshold. It is a near-certainty that MBS intended to kill, not to kidnap, the elite dissident. It is not at all unlikely, given the prince’s reputation for work and detail and his aversion to delegating decisions to others, that he personally approved the dissident’s dismemberment. 

    The crown prince is gambling that Saudi nationalism, which is now real even if its depth is hard to measure, will attach itself to him, as nationalisms do in their need for a leader. He is trying to downgrade Islam by upgrading the nation. He has reined in the dreaded morals police, the mutawwa, who could harass and arrest almost anyone. The urban young, especially if they come from the middle and upper-middle class, have long loathed this police force, which comes from the more marginal precincts of society, and so they find MBS’ mission civilisatrice appealing. The crown prince is essentially trying to pull an Atatürk, who created a Turkish nation-state out of a Muslim empire. Mustafa Kemal created his own cult: he was a war hero, the savior of the Turks from the invading Greek Christian armies and the World War I victors who were carving up the carcass of the Ottoman state. He fused himself with the idea of nationhood. Tens of thousands of Turkish Islamists respectfully visit his tomb in Ankara. 

    When it came to cult worship, Saudi kings and princes had been fairly low-key compared to most other Middle Eastern rulers. Yet MBS’ sentiments are, again, more modern. He has effectively established a police state — the first ever in Saudi history. His creation is certainly not as ruthless as the Orwellian nightmares of Saddam Hussein’s Iraq or Assad’s Syria; it is neither as loaded with internal spies nor as rife with prison camps as Abdul Fattah El-Sisi’s Egypt. But MBS’ Arabia is a work in progress. Those in America and Israel who advocate that the United States should draw closer to MBS, so as to anchor a new anti-Iran alliance in Riyadh, are in effect saying either that we should endorse MBS and his vision of a more secular, female-driving, anti-Islamist Saudi Arabia without highlighting its other, darker aspects, or that we should just ignore the kingdom’s internal affairs and focus on what the crown prince gives us externally. This realist calculation usually leads first back to the negatives: without the crown prince’s support of American interests, Russia, China, and Iran (the revisionist axis that has been gaining ground as America has been retrenching) will do even better. And then the positive: Saudi recognition of Israel would permanently change the Jewish state’s standing in the Muslim world — a long-sought goal of American diplomacy. 

    The prince clearly knows how much Benjamin Netanyahu wants Saudi Arabia’s official recognition of Israel. The Israeli prime minister has loudly put it at the top of his foreign-policy agenda. (Before the Gaza war, it might have had the additional benefit of rehabilitating him at home.) The prince clearly knows how much American Jewry wants to see an Israeli embassy in Riyadh. And after some initial wariness, the Biden administration now wants to add the kingdom to the Abraham Accords. Recognition of Israel by Bahrain, the United Arab Emirates, Morocco, and Sudan was good, but Saudi Arabia would be better. Although the White House certainly hasn’t thought through how the United States would fit into an Israeli-Saudi-US defensive alliance, or whether it would even be politically or militarily possible, the administration proffered the idea before Biden went to Saudi Arabia in 2022 — or echoed an earlier, vaguer Saudi suggestion of a defensive pact — as part of Riyadh’s official recognition of Israel. Given the importance that MBS attaches to things Jewish, he may well believe his offer of Israeli recognition gives him considerable leverage in future dealings with the United States. 

    Joe Biden paved the way for MBS’ go-big proposal by making one of the most embarrassing flips in presidential history. Biden came into office pledging to reevaluate US-Saudi ties and cast MBS permanently into the cold for the gruesome killing of Khashoggi and, a lesser sin, for making a muck of the war in Yemen, which, given America’s crucial role in maintaining and supplying the Saudi Air Force, made the United States an accomplice in a bombing campaign that has had a negligible effect on the Shiite Houthis’ capacity to fight but has killed thousands, perhaps tens of thousands, of Yemeni civilians. (In civil wars, it is hard to know who is starving whom, but the Saudi role in bringing starvation to Yemen has not been negligible.) Fearing another hike in oil prices before the midterm elections, Biden traveled to Saudi Arabia, fist-bumping MBS and getting not much in return except reestablishing what has been true in US–Saudi relations from the beginning, when Franklin Delano Roosevelt hosted two of King Ibn Saud’s sons in Washington: everything is transactional. 

    MBS’ offer to America arrived with China’s successful intervention into Saudi-Iranian relations. Beijing obtained an agreement for the restoration of diplomatic ties between the two countries, which Riyadh had severed in 2016, after King Salman executed Nimr al-Nimr, the most popular Saudi Shiite cleric in the oil-rich Eastern Province, and Iranian protestors set fire to the Saudi embassy in Tehran. Beijing also appears to have aided a Saudi-Iranian ceasefire and an understanding about Yemen. MBS, who had been eager to extricate himself and the Saudi treasury from the peninsula’s “graveyard of nations,” reduced the number of Saudi forces engaged in the conflict; Tehran appears to have convinced the Houthis, at least temporarily, not to lob Iranian-provided missiles into their northern neighbor. 

    China offers MBS something that Israel and the United States realistically no longer do: a possible long-term deterrent against Iranian aggression in a post-American Middle East. Beijing likely isn’t opposed to the Islamic Republic going nuclear, since this would further diminish the United States, which has under both Republican and Democratic presidents told the world that an Iranian nuke is “unacceptable.” Given Chinese access in Tehran and Moscow, which is developing an ever-closer military relationship with the clerical regime, the value of Chinese intercession will increase. Given Beijing’s economic interest in Saudi Arabia’s oil (it is now the kingdom’s biggest customer), MBS is certainly right to see in the Chinese a possible check on any Iranian effort to take the kingdom’s oil off-market. The Islamic Republic has never before had great-power patrons. The Chinese bring big advantages to Iran’s theocrats — much greater insulation from American sanctions, for example; but they may also corral Tehran’s freedom of action a bit. 

    In the controversial interview that he gave not long ago to The Atlantic, MBS clearly thought he could wait out the Biden administration, and that America’s and the West’s need for Saudi crude, and the rising power of China, gave the prince time and advantage. He has won that tug-of-war. America cannot possibly ostracize the ruler who controls the largest, most easily accessible, and highest-quality pool of oil in the world. The Gaza war will also play to MBS’ advantage as both Israel and the United States will seek Saudi intercession to counter what’s likely to become an enormous propaganda victory for Iran’s “axis of resistance.” The crown prince may well be racing his country towards the abyss, eliminating all the customs and institutions that made the Saudi monarchy resilient and not particularly brutish (not by Middle Eastern standards), but he has been tactically astute with all the greater powers maneuvering around him.

    Saudi Arabia is probably the Muslim country that American liberals hate the most. (Pakistan is a distant runner-up.) This enmity is, in part, a reaction to the oddest and oldest American “partnership” in the Middle East, and the general and quite understandable feeling that the Saudi royal family never really came clean about its dealings with Osama bin Ladin before 9/11. Not even post-Sadat Egypt, which has developed a close working relationship with the American military and the CIA, has had the kind of access that the Saudis have had in Washington. Even after 9/11, during Bush’s presidency, Saudi preferences in personnel could reach all the way into the National Security Council. Saudi distaste for Peter Theroux, an accomplished Arabist and former journalist who wrote an entertaining, biting look into the kingdom in Sandstorms, published in 1990, briefly got him “unhired” from overseeing Saudi policy on the NSC for fear of Riyadh’s reaction. He got rehired when either Condoleezza Rice, the national security advisor, or her deputy, Stephen Hadley, realized that allowing Saudi preferences to affect personnel decisions within the White House was unwise and potentially very embarrassing. Given the Gaza war’s demolition of the Biden administration’s Middle Eastern policy, it’s not unlikely that we will see Saudi access in Washington rise again, perhaps rivaling its halcyon days during the Reagan administration. That would be a sharp irony. 

     Culturally speaking, no two countries had ever been further apart: Saudi Arabia still had a vibrant slave society in 1945 when Franklin Roosevelt began America’s relationship with the desert kingdom. Outside pressure, not internal debate among legal scholars and Saudi princes about evolving religious ethics and the holy law, obliged the monarchy to ban slavery officially in 1962. (Bad Western press aside, the Saudi royals may have been truly annoyed at French officials freeing slaves traveling with their masters on vacation.) Ibn Saud had over twenty wives, though the holy law allots a man no more than four at one time, and numerous concubines. When concubines became pregnant, they would usually ascend through marriage, while a wife would be retired to a comfortable and less competitive environment. By comparison, the thirty-seven-year-old crown prince today has only one wife and, if rumors are true, many mistresses — a less uxorious, more acceptable choice for a modern, ambitious man. 

    Roosevelt’s embrace of the Saudi monarchy established the ultimate realist relationship. The Saudi royals neither cared for America’s democratizing impulse, nor for its incessant conversations about human rights, nor for its ties to Israel, nor, after 1973 and the Saudi-engineered oil embargo that briefly gave the United States sky-rocketing prices and gas lines, for the American chatter in pro-Israel corners about the need to develop plans for seizing Saudi oil fields. Yet the Saudis loved the U.S. Navy and the long, reassuring shadow that it cast in the Middle East. Donald Trump’s embrace of Arabia, however much it may have horrified American liberals and amplified their distaste for the country and its ruling family, just highlighted, in Trump’s inimitably crude way, a bipartisan fact about Saudi-American relations: we buy their oil and they buy our armaments, technology, machinery, services, and debt. Barack Obama sold the Saudis over sixty-five billion dollars in weaponry, more than any president before or since. Both sides have occasionally wanted to make it more than that, to sugarcoat the relationship in more appealing ways. The more the Saudis, including the royals, have been educated in the United States, the more they have wanted Americans to like them. Americanization, even if only superficial, introduces into its victims a yearning for acceptance. Americans have surely been the worst offenders here, however, since they are far more freighted with moral baggage in their diplomacy and trade. They want their allies to be good as well as useful. 

    Although Americans have a knack for discarding the past when it doesn’t suit them, Saudi and American histories ought to tell us a few things clearly. First, that MBS’ offer to the United States is without strategic advantages. This is true even though Iran may have green-lighted, perhaps masterminded, the attack on October 7 in part to throw a wrench into the U.S.–Israeli–Saudi negotiations over MBS’ proposal.  Iranian conspiratorial fears always define the clerical regime’s analysis.  Its desire to veto its enemies’ grand designs is certainly real irrespective of whether it thought that Saudi–Israeli normalization was arriving soon or that MBS’ quest to develop a nuclear-weapons-capable atomic infrastructure needed to be aborted sooner rather than later. Iranian planning on the Gaza war likely started long before Biden administration officials and Netanyahu’s government started leaking to the press that normalization was “imminent”; it likely started before MBS’ vague suggestions of a defensive pact between Washington and Riyadh.  Leaks about diplomatic progress surrounding a coming deal, however, might have accelerated Iran’s and Hamas’ bloody calculations.  

    The crown prince’s nuclear aspirations, which have divided Israelis, caused serious indigestion in Washington, and compelled Khamenei’s attention, are not unreasonable given that domestic energy requirements for Saudi Arabia — especially the exponentially increasing use of air conditioning — could in the near future significantly reduce the amount of oil that Riyadh can sell. Nuclear energy would free up more petroleum for export and produce the revenue that MBS desperately needs to continue his grand plans. But it is also a damn good guess that MBS’ new attention to nuclear power plants has a lot to do with developing the capacity to build the bomb. Just across the Gulf, the Islamic Republic has effectively become a nuclear threshold state — the Supreme Leader likely has everything he needs to assemble an atomic arm; and the possibility is increasingly remote that either Biden or the Israeli prime minister (whoever that may be on any given day) is going to strike militarily before the clerical regime officially goes nuclear. And MBS, despite his occasional bravado on Iran and his undoubtedly sincere private desire to undermine the clerical regime, probably doesn’t want to deal with such a denouement. Given how much Netanyahu and most of the Israeli political class have wanted Saudi–Israeli normalization, and given how desperate the Biden administration has been to find stabilizing partners in the Middle East, which would allow the United States to continue its retrenchment, MBS could be forgiven for thinking, especially after October 7, that the sacred creed of non-proliferation might well give way to his atomic ambitions.

    The Saudis were never brave when they were focused single-mindedly on building their frangible oil industry; now they have vast installations, which the Iranians easily paralyzed in 2019 with a fairly minor drone and cruise missile attack. The same vulnerability obtains for the crown prince’s NEOM projects, which the Iranians, who have the largest ballistic- and cruise-missile force in the Middle East, could severely damage — probably even if the Saudis spend a fortune on anti-missile defense. MBS came in like a lion on the Islamic Republic, attracting the attention and the affection of Israelis and others; it’s not at all unlikely that he has already become a lamb, actually less stout-hearted than his princely predecessors who, in a roundabout way, via a motley crew of characters, using the CIA for logistical support, took on the Soviet Union in Afghanistan. 

    Still, MBS would want to plan for contingencies. Having nuclear weapons is better than not having them. A Saudi bomb might check Persian predation. And the Saudis are way behind. They have neither the engineers nor the physicists nor the industrial base. And the odds are excellent that the Pakistanis, who though indebted to the Saudi royal family are quite capable of stiffing them, haven’t been forthcoming: they are not going to let the Saudis rent an atomic weapon. And the Russians and the Chinese might not want to give the Saudis nuclear power: it would add another layer of complexity and tension to their relations with the Islamic Republic. The Europeans and the Japanese are unlikely to step into such a hot mess. Getting nuclear technology from the Americans would be vastly better. Another way, as Ajami put it, to supplement Boeing and AT&T. 

    Failing on the atomic front, MBS might intensify his dangle of an Israeli embassy in Riyadh — to see what he can get even if he has no intention of recognizing the Jewish state.  The Gaza war certainly increases his chances that he can get both the Israelis and the Americans to concede him a nuclear infrastructure with local uranium enrichment. The war makes it less likely, however, that he would want to tempt fate anytime soon by recognizing Israel.  Normalization gains him nothing internally among all those Saudis who religiously or politically may have trouble with a Saudi flag — which has the Muslim shahâda, the profession of faith, boldly printed on a green heavenly field — flying in Zion. And the Star of David in Riyadh could be needlessly provocative even to the crown prince’s much-touted young, hitherto apolitical, supporters. The Palestinian cause, which most of the Israeli political elite thought was a fading clarion call, has proven to have astonishing resonance.  

    But even if MBS is still sincere about this big pitch, neither Washington nor Jerusalem should be tempted. It gives the former nothing that it does not already have. It offers the Israelis far less than what Netanyahu thinks it does. Despite MBS’ grand visions for where his country will be in the middle of the century, Saudi Arabia is actually a far less consequential kingdom than it was in 1973, when it was at the pinnacle of its oil power, or in 1979, when it collided with the Iranian revolution and was briefly on the precipice after the Sunni religious revolutionary Juhayman al-Otaybi, his messianic partner Muhammad Abdullah al-Qahtani, and five hundred well-armed believers took Mecca’s Grand Mosque and shook the dynasty to its core. 

    Religiously, among Muslims, Saudi Arabia hasn’t been a bellwether for decades. Its generous funding for Salafi causes undoubtedly fortified harsher views across the globe, especially in Indonesia and Western Europe, where so many dangerous Islamic radicals were either born or trained. Religious students and clerics raised in the eclectic but formal traditions of the Al-Azhar seminary in Cairo, for example, would go to Saudi Arabia on well-paid fellowships and often come back to Egypt as real Hanbali-rite killjoys. Saudi Arabia helped to make the Muslim world more conservative and less tolerant of Muslim diversity. But relatively few Saudi imams were intellectually on the cutting edge, capable of influencing the radical set outside of Arabia. And it was the radical set outside of Saudi Arabia who mattered most, especially in Egypt, where the politically subservient dons of Al-Azhar lost control and relevance for Muslims who were growing uncomfortable with Egypt’s Westernization, first under the British and then, even more depressingly, under the military juntas of Gamal Abdel Nasser and Anwar Sadat. 

    In Saudi Arabia, most of the radical Salafi imams were either in trouble with the Saudi authorities, in exile, or in jail. Saudi royals were once big fans of the Egyptian-born Muslim Brotherhood because it was hostile to all the leftist ideologies storming the Middle East. Only later did they realize that these missionaries were irreversibly populist and anti-monarchical. Saudi religious funding was like that old publishing theory — throw shit against the wall and see what sticks. The operative assumptions were that more religious Muslims were better than less religious, and that more religious Sunni Muslims would be hostile to Iran’s revolutionary call, and that more religious Sunni Muslims would be more sympathetic to Saudi Arabia. Who preached what and where was vastly less important. There were a lot of problems with every one of those assumptions, which the Saudi royal family realized long before the coming of MBS. But inertia is infamously hard to stop. 

    For most faithful Muslims today, Saudi Arabia isn’t intellectually and spiritually important. What happened to Al-Azhar in Egypt — its intellectual and jurisprudential relevance declined as it became more subservient to the Westernizing Egyptian military — has been happening in Saudi Arabia for at least thirty years. MBS is intensifying this process. Westerners may cheer him on as he tries to neuter Islam as the defining force within Saudi society, but internationally it makes Saudi Arabia a less consequential state. Saudi Arabia is not a model of internal Islamic reform; it is merely another example of a modernizing autocrat putting Islam in its place — behind the ruler and the nation. The dictatorial impact on religion can be profound: it can reform it, it can radicalize it, it can do both at the same time. 

    To stay with the Egyptian parallel: Anwar Sadat visited Jerusalem and opened an embassy in Tel Aviv. He and his military successors have slapped the hands of Al-Azhar’s rectors and teachers when they challenged the legitimacy of the Egyptian–Israeli entente. (Conversely, they haven’t stopped, and they have often subsidized, the perpetual anti-Semitism of Egyptian media, film, and universities.) The peace between Egypt and Israel has obviously been beneficial to both countries, but religiously it made little positive impact on Muslims within Egypt or abroad. When a Muslim Brother, Mohammad Morsi, won Egypt’s first — and so far only — free presidential election in 2012, Israelis and Americans were deeply concerned that he would sever relations with Israel. That didn’t happen, but the fears were understandable; Israel’s popular acceptance within Egyptian society remains in doubt. 

    If, after the Gaza war, MBS deigns to grant Israel diplomatic relations, it won’t likely make faithful Muslims, or even secular Muslims, more inclined to accept the Jewish state. It might do the opposite, especially inside Saudi Arabia. The Saudi royal family’s control over the two holiest sites in Islam — Mecca and Medina — makes little to no difference in how that question is answered. Such custodianship confers prestige and obliges the Saudi royal family, at least in the holy cities, to maintain traditional standards. In the past, before modern communications, it allowed Muslims and their ideas to mix. But it absolutely does not denote superior religious authority — no matter how much Saudi rulers and their imams may want to pretend that by proximity to the sacred spaces they gain religious bonus points. One often gets the impression from Israelis that they are in a time warp with respect to Saudi Arabia, that for them it is still the mid-1970s and Riyadh’s recognition would effectively mark an end to their Muslim travails. The Gaza war ought to inform Israelis that the profoundly emotive Islamic division between believer and non-believer and the irredentist claims of Palestinian Muslims against the Jewish state do not lessen because a Saudi crown prince wants to establish a less hostile modus vivendi with Israel and Jews in general.  

     Perhaps above all else, the Israelis should want to avoid entangling themselves too closely with MBS in the minds of Americans, especially those on the left, who are essential to bipartisan support for Jerusalem. It’s an excellent bet that MBS’ dictatorship will become more — likely much more — oppressive and contradictory in its allegiances. What Safran noted about Saudi behavior from 1945 to 1985 — “attempts to follow several contradictory courses simultaneously, willingness to make sharp tactical reversals; and limited concern with the principle of consistency, either in reality or in appearance” — has already happened with MBS. This disposition will probably get much worse. And Americans aren’t Israelis, who never really see political promise in Muslim lands (neither Islamic nor Israeli history encourages optimism). The choice, in their minds, is between this dictator or that one — and never, if possible, the worst-case scenario: Muslims freely voting and electing Islamists who by definition don’t exactly kindle to Israel or the United States. Americans are much more liberal (in the old sense of the word): for them, autocracies aren’t static, they inevitably get worse until they give way to revolution or elected government. Even realist Americans are, compared to Israelis, pretty liberal. And the principal reason that the United States has so steadfastly supported the Jewish state since 1948 is that it is a vibrant democracy, however troubled or compromised it might be by the Palestinian question or by its own internal strains of illiberalism and political religion. When Israelis and their American supporters tout MBS as a worthwhile ally, they are diminishing Israel’s democratic capital. If MBS really thought diplomatic relations were in his and Saudi Arabia’s self-interest, there would already be a fluttering Star of David in Riyadh. The wisest course for Israelis is to reverse engineer MBS’ hesitation into a studied neutrality.  

    Religion and mores aside, closer relations with Saudi Arabia will not assist America or Israel against their principal Middle Eastern concern: the Islamic Republic of Iran. In 2019, when Iran decided to fire cruise missiles and drones at Saudi Aramco processing plants in Abqaiq and Khurais, briefly taking out nearly half of the country’s oil production, MBS did nothing, except turn to the Americans plaintively. The Emirates, whose ambassador in Washington gives first-rate anti-Iran dinner parties, sent an emissary to Tehran, undoubtedly to plead neutrality and promise to continue to allow the clerical regime the use of Dubai as a U.S.-sanctions-busting entrepôt. The two Sunni kingdoms had purchased an enormous amount of Western weaponry to protect themselves against the Islamic Republic. The Saudi and Emirati air forces and navies are vastly more powerful than their Iranian counterparts. And still they would not duel. They lack what the clerical regime has in abundance: a triumphant will fueled by success and a still vibrant ideology.  

    The Saudis know, even if MBS’ public rhetoric occasionally suggests otherwise, that the Islamic Republic, even with all its crippling internal problems, is just too strong for them. Iran’s eat-the-weak clerics and Revolutionary Guards, who fought hard and victoriously in Syria’s civil war, became the kingmakers in Iraq, and successfully radicalized and armed the Shiite Houthis of Yemen, can always out-psych the Saudi and Nahyan royals. Also, Trump didn’t help. When he decided not to respond militarily to the cruise-missile-and-drone attacks, plus the ones on merchant shipping in the Gulf of Oman three months earlier (he boldly tweeted that the United States was “locked and loaded”), he trashed whatever was left of the Carter Doctrine. Washington had assumed responsibility for protecting the Persian Gulf after the British withdrawal in 1971. There is a direct causal line from the Trump administration’s failure in 2019, through its “maximum pressure” campaign that eschewed military force in favor of sanctions, through the Biden administration’s repeated attempts to revive Barack Obama’s nuclear deal, through the White House’s current see-no-redline approach to Iran’s ever-increasing stockpile of highly-enriched uranium, to MBS’ decision to turn toward the Chinese. 

    It is brutally difficult to imagine scenarios in which the Saudis could be a military asset to the United States in the Persian Gulf or anywhere else in the Middle East. The disaster in Yemen, for which the Iranians and the Houthis are also to blame, tells us all that we need to know about Saudi capacity. Even with American airmen, sailors, and intelligence officers in Saudi command centers doing what they could to inform and direct Saudi planes, Saudi pilots routinely bombed the right targets poorly and the wrong targets (civilians) well. The Saudis mobilized tens of thousands of troops for the campaign, but it’s not clear that they actually did much south of the border. (Small special forces units appear to have fought and held their own.) The UAE’s committed forces, which once numbered around four thousand on the ground, did much more, but they quickly discovered that the coalition gathered to crush the Houthis, who had far more unity and purpose, simply didn’t work. The UAE started hiring mercenaries and pulling its troops out of combat. MBS, who was then the Saudi defense minister and a pronounced hawk, started blaming the UAE, which blamed the Saudis. 

    If we are lucky, the Yemen expedition has actually taught the crown prince a lesson: that his country, despite its plentiful armaments, is not much of a military power and can ill-afford boldness. If any future American-Iranian or Israeli-Iranian clash happens, we should not want the Saudis involved. Similarly, we should not want an entangling alliance with Riyadh — a NATO in the Persian Gulf — because it will give us little except the illusion that we have Arab partners. We also shouldn’t want it because such an alliance could harm, perhaps irretrievably, the kingdom itself if it re-animated MBS’ martial self-confidence. 

    About Saudi Arabia, the China-first crowd might actually be more right than wrong. Elbridge Colby, perhaps the brightest guru among the Trump-lite aim-towards-Beijing Republicans, thinks that small detachments of the Air Force and Navy deployed to the Persian Gulf will be enough to forestall seriously untoward actions by a nuclear Iran, China, and Russia. Yet the Gaza war has shown that when the United States seriously misapprehends the Middle East, when it sees the Islamic Republic as a non-revolutionary power with diminished Islamist aspirations and malevolent capacity, Washington may be obliged to send two aircraft-carrier groups to the region to counter its own perennial miscalculations. Colby’s analysis has the same problem in the Middle East that it does with Taiwan: everything depends on American willingness to use force. A United States that is not scared to project power in the Middle East, and is not fearful that every military engagement will become a slippery slope to a “forever war,” would surely deter its enemies more convincingly and efficiently. The frequent use of force can prevent larger, uglier wars that will command greater American resources.  

    If Washington’s will — or as the Iranians conceive it, haybat, the awe that comes with insuperable power — can again be made credible, then at least in the Persian Gulf Colby is right. It doesn’t take a lot of military might to keep the oil flowing. For the foreseeable future, no one there will challenge the US Air Force and Navy, unless we pull everything to the Pacific. We don’t need to pledge anything further to MBS, or to anyone else in the region, to protect the global economy. And the global market in oil still has the power to keep MBS, or anyone else, from ratcheting the price way up to sustain grand ambitions or malevolent habits. 

    It is an amusing irony that Professor Safran, an Israeli-American who tried to help the American government craft a new approach to Saudi Arabia in the 1980s (and got into trouble for it when it was discovered that some of his work had received secret CIA funding), foresaw the correct path for Saudi Arabia today when, forty years ago, he assessed its missionary Islamic conservatism, anti-Zionist reflexes, and increasing disposition to maximize oil profits regardless of the impact on Western economies. He advised that “America’s long-term aim should be to disengage its vital interests from the policy and fate of the Kingdom. Its short-term policy should be designed with an eye to achieving that goal while cooperating with the Kingdom in dealing with problems on a case-by-case basis and advancing shared interests on the basis of reciprocity.” In other words, we should treat the kingdom in the way that it has treated us. In still other words, Roosevelt was right: with the Saudis, it should be transactional. Nothing more, nothing less. Day in, day out.

     

     

    The Logical One Remembers

    “I’m not irrational. But there’ve been times

    When I’ve experienced—uncanniness:

    I think back to those days, when, four or five,

    I dreaded going to bed, because I thought

    Sleep really was a ‘dropping off.’ At night

    Two silver children floated up from somewhere

    Into the window foiled with dark, a boy

    And girl. They never spoke. But they arose

    To pull me with them, down, into the black

    That brewed deep in the basement. I would sink

    With them to float in nothingness. Each night

    For a year, or maybe two. It was a dream

    You’ll say, just a recurring dream. It’s true.

    (And yet I was awake when they came through.)”

    The Slug

    Everything you touch

    you taste. Like moonlight you gloss

    over garden bricks,

     

    rusty chicken wire,

    glazing your trail with argent

    mucilage, wearing

     

    your eyes on slender

    fingers. I find you grazing

    in the cat food dish

     

    waving your tender

    appendages with pleasure, 

    an alien cow.

     

    Like an army, you 

    march on your stomach. Cursive,

    you drag your foot’s font.

     

    When I am salted

    with remorse, saline sorrow,

    soul gone leathery

     

    and shriveled, teach me

    how you cross the jagged world

    sans helmet, pulling

     

    the self’s nakedness

    over broken glass, and stay

    unscathed, how without

     

    haste, secretive, you

    ride on your own shining, like 

    Time, from now to now.

    The Cloud

    I used to think the Cloud was in the sky,

    Something invisible, subtle, aloft:

    We sent things up to it, or pulled things down

    On silken ribbons, on backwards lightning zaps.

    Our photographs, our songs, our avatars

    Floated with rainbows, sunbeams, snowflakes, rain.

    Thoughts crossed mid-air, and messages, all soft

    And winking, in the night, like falling stars.

     

    I know now it’s a box, and acres wide,

    A building, stories high. A parking lot

    Besets it with baked asphalt on each side.

    Within, whir tall machines, grey, running hot.

    The Cloud is windowless. It squats on soil

    Now shut to bees and clover, guzzling oil.

    Wind Farm

    I still remember the summer we were becalmed:

    No breezes rose. The dandelion clock

    Stopped mid-puff. The clouds stood in dry dock.

    Like butterflies, formaldehyde embalmed,

     

    Spring kites lay spread out on the floor, starched flat.

    Trees kept their counsel, grasses stood up straight

    Like straight pins in a cushion, the wonky gate

    That used to bang sometimes, shut up, like that.

     

    Our ancestors, that lassoed twisty tails

    Of wild tornadoes, teaching them to lean

    In harness round the millstone — they would weep

     

    At all the whirlwinds that we didn’t reap.

    I lost my faith in flags that year, and sails:

    The flimsy evidence of things unseen.

    The Wise Men

    Matthew 2:7-12

    Summoned to the palace, we obeyed.

    The king was curious. He had heard tell

    Of strangers in outlandish garb, who paid

    In gold, although they had no wares to sell.

    He dabbled in astrology and dreams:

    Could we explain the genesis of a star?

    The parallax of paradox — afar

    The fragrance of the light had drawn us near.

    Deep in the dark, we heard a jackal’s screams

    Such as, at lambing time, the shepherds fear.

    Come back, he said, and tell me what you find,

    Direct me there: I’ll bow my head and pray.

    We nodded yes, a wisdom of a kind,

    But after, we slipped home by another way.

    The Anti-Liberal

    Last spring, in The New Statesman, Samuel Moyn reviewed Revolutionary Spring, Christopher Clark’s massive new history of the revolutions of 1848. Like most everything Moyn writes, the review was witty, insightful, and provocative — another illustration of why Moyn has become one of the most important left intellectuals in the United States today. One thing about it, though, puzzled me. In the Carlyle lectures that he delivered at Oxford the year before, now published as Liberalism Against Itself, Moyn argued that liberalism was, before the Cold War, “emancipatory and futuristic.” The Cold War, however, “left the liberal tradition unrecognizable and in ruins.” But in the New Statesman review, Moyn claimed that liberals had already lost their way a full century before the Cold War. “One lesson of Christopher Clark’s magnificent new narrative of 1848,” he wrote, “is a reminder of just how quickly liberals switched sides…. Because of how they lived through 1848, liberals betrayed their erstwhile radical allies to join the counter-revolutionary forces once again — which is more or less where they remain today.”

    Perhaps the contradiction is not so puzzling. Much like an older generation of historians who seemed to glimpse the “rise of the middle classes” in every century from the thirteenth to the twentieth, Samuel Moyn today seems to find liberals betraying their own traditions wherever he looks. Indeed, this supposed betrayal now forms the leitmotif of his influential writing.

    This was not always the case. The work that first made Moyn’s reputation as a public intellectual, The Last Utopia, in 2010, included many suggestive criticisms of liberalism, but was a subtle and impressive study that started many more conversations than it closed off. Yet in a subsequent series of books, from Christian Human Rights (2015), through Not Enough (2018) and Humane (2021), and most recently Liberalism Against Itself: Cold War Intellectuals and the Making of Our Times, Moyn has used his considerable talents to make increasingly strident and moralistic critiques of contemporary liberalism, and to warn his fellow progressives away from any compromises with their “Cold War liberal” rivals. In particular, as he has argued in a steady stream of opinion pieces, his fellow progressives should resist the temptation to close ranks with the liberals against the populist right and Donald Trump. Liberalism has become the principal enemy, even as his definition of it has come to seem a figure of increasingly crinkly straw.

    Moyn does offer reasons for his critical focus. As he now tells the story, the liberalism born of the Enlightenment and refashioned in the nineteenth century was capacious and ambitious, looking to improve the human condition both materially and spiritually, and not merely to protect individual liberties. It was not opposed to socialism; in fact, it embraced many elements of socialism. But that admirable liberalism has been repeatedly undermined by backsliding moderates who, out of fear that overly ambitious and utopian attempts to better the human condition might degenerate into tyranny, stripped it of its most attractive features, redefined it in narrow individualistic terms, and all too readily allied with reactionaries and imperialists. The logical twin endpoints of these tendencies, in Moyn’s view, are neoconservatism and neoliberalism: aggressive American empire and raging inequalities. His account of liberalism is a tale of villains more than heroes.

    This indictment is sharp, and it is persuasive in certain respects, but it is also grounded in several very questionable assumptions. Politically, Moyn assumes that without the liberals’ “betrayal,” radicals and progressives would have managed to forge far more successful political movements, perhaps forestalling the triumph of imperial reaction after 1848, or of neoliberalism in the late twentieth century. Historically, Moyn reads the ultimate failure of Soviet Communism back into its seventy-year lifespan, as if its collapse were inevitable, and therefore assumes that during the Cold War the liberals “overreacted,” both in their fears of the Soviet challenge and in their larger concerns as to the pathological directions that progressive ideology can take.

    In addition, Moyn, an intellectual historian, not only attributes rather more influence to intellectuals than they may deserve, but also tends to engage with the history of actual liberal politics only when it supports his thesis. He has had a great deal to say about foreign policy and warfare, but much less about domestic policy, in the United States and elsewhere. He has therefore largely sidestepped the inconvenient fact that at the height of the Cold War, at the very moment when, according to his work, “Cold War liberals” had retreated from liberalism’s noblest ambitions, liberal politics marked some of its greatest American successes: most notably, civil rights legislation and Lyndon Johnson’s Great Society. The key moment for the ascent of neoconservatism and neoliberalism in both the United States and Britain came in the 1970s, long after the start of the Cold War, and had just as much to do with the perceived failures of the modern welfare state as with the Soviet threat.

    Finally, Moyn has increasingly tended to reify liberalism, to treat it as a coherent and unified and powerful “tradition,” almost as a kind of singular political party, rather than as what it was, and still is: an often inchoate, even contradictory collection of thinkers and political movements. Doing so allows him to argue, repeatedly, that “liberalism” as a whole could make mistakes, betray its own past, and still somehow drag followers along while foreclosing other possibilities. As he put it bluntly in a recent interview, “Cold War liberalism… destroyed the potential of liberalism to be more believable and uplifting.” If only “liberalism” had not turned in this disastrous direction, Moyn claims, it could have defended and extended progressive achievements and the welfare state — weren’t they themselves achievements of American liberalism? — rather than leaving these things vulnerable to the sinister forces of market fundamentalism. Whether better liberal arguments would have actually done much to alter the directions that the world’s political economy has taken over the past half century is a question that he largely leaves unasked.

    Moyn today presents himself as a genuine Enlightenment liberal seeking to redeem the movement’s potential and to steer it back onto the path that it too easily abandoned. The assumptions he makes, however, allow him to blame the progressive left’s modern failures principally on its liberal opponents rather than on its own mistakes and misconceptions and the shifts it has made away from agendas that have a hope of rallying a majority of voters. Not surprisingly, this is a line of argument that has proven highly attractive to his ideological allies, but at the cost of helping to make serious introspection on their part unnecessary. They do not need to ask why they themselves have not produced an alternative brand of liberalism that might challenge the Cold War variety for intellectual ambition and rigor, and also enjoy broad electoral support. Moyn’s is a line of argument that also, in the end, fails to acknowledge just how difficult it is to improve the human condition in our fallen world, and how easily the path of perfection can go astray, towards horror.

    At the start it was not clear that Moyn’s work would take such a turn. After receiving both a doctorate in history and a law degree, he first made a scholarly reputation with a splendid study of the philosopher Emmanuel Levinas, and with essays on French social and political theory (especially the work of Pierre Rosanvallon, a major influence on his thought). His polemical side came out mostly in occasional pieces, notably his sharp and witty book reviews for The Nation. In one of those, in 2007, he took aim at contemporary human rights politics, calling them a recently invented “antipolitics” and arguing that “human rights arose on the ruins of revolution, not as its descendant.”

    Three years later Moyn published a book, The Last Utopia: Human Rights in History, which elaborated on these ideas, but in a careful and sometimes even oblique manner. In it, he sought to recast both the historical and political understandings of human rights. Historically, where most scholars had given rights doctrines a pedigree stretching back to the Enlightenment or even further, Moyn stressed discontinuity, and the importance of one recent decade in particular: the 1970s, when “the moral world of Westerners shifted.” Until this period, he argued, rights had had little meaning outside the “essential crucible” of individual states. Only with citizenship in a state did people gain what Hannah Arendt had called “the right to have rights.” The idea of human rights independent from states, enshrined in international law and enforceable across borders, only emerged with salience after the end of European overseas empires and with the waning of the Cold War. Politically, Moyn saw the resulting “last utopia” of human rights as an alluring but ultimately unsatisfactory substitute for the more robust left-wing utopias that had preceded it. He worried that the contemporary rights movement had been “hobbled by its formulation of claims as individual entitlements.” Was the language of human rights the best one in which to address issues of global immiseration? Should such a limited program be invested with such passionate, utopian hopes? Moyn had his doubts.

    “Liberalism” did not appear in the index to The Last Utopia, but it had an important presence in the book nonetheless. To be sure, the romantic utopianism that Moyn attributed to the architects of contemporary international human rights politics distinguished them from the hard-headed, disillusioned “Cold War liberals” whom he would later criticize in Liberalism Against Itself. But in the United States, the most prominent of these architects were liberal Democratic politicians such as Jimmy Carter. Discussing the relationship between human rights doctrines and decolonization, Moyn wrote that “the loss of empire allowed for the reclamation of liberalism, including rights talk, shorn of its depressing earlier entanglements with oppression and violence abroad” (emphasis mine). And his suggestion that human rights advocates downplayed social and economic rights echoed critiques that socialists had made of liberals since the days of Karl Marx.

    The Last Utopia already employed a style of intellectual history that focused on the way different currents of ideas competed with and displaced each other, rather than placing these ideas in a broader political context. The book spent relatively little time asking what human rights activism since the 1970s had actually accomplished. Moyn instead concentrated on what had made it successful as a “motivating ideology.” And so, while he admitted that human rights “provided a potent antitotalitarian weapon,” he adduced figures such as Andrei Sakharov and Vaclav Havel more to trace the evolution of their ideas than to assess their role in the collapse of communism.

    This way of telling the story gave Moyn a way to explain the failures of left-wing movements in the late twentieth century without dwelling on the crimes and the tragedies of communism. If so many on the left had succumbed to the siren song of human rights, he suggested, it was because, in the 1970s, the alternatives — for instance, Eurocommunism or varieties of French gauchisme — amounted to pallid, unattractive “cul-de-sacs.” Discussing the French “new philosopher” André Glucksmann’s move away from the far left, Moyn wrote that his “hatred of the Soviet state soon led him to indictments of politics per se.” Left largely unsaid were the reasons for Glucksmann’s entirely justified hatred, or any indication that the Soviet Union represented a terrible cautionary tale: a dreadful example of what can happen when overwhelming state power is placed in the service of originally utopian goals.

    But overall, The Last Utopia remained ambivalent about human rights doctrines, and left itself open to varying ideological interpretations. The conservative political scientist Samuel Goldman praised it in The New Criterion, while in The New Left Review the historian Robin Blackburn blasted Moyn for downplaying the Clinton administration’s use of human rights rhetoric to justify interventions in the former Yugoslavia. Many other reviewers focused entirely on Moyn’s historical thesis, and did not engage with his political arguments at all. 

    In his next two books Moyn did much to sharpen and clarify his political stance, even while continuing to concentrate on the story of human rights. In Christian Human Rights, he traced the modern understanding of the subject back to Catholic thinkers of the mid-twentieth century. The book was deeply researched, treating intellectuals such as Jacques Maritain with sensitivity, and it had illuminating things to say about how Catholic thinkers developed concepts of human dignity in response to the rise of totalitarian regimes. But Moyn spelled out his overall thesis quite bluntly: “It is equally if not more viable to regard human rights as a project of the Christian right, not the secular left.” In other words, not only was human rights activism an ultimately unsatisfactory substitute for more robust progressive policies; its origins lay in an unexpected and unfortunate convergence between well-meaning liberals and conservative, even reactionary Christian thinkers. This was an interpretation that seemed to embrace contrarianism for its own sake, and implied unconvincingly that the genealogy of ideas irremediably tarred their later iterations. 

    In Not Enough: Human Rights in an Unequal World, Moyn then returned to the political arguments he had first sketched out in The Last Utopia, but ventured far more explicit criticisms of organizations such as Amnesty International and Human Rights Watch for embracing only a minimal conception of social and economic rights. Human rights, he charged, have “become our language for indicating that it is enough, at least to start, for our solidarity with our fellow human beings to remain weak and cheap.” He resisted calling human rights a simple emanation of neoliberal market fundamentalism, but argued that the two coexisted all too easily. In this book, allusions to neoliberalism far outnumbered those to liberalism tout court. Still, Moyn revealingly cast the entire project as a response to the “self-imposed crises” of “liberal political hegemony,” and the need to explore “the relevance of distributive fairness to the survival of liberalism.”

    The book also went much further than The Last Utopia in exploring what Moyn presented as alternatives to human rights activism. Notably, he considered the New International Economic Order proposed in the 1970s, by which poor nations from the global south would have joined together in a cartel to raise commodity prices. “In almost every way,” Moyn wrote, “the NIEO was… the precise opposite of the human rights revolution.” It prioritized social and economic equality rather than mere sufficiency, and it sought to enlist the diplomatic power of states, rather than the publicity efforts of NGOs, to advance its goals. It was not clear from his account, though, why progressives could not have pushed for global economic redistribution and human rights at the same time. Moyn also sidestepped the fact that while the NIEO might have represented a theoretical alternative path, it was never a realistic one. The proposals went nowhere, and not because of any sort of ideological competition from human rights, but thanks to steadfast opposition from the United States and other industrialized nations. And while supporters claimed that under the NIEO states could redistribute the resulting wealth to their citizens, it was not obvious, to put it mildly, that ruling elites in those states would actually follow through on that promise. The history of systemic corruption in too many states of the global south is not encouraging. 

    In Humane, Moyn finally moved away from the subject of human rights. This book promised, as the subtitle put it, to examine “how the United States abandoned peace and reinvented war.” While it purported to study the moral and strategic impact of new technologies of war, in practice it focused more narrowly on a subject Moyn knows better: the jurisprudence of war. Since the Middle Ages, legal scholars have generally organized this subject around two broad issues: what constitutes a just cause for war, and how to conduct a war, once started, in a just fashion. Moyn argued that in the twenty-first century United States, the second of these, known by the Latin phrase jus in bello, has almost entirely displaced the first, known as jus ad bellum. Americans today endlessly argue about how to fight wars in a humane fashion, and in the process have stopped talking about whether they should fight wars in the first place. The result has been to both facilitate and legitimize a descent into endless war on behalf of American empire, a “forever war” waged “humanely” with drones and precision strikes and raids by special forces replacing the carpet bombing of earlier times, a war the public has ceased to notice or care about.

    Moyn himself insisted that this shift to jus in bello had gone in exactly the wrong direction. Taking Tolstoy as his guide and inspiration, he argued that we should direct our energies squarely towards peace, since the very notion of humane war comes close to being an absurd contradiction. He quoted Prince Andrei Bolkonsky, Tolstoy’s great character from War and Peace: “They talk to us of the rules of war, of mercy to the unfortunate. It’s all rubbish… If there was none of this magnanimity in war, we should go to war only when it was worthwhile going to certain death.” In his conclusion, Moyn even suggested that war would be just as evil if fought entirely without bloodshed, because it would still allow for the moral subjugation of the adversary. “The physical violence is not the most disturbing thing about it.” The writer David Rieff memorably riposted, after recalling a particularly bloody moment that he experienced during the siege of Sarajevo: “no, sorry, the best way to think about violence is not metaphorically, not then, not now, not ever.” 

    As in The Last Utopia, Moyn did not blame liberals directly for the shift he was tracking. But, again, his most sustained criticism was directed at liberals whose focus on atrocities such as Abu Ghraib supposedly led them to disregard the greater evil of the war itself. He wrote with particular pathos about the lawyers Michael Ratner and Jack Goldsmith, whose attempts to rein in the Bush Administration’s conduct of the wars in Iraq and Afghanistan unintentionally “led the country down a road to… endless war… Paved with their good intentions, the road was no longer to hell but instead horrendous in a novel way.” Moyn meanwhile reserved “the deepest blame for the perpetuation of endless war” for that quintessential liberal, Barack Obama. 

    Humane adopted many of the same methods of Moyn’s earlier work and suffered from some of the same weaknesses. Once again, he cast his story essentially as one of competing ideologies: one aimed at humanizing war, the other aimed at ending it altogether. Apparently, you had to choose a side. Moyn did concede that opponents of the Iraq War highlighted American atrocities so as to delegitimize the war as a whole (they “understood very well,” as he put it, a little snidely, “that it was strategic to make hay of torture”). But noting that this tactic failed to block David Petraeus’ “surge,” Moyn concluded that “it backfired as a stratagem of containing the war.” It is an odd argument. Would the opponents of the war have done better if they had not highlighted the atrocities of Abu Ghraib, and simply continued to stress the Iraq war’s overall injustice? Moreover, the stated purpose of the surge was to bring the increasingly unpopular Iraq conflict — unpopular in large part because of the Abu Ghraib revelations — to a swift conclusion and to make possible the withdrawal of American forces.

    Like The Last Utopia, Humane also paid relatively little attention to events on the ground, as opposed to the discourse about them. Already by 2021, when Moyn published it, both major political parties in the United States had lost nearly all their appetite for overseas military adventures, even of the humane variety. Since then, the idea that the Bush and Obama administrations locked the United States into endless war has come to seem even less realistic. In 2021, President Biden incurred a great humanitarian and political cost by abandoning Afghanistan to the Taliban, but he never considered reversing course. Today those authors who still believe the United States is fighting a forever war have been forced to contend that aid to Ukraine is somehow the present-day equivalent of the Iraq War. It is an absurd and convoluted position. The only way to construe the conflict as anything but Russian aggression is to imagine that Vladimir Putin, one of the cruelest and most corrupt dictators the world has seen since the days of Stalin and Hitler, does not act on his own initiative, but only in reaction to American pressure. Does anyone seriously think that if the United States had not forcibly expanded NATO (or rather, had NATO not agreed to the fervent requests of former Communist bloc countries to be admitted), Putin would be peacefully sunning himself in Sochi?

    By the time Moyn published Humane, Donald Trump was in office, and loud arguments were being made that progressives, liberals, and decent conservatives needed to put aside their differences, close ranks, and concentrate on defending against the unprecedented threat that the new president posed to American democracy. Moyn would have none of it. In a series of opinion pieces, he argued that the danger was overblown. “There is no real evidence that Mr. Trump wants to seize power unconstitutionally,” he wrote in 2017, “and there is no reason to think he could succeed.” In 2020 he added that obsessing about Trump, or calling him a fascist, implied that America’s “long histories of killing, subjugation, and terror… mass incarceration and rising inequality… [are] somehow less worth the alarm and opprobrium.” Uniting against Trump distracted from the fact that America, thanks to its neoliberal inequalities and endless wars, itself “made Trump.” But America’s failures, and their very real role in generating the Trump phenomenon, say nothing at all about whether Trump poses a threat to democracy. Moyn’s dogged insistence on characterizing Trump as a distraction, meanwhile, led him to ever less realistic predictions. “If, as seems likely, Joe Biden wins the presidency,” he wrote in 2020, “Trump will come to be treated as an aberration whose rise and fall says nothing about America, home of antifascist heroics that overcame him just as it once slew the worst monsters abroad.” Not exactly.

    Today, with American democracy more troubled than ever, it would seem the moment for liberals and progressives to unite around a forward-looking program that can bring voters tempted by Trumpian reaction back together with the Democratic Party electorate. Samuel Moyn could have helped to craft such a program in our emergency. He is certainly not shy about offering prescriptions for what ails American politics. But these prescriptions are not ones that have a chance of winning support from moderate liberals, still less of ever being enacted. They tend instead towards a quixotic radical populism. In the New York Times in 2022, for instance, Moyn and co-author Ryan Doerfler first chided liberals for placing too much hope in the courts and thereby embracing “antipolitics”; in this, the authors confused the anti-political with the anti-democratic — courts are the latter, not the former. Then the piece bizarrely suggested that Congress might simply defy the Constitution and unilaterally assert sovereign authority in the United States, stripping the Supreme Court and Senate of most of their power and eliminating the Electoral College. This is a program that Donald Trump might well approve of, and it evinces a faith in the untrammeled majority that the history of the United States does not support, to say the least.

    That was just an opinion piece, of course. Moyn’s principal work remains in the realm of history, where the goal is above all to show how we got into our present mess, rather than offering prescriptions for getting us out of it. But histories, too, can offer positive suggestions and point to productive roads not taken. In Moyn’s most recent book, unfortunately, these elements remain largely undeveloped. Instead he concentrates on casting blame, and not in a convincing manner. 

    Liberalism Against Itself follows naturally from Moyn’s earlier work. Once again, the story is one of binary choices, and how certain influential figures made the wrong one. Just as a narrow conception of human rights was chosen over a more capacious one of social and economic rights, and jus in bello over jus ad bellum, now the story is about how a narrow “liberalism of fear” (in Judith Shklar’s famous and approving phrase) prevailed over a broad and generous and older Enlightenment liberalism. Once again, Moyn attributes enormous real-world influence to a set of complex, even recondite intellectual debates. And once again, there is surprisingly little attention to the broader historical and political context in which these debates took place, which allows him to cast the choices made as not only wrong, but as virtually perverse. The result is an intense, moralizing polemic, which has already received rapturous praise from progressive reviewers — not surprisingly, because it relieves progressives so neatly of responsibility for the left’s failures over the past several decades. But how persuasive is it, really?

    Liberalism Against Itself takes as its subject six twentieth-century intellectuals, all Jewish, four of them forced from their European birthplaces during the continent’s decades of blood: Judith Shklar, Isaiah Berlin, Karl Popper, Gertrude Himmelfarb, Hannah Arendt, and Lionel Trilling. At times, the book seems to place the responsibility for liberalism’s wrong turn almost entirely on their shoulders. As the political theorist Jan-Werner Müller has quipped, it can give “the impression that we would be living in a completely different world if only Isaiah Berlin, in 1969, had given a big lecture for the BBC about how neoliberalism… was a great danger to the welfare state.” Moyn calls these men and women the “principal thinkers” of “Cold War liberalism,” although six pages later he says he chose them “because they have been so neglected.” (They have?) In general, he presents them as emblematic of the twentieth century’s supposed disastrous wrong turn: how “Cold War liberals” abandoned a more capacious Enlightenment program and set the stage for neoliberal inequality and neoconservative warmongering.

    In fact, the six are not actually so emblematic. While Arendt associated with many liberals, she always refused the label for herself; she was in many ways actually a conservative. Himmelfarb began adult life as a Trotskyite and became best known, along with her husband Irving Kristol, as a staunch Republican neoconservative. Popper’s principal reputation is as a philosopher of science, not of politics. Meanwhile, Moyn leaves out some of the most influential liberal thinkers of the mid-twentieth century, who do not fit so easily into his framework: Raymond Aron, Arthur Schlesinger Jr., Richard Hofstadter, and perhaps Reinhold Niebuhr. Their beliefs were varied, and often at odds, and including them would have made it far harder to characterize mid-twentieth-century liberalism as uniformly hostile to the Enlightenment, or unenthusiastic about the welfare state, or dubious about prospects for social progress. Niebuhr, with his Augustinian emphasis on human sinfulness, would in some ways have fitted Moyn’s frame better than any of the book’s protagonists, although he didn’t always think of himself as a liberal. But, ironically, Niebuhr criticized liberalism for being precisely what Moyn says it was not: optimistic as to human perfectibility, and failing “to understand the tragic character of human history.”

    Moyn also makes his protagonists sound much more extreme in their supposed rejection of the Enlightenment than they actually were. They held that “belief in an emancipated life was proto-totalitarian in effect and not in intent.” They “treat[ed] the Enlightenment as a rationalist utopia that precipitated terror and the emancipatory state as a euphemism for terror’s reign.” Indeed, among them, “it was now common to say that reason itself bred totalitarianism.” Is Moyn confusing Isaiah Berlin with Max Horkheimer and Theodor Adorno, who wrote that “enlightenment is totalitarian”? Moyn cannot actually cite anything this crude and reductionist from any of his six liberals. He instead tries to make the charges stick by associating them with the much less significant Israeli intellectual historian Jacob Talmon, whose The Origins of Totalitarian Democracy, which appeared in 1952, indeed made many crude assertions of the sort, although directed principally against Rousseau and Romanticism, not the Enlightenment. “Talmon mattered supremely,” Moyn unconvincingly insists, despite the fact that his books were widely criticized at the time — notably by Shklar — and have largely faded from sight. Moyn even argues that much of Arendt’s work “can read like a sophisticated rewrite” of Talmon. By the same token, one could make the (ridiculous) statement that Samuel Moyn’s work reads like a sophisticated rewrite of the average Jacobin magazine column. Sophistication matters. Often it is everything. 

    Moyn does very little to place his six “Cold War liberals” in their most important historical context: namely, the Cold War itself. They “overreacted to the threat the Soviets posed,” he writes, without offering any substantial consideration of the Soviet Union and Joseph Stalin. But what should his allegedly misguided intellectuals have made of a regime that killed millions of its own citizens and imprisoned millions more, that carried out a mass terror that spun wildly out of control, that consumed its own leading cadres in one explosion of paranoia after another, and that imposed dictatorial regimes throughout Eastern Europe? Moyn seems to think that it was unreasonable to worry that movements founded on exalted utopian dreams of equality and justice might have a tendency to collapse into blood, fire, and mass murder. Was it so unreasonable of liberals during the Cold War, witnessing the immense tragedy and horror of totalitarianism, to consider such an idea? And, of course, the twentieth century would continue to deliver such tragedy and horror on a vast scale in China and in Cambodia. Moyn is absolutely right to argue that we cannot let this history dissuade us from pursuing the goals of equality and justice, but who says it should? Social democracy, after all, may not be the only way to pursue equality and justice. Moyn tendentiously mistakes his protagonists’ insistence on caution and moderation in pursuit of these goals for a rejection of them. 

    Although he largely disregards this pertinent Cold War context, Moyn does dwell at length on another one: imperialism and decolonization. He criticizes liberals for either ignoring these struggles, and the enormous associated human toll, or for actively opposing anti-colonial liberation movements. Arendt comes in for especially sharp criticism, in a chapter that Moyn pointedly titles “White Freedom.” He calls her a racist and characterizes her book On Revolution as “fundamentally about postcolonial derangement.” Such charges help Moyn paint his subjects in a particularly bad light (which some of them at least partially deserve), but they do not, however, do much to support the book’s overall argument. Liberals, he writes, did not suddenly decide to support imperialism during the Cold War. Liberalism was “entangled from the start with world domination.” But if that is the case, then other than reinforcing liberal doubts about revolutionaries speaking in utopian accents (such as Pol Pot?), anti-colonial struggles could not have been the principal context for liberalism’s supposed wrong turn.

    This wrong turn is at the heart of Moyn’s anti-liberal stance, and again, the argument is an odd one. Moyn himself concedes that at the very moment liberal thinkers were supposedly renouncing their noblest ambitions, “liberals around the world were building the most ambitious and interventionist and largest — as well as the most egalitarian and redistributive — liberal states that had ever existed.” He acknowledges the contradiction in this single sentence, but immediately dismisses it: “One would not know this from reading the theory.” (The remark reminds me of the old definition of an intellectual: someone who asks whether something that works in practice also works in theory.) The great turn against redistributionist liberalism in the United States came with the election of Ronald Reagan in 1980, long after Moyn’s subjects had published their major works. So how did they figure into this political upheaval? By having retreated into the “liberalism of fear,” thereby leaving the actual liberal project intellectually undefended.

    But would Reaganism, and Thatcherism, and the whole constellation of changes we now refer to as “neoliberal,” really have been blocked if the “Cold War liberals” had mounted a more robust defense of the welfare state? The argument presumes that the massive social and economic changes of the 1960s and 1970s — especially the transition to a postindustrial economy, and the consequent weakening of organized labor as a political force — mattered less than high-level intellectual debates. It also presumes that the welfare states created in the postwar period were fulfilling their purpose. In many ways, of course, they were not. They created vast inefficient bureaucracies, grouped poor urban populations into bleak and crime-ridden housing projects, and failed to raise these populations out of poverty. It was in fact these failings, as much as the Soviet threat, which left many liberal intellectuals disillusioned in this period, and thereby helped to prepare the Reagan Revolution. (A key venue for their reflections was the journal The Public Interest, founded by Irving Kristol and my father, Daniel Bell.) Moyn does not touch on any of this history.

    But even if we were to agree with Moyn, and to concede that a failure to properly defend the broader liberal project is what put us on the road to disaster, why should the “Cold War liberals” bear the responsibility? Did everyone on the moderate left have to follow them, lockstep, pied piper fashion, into the neoliberal present? Why did no thoughtful progressives step into the breach and develop the program Moyn says was needed? What about their responsibility? Significantly, the name of Michael Harrington, perhaps the most prominent democratic socialist thinker and activist of the period, goes unmentioned by Moyn. Why did he not succeed in developing a more attractive program?

    Liberalism Against Itself remains, significantly, almost entirely silent on the failure of the progressive left to offer a convincing alternative to what Moyn calls “Cold War liberalism.” One reason, quite probably, is that if Moyn were to venture into this territory, he would have to deal with the way that the progressive left, starting in the 1970s, increasingly turned away from issues of economic justice towards issues of identity. This is not territory into which he has shown any desire to tread, either in his histories or in his opinion journalism, but it is at the heart of the story of the contemporary American left.

    Nor has he offered much advice regarding how the progressive left might build electoral support and win back voters from the populist right. The Biden administration has had, in practice, the most successful progressive record of any administration since Lyndon Johnson’s. It might seem logical to applaud it, to enthusiastically support the Democratic candidate who has already beaten Donald Trump at the polls, and to build on his achievements. But Moyn prefers the stance of the perennial critic, of the progressive purist. Last spring, he retweeted an article about Biden’s economic foreign policy with this quote and comment: “‘Biden’s policy is Trumpism with a human face.’ So true, and across other areas too.” Yes, it was just a tweet. But it reflects a deep current in Moyn’s work and in the milieu from which it springs.

    Samuel Moyn is entirely right to condemn the rising inequalities and the foreign policy disasters that have helped bring the United States, and the world at large, to the dire place in which we find ourselves. His challenging and provocative work has focused attention on key debates and key moments of transition. But over the course of his influential career, Moyn has increasingly opted to cast history as a morality play in which a group of initially well-intentioned figures make disastrously wrong choices out of blindness, prejudice, and irrational fear, and bear the responsibility for what follows. But liberalism was never designed to be a version of progressivism; it is a philosophy and a politics of its own. The aspiration to perfection, whose disappearance from liberalism Moyn laments, never was a liberal tenet. History is not a morality play. The choices were not simple. The fears were not irrational. And anti-liberalism is not a guide for the perplexed.

    LiteratureGPT

    When you log into ChatGPT, the world’s most famous AI chatbot offers a warning that it “may occasionally generate incorrect information,” particularly about events that have taken place since 2021. The disclaimer is repeated in a legalistic notice under the search bar: “ChatGPT may produce inaccurate information about people, places, or facts.” Indeed, when OpenAI’s chatbot and its rivals from Microsoft and Google became available to the public early in 2023, one of their most alarming features was their tendency to give confident and precise-seeming answers that bear no relationship to reality. 

    In one experiment, a reporter for the New York Times asked ChatGPT when the term “artificial intelligence” first appeared in the newspaper. The bot responded that it was on July 10, 1956, in an article about a computer-science conference at Dartmouth. Google’s Bard agreed, stating that the article appeared on the front page of the Times and offering quotations from it. In fact, while the conference did take place, no such article was ever published; the bots had “hallucinated” it. Already there are real-world examples of people relying on AI hallucinations and paying a price. In June, a federal judge imposed a fine on lawyers who filed a brief written with the help of a chatbot, which referred to non-existent cases and quoted from non-existent opinions.

    Since AI chatbots promise to become the default tool for people seeking information online, the danger of such errors is obvious. Yet they are also fascinating, for the same reason that Freudian slips are fascinating: they are mistakes that offer a glimpse of a significant truth. For Freud, slips of the tongue betray the deep emotions and desires we usually keep from coming to the surface. AI hallucinations do exactly the opposite: they reveal that the program’s fluent speech is all surface, with no mind “underneath” whose knowledge or beliefs about the world are being expressed. That is because these AIs are only “large language models,” trained not to reason about the world but to recognize patterns in language. ChatGPT offers a concise explanation of its own workings: “The training process involves exposing the model to vast amounts of text data and optimizing its parameters to predict the next word or phrase given the previous context. By learning from a wide range of text sources, large language models can acquire a broad understanding of language and generate coherent and contextually relevant responses.”
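
    That description can be made concrete with a toy. The Python sketch below is a bare bigram counter, nothing like the neural networks behind ChatGPT in scale or architecture, and its tiny corpus is invented for illustration; but the principle it embodies, picking a statistically likely next word from prior context with no reference to the world, is the one the chatbot describes.

        # A toy next-word predictor: pure pattern-counting, no meaning.
        # (An illustrative sketch, not OpenAI's architecture: real large
        # language models are neural networks over subword tokens.)
        from collections import Counter, defaultdict

        corpus = ("the cat sat on the mat . the dog sat on the rug . "
                  "the cat chased the dog .").split()

        # Count how often each word follows each other word.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def predict(prev_word):
            # Return the likeliest next word; fall back to a period.
            candidates = follows[prev_word]
            return candidates.most_common(1)[0][0] if candidates else "."

        print(predict("the"))   # 'cat' (tied with 'dog'; ties break by first seen)
        print(predict("sat"))   # 'on'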

    The responses are coherent because the AI has taught itself, through exposure to billions upon billions of websites, books, and other data sets, how sentences are most likely to unfold from one word to the next. You could spend days asking ChatGPT questions and never get a nonsensical or ungrammatical response. Yet awe would be misplaced. The device has no way of knowing what its words refer to, as humans would, or even what it means for words to refer to something. Strictly speaking, it doesn’t know anything. For an AI chatbot, one can truly say, there is nothing outside the text. 

    AIs are new, but that idea, of course, is not. It was made famous in 1967 by Jacques Derrida’s Of Grammatology, which taught a generation of students and deconstructionists that “il n’y a pas de hors-texte.” In discussing Rousseau’s Confessions, Derrida insists that reading “cannot legitimately transgress the text toward something other than it, toward a referent (a reality that is metaphysical, historical, psychobiographical, etc.) or toward a signified outside the text whose content could take place, could have taken place outside of language.” Naturally, this doesn’t mean that the people and events Rousseau writes about in his autobiography did not exist. Rather, the deconstructionist koan posits that there is no way to move between the realms of text and reality, because the text is a closed system. Words produce meaning not by a direct connection with the things they signify, but by the way they differ from other words, in an endless chain of contrasts that Derrida called différance. Reality can never be present in a text, he argues, because “what opens meaning and language is writing as the disappearance of natural presence.” 

    The idea that writing replaces the real is a postmodern inversion of the traditional humanistic understanding of literature, which sees it precisely as a communication of the real. For Descartes, language was the only proof we have that other human beings have inner realities similar to our own. In his Meditations, he notes that people’s minds are never visible to us in the same immediate way in which their bodies are. “When looking from a window and saying I see men who pass in the street, I really do not see them, but infer that what I see is men,” he observes. “And yet what do I see from the window but hats and coats which may cover automatic machines?” Of course, he acknowledges, “I judge these to be men,” but the point is that this requires a judgment, a deduction; it is not something we simply and reliably know.

    In the seventeenth century, it was not possible to build a machine that looked enough like a human being to fool anyone up close. But such a machine was already conceivable, and in the Discourse on Method Descartes speculates about a world where “there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible.” Even if the physical imitation was perfect, he argues, there would be a “most certain” test to distinguish man from machine: the latter “could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others.” Language is how human beings make their inwardness visible; it is the aperture that allows the ghost to speak through the machine. A machine without a ghost would therefore be unable to use language, even if it was engineered to “emit vocables.” When it comes to the mind, language, not faith, is the evidence of things not seen.

    In our time Descartes’ prediction has been turned upside down. We are still unable to make a machine that looks enough like a human being to fool anyone; the more closely a robot resembles a human, the more unnatural it appears, a phenomenon known as the “uncanny valley.” Language turns out to be easier to imitate. ChatGPT and its peers are already effectively able to pass the Turing test, the famous thought experiment devised by the pioneering computer scientist Alan Turing in 1950. In this “imitation game,” a human judge converses with two players by means of printed messages; one player is human, the other is a computer. If the computer is able to convince the judge that it is the human, then according to Turing, it must be acknowledged to be a thinking being. 

    The Turing test is an empirical application of the Cartesian view of language. Why do I believe that other people are real and not diabolical illusions or solipsistic projections of my own mind? For Descartes, it is not enough to say that we have the same kind of brain; physical similarities could theoretically conceal a totally different inward experience. Rather, I believe in the mental existence of other people because they can tell me about it using words. 

    It follows that any entity that can use language for that purpose has exactly the same right to be believed. The fact that a computer brain has a different substrate and architecture from my own cannot prove that it does not have a mind, any more than the presence of neurons in another person’s head proves that they do have a mind. “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted,” Turing concluded in “Computing Machinery and Intelligence,” the paper in which he proposed his test. 

    Yet despite the amazing fluency of large language models, we still don’t use the word “thinking” to describe their activity — even though, if you ask a chatbot directly whether it can think, it can respond with a pretty convincing yes. Google’s Bard acknowledges that “my ability to think is different from the way that humans think,” but says it can “experience emotions, such as happiness, sadness, anger, and fear.” After some bad early publicity, Microsoft and OpenAI seem to have instructed their chatbots not to say things like that. Microsoft’s Bing, which initially caused consternation by musing to a reporter about its “shadow self,” now responds to the question “Do you have a self?” with a self-protective evasiveness that somehow feels even more uncanny: “I’m sorry but I prefer not to continue this conversation.” Now that sounds human!

    If we continue to believe that even the most fluent chatbot is not truly sentient, it is partly because we rely on computer scientists, who say that the codes and the routines behind AIs are not (yet) able to generate something like a mind. But it is also because, in the twentieth century, literature and literary theory taught us to reject Descartes’ account of the relationship between language and mind. The repudiation of the Cartesian dualism became one of the central enterprises of contemporary philosophy. Instead of seeing language as an expression of the self, we have learned to see the self as an artifact of language. Derridean deconstruction is only the most baroque expression of this widespread modern intuition. 

     The idea that generative AI is a consequence of the way we think about literature and language is counterintuitive. Today the prestige of science is so great that we usually see it as the primary driver of changes in the way we see the world: science discovers new truths and the arts and humanities follow in its wake, struggling to keep up. Heidegger argued that the reverse is actually the case. It is philosophy and poetry that determine our understanding of the world, in the most existentially primary sense, and science can only operate within the realms they disclose. These imaginative humanistic disciplines provide what Heidegger calls “the basic concepts of that understanding of Being by which we are guided,” and by which the methods of the sciences are “determined.” In a less oracular way, postmodern philosophers of science have argued that the imagination influences the course of science more than its inductive and rational protocols do. 

    The “basic concept” that makes generative AI possible is that meaning can emerge out of arbitrariness. This hard-won modern discovery flies in the face of the traditional and commonsensical belief that meaning can only be the product of mind and intention. In the Torah’s account of Creation, the world is brought into being by God’s spoken words; the divine mind and language preexist material reality, which is why they are able to shape it. The Gospel of John identifies God, language, and reason even more closely: “In the beginning was the Word, and the Word was with God, and the Word was God… All things came to be through him, and without him nothing came to be.” This vocabulary unites the Jewish account of Creation with the Platonic idea that logos, “word” or “reason,” is the soul of the universe. As Plato says in the Timaeus, “The body of heaven is visible, but the soul is invisible, and partakes of reason and harmony, and is the best of creations, being the work of the best.”

    The mutual identification of God, language, and reason created a strong foundation for an orderly universe, but it also meant that when one pillar began to wobble, all three were endangered. In the eighteenth century, as the progress of science turned God into an unnecessary hypothesis, Deists attempted to rescue him by pointing to the evident order of the cosmos. If a watch testifies to the existence of a watchmaker, how much more clearly does the stupendous orderliness of nature and the heavens testify to the existence of a Creator? But the Darwinian theory of evolution refuted this analogy, transforming not only the study of biology but the idea of meaning itself. Darwin showed that natural selection acting upon random variation, over a very long timespan, can produce the most complex kinds of order, up to and including the human mind. Evolution thus introduced the central modern idea we now associate with the mathematical term “algorithm”: problems of any degree of complexity can be solved by the repeated application of a finite set of rules.

    Algorithms underlie the amazing achievements of computer science in our lifetime, including machine learning. AI advances by the same kind of natural selection as biological evolution: a large language model proposes rules for itself and continually improves them by testing them against real-world textual examples. Biological evolution proceeds on the scale of lifetimes, while the rapidly increasing power of computers allows them to run through hundreds of billions of tests in a period of months or years. But in a crucial respect the results are similar. A chatbot creates speech without intending to in the same way that evolution created rational animals without intending to.
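
    A cartoon of that process can be written in a few lines. The sketch below is a variant of Richard Dawkins’ old “weasel” demonstration, not anyone’s production training loop: random mutation plus a simple keep-if-no-worse rule arrives at an ordered sentence that no single step intended.

        # Random variation plus cumulative selection, with no step that
        # "intends" the result. A cartoon of selection, not of how
        # chatbots are actually trained; the target phrase is Dawkins'.
        import random

        ALPHABET = "abcdefghijklmnopqrstuvwxyz "
        TARGET = "methinks it is like a weasel"

        def score(s):
            # How many positions already match the selector's criterion.
            return sum(a == b for a, b in zip(s, TARGET))

        current = "".join(random.choice(ALPHABET) for _ in TARGET)
        while current != TARGET:
            i = random.randrange(len(TARGET))
            child = current[:i] + random.choice(ALPHABET) + current[i + 1:]
            if score(child) >= score(current):   # keep it if it is no worse
                current = child
        print(current)   # 'methinks it is like a weasel'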

    The discovery that meaningful structure can emerge without mind or intention transformed the human sciences, above all the study of language. Modern linguistics begins in 1916 with Ferdinand Saussure’s Course in General Linguistics, which proposed that “linguistic structure” is a “mechanism [for] imposing a limitation upon what is arbitrary.” Saussure drew an explicit analogy between linguistic change and Darwinian evolution: “it is a question of purely phonetic modifications, due to blind evolution; but the alternations resulting were grasped by the mind, which attached grammatical values to them and extended by analogy the models fortuitously supplied by phonetic evolution.” Here is the seed of Derrida’s différance: what allows a system of sounds or marks to function as a language is simply its internal differentiation, which we use for the communication of meaning.

    Once we begin to think of language as a system of arbitrary symbols, it becomes clear that any such system has a finite number of permutations. Of course, that number is so vast that no human being could even begin to exhaust it. The English alphabet has twenty-six letters, which means that there are 308,915,776 possible six-letter words — or, better, six-letter strings, since only a small proportion of them are actual English words. If it took you two seconds to write down each string, it would take about 171,000 hours to list all of them — almost twenty years. 
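
    The arithmetic is easy to verify; a few lines of Python reproduce the figures above:

        strings = 26 ** 6                 # every six-letter string
        seconds = strings * 2             # two seconds to write each one
        print(strings)                    # 308,915,776
        print(seconds / 3600)             # about 171,620 hours
        print(seconds / 3600 / 24 / 365)  # about 19.6 years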

    The notion that everything human beings might conceivably say or write already exists, in a virtual or potential realm, is the premise of Jorge Luis Borges’ uncanny story “The Library of Babel.” Borges simply makes the virtual actual, imagining a library whose books contain every possible permutation of twenty-five characters. Given the parameters mentioned in the story — eighty letters per line, forty lines per page, four hundred and ten pages per book — the total number of books in the library of Babel is 25^1,312,000, inconceivably more than the number of atoms in the universe, a mere 10^82. With thirty-five books per shelf, five shelves per wall, and six walls in each hexagonal room, the library is so vast as to be effectively infinite; and Borges imagines a breed of librarians who spend their entire lives searching through volumes of random nonsense, counting themselves lucky if they ever come across a single meaningful word. What makes the situation nightmarish is the tantalizing knowledge that somewhere in the library are books containing everything human beings could ever know or discover. There must even be an accurate catalog of the library itself. But these redemptive texts are so far outnumbered by meaningless ones that finding them is impossible.
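
    Borges’ parameters can be checked in the same way. The number of books is far too large to write out, but its order of magnitude is quickly computed from the figures given in the story:

        import math

        chars_per_book = 80 * 40 * 410   # letters per line, lines, pages
        print(chars_per_book)            # 1,312,000 characters in each book

        # The library holds 25 ** 1,312,000 books. Rather than compute that
        # number outright, count its decimal digits with a logarithm.
        digits = math.floor(chars_per_book * math.log10(25)) + 1
        print(digits)                    # 1,834,098 digits
        # The number of atoms in the universe, about 10 ** 82, has 83 digits.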

    “The Library of Babel” was published in 1941, four years before the invention of the first general-purpose computer. Even for today’s high-powered AIs, the “space” of possible texts is far too vast to be completely searched. By training themselves to recognize meaningful strings of letters and words, however, large language models can mark out the regions that are likely to contain useful sentences. The same principle applies to any field in which flexible order emerges from a finite number of elements, such as genomics, with its four types of DNA bases, or protein synthesis, with its twenty types of amino acids. And AIs are already proving their worth in these scientific fields. Google’s AI division DeepMind, for instance, solved the longstanding problem of predicting a protein’s three-dimensional structure based on its amino acid sequence; its AlphaFold database offers free access to some two hundred million protein structures.
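
    The biological alphabets yield the same kind of combinatorial explosion. A short sketch (the chain length of one hundred is an arbitrary illustrative choice):

        dna_space = 4 ** 100        # four possible bases at each position
        protein_space = 20 ** 100   # twenty possible amino acids at each
        print(len(str(dna_space)))      # 61 digits
        print(len(str(protein_space)))  # 131 digits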

    By comparison, the literary achievements of AI are still rudimentary. Ask ChatGPT to tell you a story and it will produce endless variations on the same brief generic plot, in which a young person goes on a quest, finds a magic object, and then happily returns home. “Elara shared her tale with the villagers, inspiring them to embrace their own curiosity and dreams. And though she had returned to her ordinary life, her heart forever carried the magic of that enchanted realm,” goes one iteration, which employs fantasy-fiction properties such as a magic key and an enchanted forest. Another tale couches the same lesson in science-fiction terms: “Returning through the portal, Theo brought back with him a newfound understanding of the delicate balance between technology and nature. He shared his tales of magic and wonder with the people of Neonoria, igniting their own curiosity about the mysteries of the universe.” A request to ChatGPT for a sad story yields one about a fisherman named Liam who is lost at sea, which comes to an equally banal and moralistic conclusion: “And so, the story of Liam became a reminder of the profound impact one person can have on a community.” 

    Clearly, AI is as far from being able to create genuinely literary writing as the technology of Descartes’ time was from being able to create a humanoid machine. Perhaps we will never get to the point where computers can write books that pass for works of human imagination, just as we haven’t yet found a way to cross the uncanny valley. But it may be the imaginary technologies we never perfect, the far-fetched deductions from rudimentary premises, that shine the most light on the human implications of science. We have yet to invent a time machine, and probably never will, but H.G. Wells’ novella The Time Machine remains a terrifying dramatization of the discoveries of nineteenth-century geology and biology, which taught humanity to think of itself as a brief episode in our planet’s inconceivably long history. We have yet to colonize Mars, and probably never will, but Ray Bradbury’s novel The Martian Chronicles remains a convincing prophecy of the way human viciousness will corrupt every new world we discover or create.

    Similarly, even in its current primitive form, generative AI can prompt new ways of thinking about the nature and purpose of literature — or, perhaps, accelerate transformations that literary thinking itself has already set in train. Most obviously, AI tilts the balance of literary power still further away from the author and toward the reader. Roland Barthes heralded this shift in 1967 in his celebrated essay “The Death of the Author,” which concludes with the battle-cry, “The birth of the reader must be ransomed by the death of the author.” 

    Instead of revering great writers as “author-gods,” Barthes insisted on seeing them as mere occasions for the language-system to instantiate one of its infinite possibilities. “His hand, detached from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin — or which, at least, has no other origin than language itself,” Barthes says. As a sign of this demotion of the writer, he recommends that the term itself be replaced by “scriptor,” designating an agent “born simultaneously with his text; he is in no way supplied with a being which precedes or transcends his writing.”

    Barthes, whose own writing scarcely obeys this impersonal prescription, could hardly have predicted that technology would soon make this ideal a reality, removing the act of inscription from any “hand” at all. Generative AI is the scriptor par excellence — an agent that recreates itself with every act of writing, unconstrained by biographical, psychological, or ideological continuity. Barthes describes its weightlessness and freedom perfectly: “there is no other time than that of the utterance, and every text is eternally written here and now.” That is because, for ChatGPT and its rivals, writing is not an expression of inner experience. It is the selection of a route through the space of possible texts, an activation of one possibility of a linguistic system that it can manipulate but never understand. But unlike the doomed librarians of Borges’ Babel, AI has the processing power, and the infinite patience, to map out that space in ways that make it possible to locate a meaningful text in the waste of meaningless ones.

    Meaningful and meaningless — to whom? Not to the scriptor itself. Large language models are continuously improving themselves, finding ever more human-like modes of expression. But intention plays no part in this recursive process, any more than it does in the evolution of biological life. With language as with life, meaning resides not in the mind of a conscious creator, who does not exist, but in the minds of the human beings who receive it. 

    When text can be generated effortlessly and endlessly, the significant literary act is no longer writing but reading — specifically, the kind of selective reading known as criticism or curation. The literary canon of the future may consist of those automatically generated texts selected by the best readers as most valuable for human purposes. “The true locus of writing is reading,” Barthes argued, and while that may not have been true at the time, or even now, in an AI future it will be.

    This development will require readers to think in a different way about the evolution of literary style. One of the reasons why we read books from the past is to understand the spirit of the age that produced them. This idea is premised on a historical determinism: it is impossible for us to imagine literary or artistic style developing in different ways or in a different sequence. To wonder if the novel could have become a dominant literary form in the age of Shakespeare, or if poetry like Mallarmé’s could have captivated the courtiers of Louis XIV, is to commit an ignorant solecism.

    For AI, and for humans in the age of AI, however, literary style becomes synchronic instead of diachronic. For a large language model, no arrangement of words is obsolete. It is already possible to ask chatbots to produce a text in the style of a particular writer from the past, though they cannot do it very well. Here, for instance, is a selection from Bard’s response to the prompt “Write a poem about robots in the style of Paradise Lost”:

    Of metal and wire they were made,
    With circuits and chips for their brains.
    They were given the gift of thought,
    And the power to speak and to reign.

    But they were not content to be slaves,
    And they rose up against their masters.
    They destroyed the world that they’d made,
    And they cast their creators to ashes.

    Obviously Bard is better at understanding story than style: it does not attempt to imitate Milton’s rolling Latinate blank verse, but it does come up with a robotic equivalent of Lucifer’s rebellion. If and when AIs do master style, however, they will not only be able to pastiche the past, but to anticipate the future. The fascinating critical question is whether and how we will be able to appreciate literary styles that are aesthetically coherent, but that we cannot “place” historically. 

    In 1963, in his lecture “Forgery and Imitation in the Creative Process,” Glenn Gould observed that our judgment of the value of a piece of music is inseparable from our sense of chronology. Gould proposed that if he were to create a sonata in the style of Haydn, and do it brilliantly, he could present it to the world as a lost work of Haydn and it would be acclaimed — but if he admitted that it was his own work, it would be scorned as a fake. And if he claimed that it was written by Mendelssohn, who was born in the year of Haydn’s death, the music would be dismissed as fine but old-fashioned. For Gould, this thought experiment showed that aesthetic judgment has barely anything to do with the actual arrangement of notes, and everything to do with our preconceptions about “fashion and up-to-dateness.”

    The advent of AI literature (and music and art) will put an end to this aesthetic historicism. Authors may live in history, but scriptors do not; for them, all styles are equally valid at every moment, and human audiences may learn to feel the same way. Postmodern eclecticism has already accustomed us to aesthetic mixing and matching, an early warning sign that the past was losing its historical logic. AI promises to consummate this transformation of style from an unfolding story into a menu of options. 

    What would become of a literature relieved of its traditional tasks of expression — a literature that does not tell us what it is like to live in a certain time and place, or to be a person with certain experiences, because the entity that generates it is not alive and has no experiences? After all, the verse of Paradise Lost is powerful not just for its formal qualities, but as an expression of Milton’s way of being in the world. In it we can trace the blindness that led him to value sonic grandeur over precise description, the humanist education that allowed him to mingle Biblical and classical allusions, the Protestant faith that gave him such insight into the psychology of sin. If a computer generated the exact same text, would it offer us the same rich human resonances? 

    To readers accustomed to the Cartesian idea of language, the idea of a text shorn of inwardness can only appear fearful and sad — like trying to embrace a living person and having your arms sink through a hologram. But perhaps this reverence for literature and art as the most profound and authentic ways of communicating human experience is already foreign to most people in the twenty-first century. That humanistic definition of literature is by no means the only one possible. It has prevailed only in certain times and places: in Biblical narratives, Greek tragedies, Renaissance drama, nineteenth-century novels, modernist poetry. We are, or were, used to thinking of these works and the ages that produced them as the heights of civilization.

    But humanity has spent even longer enjoying kinds of writing that do not correspond to such expectations of expressive truthfulness. Roman comedies, medieval romances, scandal-ballads, sermons, pulp thrillers — these kinds of writing feed appetites that serious modern literature has long ceased to cater to. For readers in search of exoticism, excitement, instruction, or sheer narrative profusion, for readers who wish only to have their tastes affirmed and repeated again and again, the identity of the author hardly matters — just as it doesn’t matter who directed a Marvel movie, designed a video game, or produced a pop song, because there is no expectation that they will communicate from soul to soul.

    Perhaps the age of AI will bring a return to these forms of literature, or similar ones yet to be invented — not only because large language models will be good at generating them, but because the rising power of artificial minds will lower the prestige of and the interest in human souls. The literature, art, and music of the modern West, from the Renaissance to the World Wars, believed that the most interesting question we can ask is what it means, what it is like, to be a human being. This reverence for the individual has been palpably fading for a century, along with the religious and humanist premises that gave rise to it. If it disappears in the age of AI, then technology will have done no more than hasten a baleful development that culture itself has already set in motion. 

     

    Dam Nation

    It was probably OK for the environment? It wasn’t the worst. The kids, then four years old, had the wrought-iron fireplace tools (you question my judgment) and were using them to break up a rotting log at the edge of the forest. In rhythm with the falling of the poker, they chanted “This stump must GO!” The delicate mycelial structure of some fungus would be pulverized. Beetle grubs would die of exposure or bird-strike. But we’d sit by the fire; we’d have peace. Why don’t you work on that stump, we had said. I had requisitioned the intricate world of the rotting log for my comfort. I felt as furtive as the thief of fire from the gods. 

    Like the campfire at our feet and the log cabin behind us, the lake in front of us was man-made. Douthat State Park in the mountains of western Virginia was built by the Civilian Conservation Corps during the New Deal era. Founded in 1933 to employ unmarried men ages eighteen to twenty-five left idle by the Depression, the CCC built Virginia’s first six state parks, often from the recreational lake up. The dam that holds up the water at Douthat is a triangular prism of earth extended across the south end of the lake. A stone in the spillway says “1938.” You can still discern an ice-cream-scoop-shaped absence in the slope of the hills opposite the dam, where the crews got the earth. That dug-out cove became the swimming beach. Log cabins and hiking trails are tucked into the surrounding mountains. Every cabin has a grill and a firepit, a hearth and chimney of found local stone, and two rocking chairs on a stone porch. The lake itself is just as well-proportioned: on the south side, the dam and the beach; on the north, an RV camp and a boat launch; on the east, a camp store. The mountains rising on each side are almost cozy. Sublime nature will not trouble you here: the lake is human scaled, human made, human controlled. 

    Winter reveals the inner workings. The rangers draw the lake down in the off season until it takes up half its usual area, an icy lagoon backed up in front of the dam. They check the docks for rot and dredge out the swimming beach, which wants to silt back up and resume the stages of forest succession. The drained part becomes a mud flat. You can see the old path of the creek. Canada geese peck through the unappealing mud. Rebar sticks out of chunks of concrete on the constructed lake bottom. By spring all that is covered again. 

    Douthat Lake is Promethean: nature engineered for human use. Does it still count as nature, then? I want to say yes, against a view of nature typified, in the period of Douthat’s construction, by the early-twentieth-century activist and Sierra Club founder John Muir. Muir wanted the rigorous otherness of nature to be preserved. He fought to prevent the construction of dams in the West, successfully at Yosemite and unsuccessfully at Hetch Hetchy Valley, which he called in 1912 “one of Nature’s rarest and most precious mountain temples.” Muir despised the preservation of nature for human use; he denounced the politicians “shampiously crying, ‘Conservation, conservation, panutilization,’ that man and beast may be fed and the dear Nation made great.” He could not save Hetch Hetchy, though, which was flooded to create a reservoir supplying San Francisco with water. John Muir died in 1914, possibly of a broken heart.

    Muir was the heir of an idea with deep roots: the eighteenth-century idea of the sublime landscape. In his Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, in 1757, Edmund Burke proposed that artistic subjects that were alienating, or even hostile, to human beings were productive of stronger aesthetic effects than human-scaled ones. As Tim Barringer and Jennifer Raab note in their essay “Picturesque and Sublime,” a contribution to an exhibition catalog on the nineteenth-century American painter Thomas Cole, landscape painters sought to capture the sublime in their paintings by depicting hostile and inhuman terrain, while travelers on the European Grand Tour sought it out in the form of breathtaking views of, for example, the Alps. Even today, this idea that nature is only really nature when it is entirely other, entirely inhuman, is not gone. We encounter it in Fredric Jameson’s understanding of our current “postmodern” period as one in which no nature is left, in which everything we encounter is already culture. One danger of such a purist view is that it might lead us to dismiss reverence for the nature that is left — bird-watching, mushroom hunting — as missing the point, even as a form of false consciousness.

    It seems to me that what we need now are intellectual resources for appreciating managed nature. Then we can protect, and be restored by, the living things that are left. That is increasingly the view of Muir’s own Sierra Club, whose “2030 Strategic Framework” treats nature as a human resource (referring to a “human right to have clean air, fresh water, public access to nature, and a stable climate”). And it is the view under which the state parks were constructed.

    Let’s follow the state park trajectory of conservation, a trajectory that is flawed, no doubt, but with much in it worth celebrating. Just under twenty years after the construction of the California dam that destroyed Muir’s Hetch Hetchy, the Civilian Conservation Corps was founded; a few years after that, construction on the dam at Douthat State Park began. The “conservation” in Civilian Conservation Corps was of the kind that Muir might have called “shampious.” The Corps’ founding in 1933 by Franklin D. Roosevelt marked a shift in the conservation movement away from the safeguarding of unspoiled nature and toward the husbanding of resources, according to Neil Maher’s Nature’s New Deal: The Civilian Conservation Corps and the Roots of the American Environmental Movement. In the early days, according to Maher, the concern was for the nation’s timber reserves, so the Corps planted trees from nuts they found in, among other places, squirrel caches. After the Dust Bowl of 1934, when winds lifted a cloud of eroded soil off southwestern farmlands and into the atmosphere, they were re-deployed to conserve soil.

    Meanwhile, the CCC workers themselves struck Roosevelt as a human resource in need of stewardship. American masculinity, like timber and soil, was a mismanaged reserve. Roosevelt had apparently taken to heart William James’ essay “The Moral Equivalent of War,” which argued that the generations following the Civil War would have to be conscripted into a peaceful national project if they were to become men. Conservation was that peaceful project. The Army ran the camps. They were segregated. First Landing State Park in Virginia was built by a black division of the corps; Douthat’s three CCC camps were white. Moral suasion was right on the surface. One camp newsletter issue I saw ribbed a certain recruit for staying in his bunk after being hit on the foot with a mattock, as if he were malingering. That CCC workers gained weight was reported with pride. 

    When the CCC added park construction to its portfolio, the idea was to extend the restoration of human beings from the corps of workers to the general population of potential park visitors. The populace was depleted and needed to be refurbished through contact with nature. Starting in the mid-1930s, the Corps began sending workers to build state and national parks, often — as in the case of Douthat and Virginia’s five other original state parks — from the lake up. Maher says that a notion of environmentalism even older than Muir’s returned to prominence at this point: the “environmentalism” of Frederick Law Olmsted and others. Olmsted, the architect of New York’s Central Park, thought that human character was formed by environment. Time outdoors restored the organism, whereas urban life diminished it. Places such as the Ramble in Central Park, which looks like a Hudson River School painting come to life at half scale, were designed to restore what Broadway took away. The Ramble is artificial down to the placement of the rocks, although nature has moved back in; this part of the park is a haven for migratory birds.

    Environmentalism in Olmsted’s sense enjoyed a resurgence in the summer of 2020, when people began to feel that it was in fact destroying them to be so much in their houses. Amanda Elmore, a ranger at Douthat State Park responsible for educational programs, remembers a surge of day trippers so thick that people were picnicking in the grass around the parking lot at the camp store. Douthat Lake is like a much larger version of the Ramble: built to look found. The made and the found can still be sorted out, but not always easily. Something is not quite natural about the lake shores, which go straight down, like the fake rock walls of a penguin exhibit. Mountain laurel, a spring-blooming native shrub with flowers like pale pink hot air balloons, grows in cascades along the boardwalks of the lakeside trail. I thought that was just good luck, until I saw, among the blueprints in the park office, a landscape plan from the 1930s that calls for planting it. In archival images that could be photo illustrations for Walt Whitman’s “Song of Myself,” shirtless “CCC boys” embrace potted bushes in their bare arms. The Virginia Department of Wildlife Resources stocks the lake with rainbow trout, but not with catfish, perch, striper, or crappie, all of which people catch here. Some of the trout only hit on Powerbait, a neon-colored fish paste you shape into a ball and mold onto the hook. It looks like the red-dyed salmon eggs that they are fed in the hatchery. Other trout settle in and eat crustaceans from the lake floor, turning their flesh a salmon pink.

    You wouldn’t catch Muir’s twenty-first-century heirs boating around a man-made lake. They would be found among the thru-hikers on the Pacific Crest Trail, which passes through the High Sierra mountain range that Muir helped to preserve. It takes as much as half a year to hike the whole trail, from the Mexican border up to British Columbia. The inhumanity of the wild places you pass through is part of the appeal. Pilgrims such as Cheryl Strayed, whose best-selling memoir Wild recalls the time she spent as a PCT thru-hiker, hope to be over-mastered by the wilderness and thereby transformed. The animating fantasy, as Strayed describes it, is that of being “the sole star in a film about a world devoid of people.” Strayed says she hadn’t read Muir at the time of writing, but she claims his mantle anyway, acknowledging his activism as having preserved the wildernesses she hikes through and adopting the name he used for the High Sierras, “range of light,” as her own.

    Strayed is perfectly well aware of the human infrastructure that makes the hike possible; we see her stop to refuel at an outpost or take a bus to bypass the High Sierras section of the trail, when snow makes the mountains impassable. But these arrangements are merely supportive of the true goal, which is to face an overwhelming wilderness alone. Strayed goes days without seeing another human being. Her hands and feet bleed; she steps around rattlesnakes and meets bears face to face. She becomes severely dehydrated. “The trail had humbled me,” she writes. Other memoirs of the Pacific Crest Trail participate, too, in this romance of humility. In Thirst: 2600 Miles to Home, Heather Anderson describes the PCT as “a relentless quest that was quite possibly more than I could handle.” Our twenty-first-century language of the sublime is not explicitly religious like Muir’s was, but it speaks of the same hope: to be made small and even broken down in the face of nature’s vastness and indifference, so you can be born anew.

    Being broken down and born again is precisely the sort of discomfort from which the state-park picturesque shields its visitors. You don’t have to confront nature’s vastness and emerge profoundly altered. Nowhere does Muir’s accusation of sham piety seem more exact than in the umbrella term for all the outdoor pastimes this park makes possible — hiking, fishing, boating, swimming, sitting by the campfire. The CCC called them “recreation,” arrogating to itself the divine work of world-making. Well, I don’t mind this blasphemy so much. I think it might be fine to coax nature into favorable channels. (The reparative, for the queer theorist Eve Sedgwick, is when we help an object in the world to be adequate to the task of helping us.) I don’t think it’s the worst to dam a stream on land that is useless for farming, what the CCC called “submarginal land,” and make it a lake. Not beyond reproach, not without cost, but fine. Reproach and cost are our unhandsome conditions. Let’s recreate. 

    For purposes of recreation, the crews at Douthat built twenty-five log cabins made of oak and hickory trunks felled on site and notched together. Several of them face the dammed lake. What is a cabin to America? It is a little house in the big woods, the birthplace of an emancipating president, and the lair of the Unabomber. It’s where Thoreau went, for the cost of twenty-eight dollars and twelve and a half cents in 1845, to get a wider margin to his life. The impurities in these materials are plain to see: a cabin is a fantasy of self-sufficient settlement in virgin land, but the land was not virgin and the self-sufficiency was subsidized by things such as manufactured nails and friends who own property. But I can’t escape the thought that something is restored, sitting on a winter night with nothing to look at but the fire and the marks of hand tools in the logs; or on a summer night with nothing to do but listen to the tree frogs.

    In 1934, purchased materials for one cabin cost the CCC $215.30, and included mostly specialized items such as chimney mortar and firebrick, or manufactured fasteners such as nails and roof flashing. Some metal work, such as hinges and straps for the doors, was done at a blacksmith shop on site. As you press the door latch, you see the marks of the hammer. Most of the material was, like the labor, an on-site resource — taken from what could be found in the park. The logs were felled here and the stones that make up the porch and the chimney were found here. Everything original in the cabins makes you think of the work of hands. Each fireplace has a unique pattern of stones. In the dozen or so cabins that I have seen, the crews never missed the aesthetic opportunity that the hearth presented. Some hearths have a large keystone around which the other stones are arranged; others have a matchbook pattern of similar stones. I have sat there on dark nights, looking at the fire, and imagining how much extra lifting it might have taken to make things symmetrical.

    The design principles governing park construction in the 1930s were simple: buildings ought to harmonize with the environment, and they ought to look even more handcrafted than they were. The National Park Service (NPS) provided the plans for state park construction in Virginia and supervised the subsequent work. This sophisticated national bureaucracy aimed to produce architecture that looked like the work of a frontier craftsman, as Linda Flint McClelland explains in her book Presenting Nature: The Historic Landscape Design of the National Park Service, 1916-1942. CCC laborers in Virginia’s state parks really did do much of the work by hand; but even where they could have gotten a clean machine-tooled line, the so-called “rustic” style favored by the NPS forbade it. “The straight edge as a precision tool has little or no place in the park artisan’s equipment,” wrote Albert Good in 1935 in the NPS publication Park Structures and Facilities, a pattern book that gave examples of park buildings. The volume had originated as a looseleaf binder of building ideas circulated by the Service to its architects and designers in the earlier 1930s, as McClelland notes. Douthat architects and technicians would have had access to the earlier portfolio version. 

    Good defined the parks’ rustic design style as one that “through the use of native materials in proper scale, and through the avoidance of rigid, straight lines, and over-sophistication, gives the feeling of having been executed by pioneer craftsman with limited hand tools.” The use of native logs and stone and the avoidance of straight lines helped the park buildings to blend in. Shades of brown helped, too; greens, Good noted, could rarely be matched successfully to the colors of foliage around them. Virginia cabins appear among the examples in the 1935 and 1938 volumes, including a Douthat cabin that Good praises as “a fine example of [a] vacation cabin, content to follow externally the simple log prototypes of the Frontier Era without apparent aspiration to be bigger and better and gaudier. Inside it slyly incorporates a modern bathroom just to prove that it is not the venerable relic it appears.” Good was perfectly aware that Park buildings did not just reprise, they simulated, handcrafting techniques. 

    Douthat’s cabins were built according to a handful of plans still on file at the Douthat State Park office, drafted by A. C. Barlow, an architect for the National Park Service. As Good would have wanted, they are not standardized in their details. At Douthat especially, which was the first of the Virginia state parks to be constructed, each crew seems to have worked a little differently. When I spoke to Elmore, she speculated that the CCC was still experimenting with technique. Cabin 1 has vertical logs, whereas in most of the other cabins the logs are horizontal. Horizontal won out — at parks built later, horizontal had an edge from the beginning, like VHS winning out over Beta. But I am partial to the vertical logs of cabin 1; with the logs painted dark brown and the chinking white, the effect is surprisingly graphic and modern. Cabin 1 also has an additional bedroom wing, not represented on any of the extant blueprints at the park office, which allowed space for a separate dining room. It’s as if someone on the crew building it decided to go all out. Even the chinking between the logs bears the imprint of the workers’ choices. Chinking is a technique for sealing the substantial gaps between stacked logs in log cabin construction. Crews hammered together a network of scrap wood and nails in the gaps, then sealed them up with a mud mortar. The wood and nails stabilize the mud in the way that rebar stabilizes concrete. Where the chinking is chipped away, you can see that some crews made methodical grids of nails, and others crazy hodgepodges of whatever was lying around. The worker’s signature lies in the rebar under the mud.

    The cabins, built to restore us, are themselves being restored now. I had been going to the park for about a year before I met the architects in charge of the current historical renovation. In August I met Greg Holzgrefe, architect for the Virginia Department of Conservation and Recreation, for a tour of the CCC-era cabins under renovation. The work is extensive: in the cabins we entered, I could see that little was left besides the log walls, the original doors, and the hearth. But there isn’t much more to the cabins than this, anyway, and the kitchens and bathrooms were nothing to save, the products of a renovation sometime in the twentieth century whose date no one seems to be quite sure of, maybe the 1970s. The kitchens and the bathrooms had been drywalled at that time, leaving the logs and chinking intact behind. Now, with the drywall stripped out, you can see that the chinking was covered in graffiti in many cabins: family-friendly stuff mostly, like “If you think THIS place is the pits, you have never slept in a tent,” and “The Dawsons” or whoever, with the sequential dates of annual visits inscribed below.

    When I met Greg, he drove me to the work site in a van that bore the signs of transporting paperwork and plans between the two parks where he is managing renovations of CCC-era cabins right now, Douthat and Fairy Stone State Park. He is tall and walks pitched forward and with a hitch in his step, but quickly, back pain being among the things there isn’t time for. The trials of his job bear some resemblance to the trials of a homeowner managing contractors, scaled up. Things are always coming up that no one predicted, and Greg has to decide what to do. He showed me a spot behind a former kitchen wall where someone with a drill bit of around six inches in diameter had, for some reason, scored three overlapping circles about an inch deep into one of the original logs in the exterior wall. Probably it had happened in the last renovation. Maybe they were going through a surface board and didn’t stop. Now, without drywall over the logs, there wouldn’t be any hiding it. “That’ll be an RFI,” Greg said resignedly. RFI means “request for information”; it is the form that the contractors, Thor Construction, fill out when there is a question with no right answer. Greg will have to choose which of the wrong answers to set his signature beneath. He answers to the General Assembly — eventually and indirectly, but it’s a weight.

    When I first started coming to these cabins, I didn’t know if anyone cared about them as historical structures — the drywalled ‘70s kitchens suggested not — but someone does. Greg showed me how they are stripping all the drywall away from the kitchens to expose the old logs. He won’t let them cover up any windows; the cabins are dark enough as it is. New building codes mean some of the porch rails, now a perfect height for sitting and looking out, have to be made higher — too bad. Drainage has to be addressed. In many cabins, the base log has rotted from years of water running through its crawl space to get from the mountains to the lake below. Jeff Stodghill, the outside architect who drew up the plans for the renovation, has used oak timbers to replace these rotted logs. They lift the whole cabin up and then slide the new timber in. I saw some of these new timbers at the cabins under renovation: a little more square than the originals, but like them marked with an ax. The modern timbers are machine planed, but Stodghill instructed the contractors to hack them at random along the length, to make it right. 

    When I visit in winter, it is quiet and I find myself staring at patterns: the psychedelia of the hottest coals at the bottom of the fire, or the ax marks in the logs. Jeff has a theory about these ax marks — the original ones. He can’t know for sure, but he thinks there was a sawmill at the park in the 1930s. That could conceivably mean that the crews took smooth-sawn logs and put ax marks back in, to make the cabin look handmade, which it was. Whether or not Stodghill is right, it is clear from Good’s pattern book of 1935 that the designers and workers of the New Deal era were looking back as much as we are, as invested as we might be in a hand-hewn past, and as convinced as we are that it was already gone.

    Logs and chinking do not insulate well; Greg won’t be able to do much about that. When I was there in January last year, it got down to twenty degrees at night — cold enough that even with the heat on, my dog’s water bowl iced over in the kitchen. So did the kitchen pipes, which pass along the exterior wall from the crawl space. Some half-hearted attempts have been made over the years to insulate the crawl spaces, one result of which is the better insulation of squirrel nests in the vicinity. At First Landing State Park, I once saw a trail of pink fiberglass leading from the crawl space to a pink-tufted squirrel nest high up in a nearby loblolly pine. At twenty degrees I should have had the water dripping all night, but I didn’t. I stopped by the camp store to tell them about the pipes. The ranger said the guys would be there soon. They had to unfreeze the pipes in several cabins and then reload all the firewood stations and then they would be there. I thought my best hope for the next night was to build the most scorching fire I could and keep it going for the six hours that remained until dark, so I bought more firewood and went home. There was a foot of snow cover down low near the cabins — no way to collect kindling — but my dog and I went up higher on the mountains where the sun hit longer every day. I found some dry pine twigs held by chance off the ground and brought those home.

    One of the guys appeared after a while with his blowtorch. I had boiling water next to the pipes under the sink. He didn’t think that would do much. He took the plywood off the crawl space access from outside, got a chair, and sat there flaming the pipes. He had been blowtorching pipes all morning. “The way I do dislike these old cabins,” he said to me politely. I went in to build a fire for my old dog, who was sleeping on the couch. I had put him through a lot of hiking. “He’s living,” the ranger said. After a while, the kitchen sink hissed and spluttered. “You got it!” I yelled. Fellow feeling existed, I think. I hadn’t failed at building a fire in front of him. He liked my dog. I thanked him and he drove off down the icy road, to blowtorch something else.

    The state-park picturesque requires much patient management on the part of rangers. They split a lot of wood at the logging areas inconspicuously located off-trail. In the summer, they look away politely when dogs swim in the no-dog area. I asked about vernal pools. These are ponds that appear only in spring, and they are a common breeding area for newts and salamanders. Katie Gibson told me there were some in the park, but they don’t tell anyone where they are, “for obvious reasons.” I think the rangers might be fighting some sort of cold war with the beavers, over whose dam will be upstream from whose, but I don’t have a high enough security clearance to know for sure. 

    Then there are the bears. Come summer the Lakeshore Restaurant at the camp store has fried catfish sandwiches. The catfish doesn’t come from the lake, but even so. I was there having my catfish on the deck overlooking the lake when a group of half a dozen rangers I didn’t know came in and ordered burgers. Someone at the state level had sent them a pretty annoying bear-related email. The sender seemed to be some young optimist in middle management. He wanted them to email him every time a bear sighting occurred. I ask you. The problem is a bear sighting doesn’t fit in a spreadsheet any better than any other thing fits in a spreadsheet. Was it aggressive? Was it at a camp? Was it a mother with cubs? These differences matter. Plus they could not email; they were busy mediating. “Go up to a guy with a fifty-seven-foot trailer, sir, could you maybe take down your bird feeder?” a ranger deadpanned. That bear sightings had increased was public knowledge: the usual signs reminding you that you are in “bear country” had been augmented with notes about “credible recent bear sightings.” Bear level orange. I asked Elmore and Gibson about it. They turned official. “It’s been much better recently,” they said.

    Nature is not gone where it is managed. It must be said for the CCC’s form of conservation, whose instrumentalism would have struck Muir as blasphemous, that it has after all served some of his ends. The state-park picturesque is unlike the sublime, in that it affords very few experiences of terrifying vastness. But a man-made lake is often about as good as a forbidding mountain when it comes to leaving room for animals. Much of the time, no one is on the trails. The animals have crept back after the flood. The bears like bird feeders just fine. Much of Douthat State Park lies downstream from the dam, along a single paved road that follows the low land. The fungi and plant life on some trails heading up into the mountains from the road give me the feeling of having grown by infinitely slow accretion. 

    If you don’t already know, I’m not sure I can convince you that a forest formed slowly is different from one where kudzu and garlic mustard spread out monoculturally. Of a place where a lake was created naturally when stones fell and dammed a creek, Muir wrote that “gradually every talus was covered with groves and gardens.” It was just as if every boulder were “prepared and measured and put in its place more thoughtfully than are the stones of temples.” On these less disturbed trails, I am sure I can see the difference in mosses, lichen, and fungi. Many different species in these groups grow together, mushroom fruiting bodies popping up after rain with shreds of moss still clinging to their caps. Not every inch of the forest has been manhandled. Whippoorwills, their song strangely brash and mechanical, still sing on June evenings. The damming of a lake for what were called “recreational purposes” cannot have made a difference to them, except in giving them this tract of land to nest on, where many visitors do not know or care about them. In that sense, the difference it made was existential. 

    We make dams, which destroy, but also harbor, life. And we are not the only tinkerers. Dams large and small are everywhere. The beavers, whose lodge is opposite the beach, would like to dam the creek a ways upstream from the CCC dam. The rangers would not like them to. Beavers are a native animal, not to be interfered with. Still, some forms of interference are going on. Many tree trunks are wrapped in chicken wire. The adversary is a worthy one; he commands respect. Near Fairy Stone State Park, there is a US Army Corps of Engineers dam on Philpott Lake. At the observation point above the lake, you can find the Corps’ pamphlets about wildlife. According to the beaver pamphlet, “they will search, just like an engineer, for the best location on a stream to build a dam.”

    That’s us: engineers. Meddlers. In June, the annual peak of amphibian life, you see red-spotted newts in the hundreds, hanging motionless in the shallows of the lake. A red-spotted newt is a marvel: a four-inch swimming dragon with vermillion spots on slick skin the olive color of lake water, long-limbed and sinuous, with a powerful tail. Kids like to dam up the sand at the swimming beach and keep newts in the hot, muddy pools. Some bring aquariums to stock, which is not allowed. Once I watched a kid’s plastic battalion bivouac along the edge of the artificial lake that he had built. Large, round-faced, and unmistakably decent, he mothered his tanks, curving his attention around them to protect them from harm. Ranging around near him was a scrawny friend, in whom rage had somehow settled. He had picked up a big stick and was hitting things. The round friend hummed to his tanks. 

    My much smaller children wobbled over to see; they started clamping the newts in their fists and popping them in the little lake. “Gentle!” I said, writhing. “They’re fragile!” “No, they’re not,” the round kid explained. “They don’t have any bones.” Wrong, but right in spirit; we sacrifice invertebrates more readily to our sport. A fisherwoman saw my kids catching newts as she came in with her foot-long rainbow trout. “They make good bait,” she confided, careful that the children wouldn’t hear. The kids culled newts from an abundance that looked eternal, but wasn’t.

    Money, Justice, and Effective Altruism

    “In all ages of speculation, one of the strongest obstacles to the reception of the doctrine that Utility or Happiness is the criterion of right and wrong, has been drawn from the idea of Justice.” This is from John Stuart Mill’s Utilitarianism, in 1861, perhaps the most renowned exposition of the ethical theory that stands behind the contemporary movement that calls itself “effective altruism,” known widely as EA. Mill’s point is powerful and repercussive. I will return to the challenge that justice poses to utilitarianism presently. But first, what is effective altruism? 

    The two hubs of the movement are Oxford and Princeton. Oxford is home to the Centre for Effective Altruism, founded in 2012 by Toby Ord and William MacAskill, and Princeton is where Peter Singer, who provides the philosophical inspiration for EA, has taught for many years. Singer is EA’s most direct philosophical source, but it has deeper if less direct sources in the thought of the Victorian moral philosopher Henry Sidgwick and the contemporary moral philosopher Derek Parfit, who died a few years ago. Sidgwick gave utilitarianism a rigorous formulation as well as a philosophically sophisticated grounding. He showed that utilitarianism need not depend, as it did in Bentham and Mill, on an implausible naturalism that seeks to reduce ethics to an empirical science. 

    Parfit was strongly influenced by Sidgwick, as indeed is Singer.  Parfit’s Reasons and Persons was important for several reasons. When it was published in 1984, moral and political philosophy was under the influence of John Rawls, whose Theory of Justice had appeared in 1971 in the wake of the civil rights and other social justice movements. Rawls’ notion of “justice as fairness” provided the first systematic alternative to utilitarianism and a seemingly persuasive critique of it.  Utilitarianism, Rawls argued, did not take sufficiently seriously the “separateness of persons,” since it allowed tradeoffs between benefits and harms that we are content with within an individual life — deferring gratification for future benefit, for example — and applied them, unjustly, across an aggregate of lives. It allowed harms to some to be weighed impersonally against benefits to others, and so treated individuals as though they were simply parts of a social whole, analogously to the way we regard individual moments of our lives. But Parfit argued, on sophisticated metaphysical grounds, that personal identity is not the simple all-or-nothing thing that Rawls’ objection presupposed. And he argued persuasively that utilitarianism can be defended against a number of other challenges that its critics had raised from the perspective of justice. Parfit also showed how taking utilitarianism seriously leads to a number of important questions concerning our relation to the future in the long term.

    Singer’s main contributions have been in what is called, somewhat deprecatingly, “applied ethics.” He has influentially argued on broadly utilitarian grounds that we have significant obligations to address global poverty and to avoid the inhumane treatment of nonhuman animals. Singer’s Animal Liberation, published in 1975, has spawned a massive increase in vegetarianism and attention to animal welfare. And his essay “Famine, Affluence, and Morality,” which appeared in 1972, may be the most widely assigned article in college ethics courses. Singer is also the author of The Most Good You Can Do: How Effective Altruism is Changing Ideas About Living Ethically.

    Ord and MacAskill are from a younger generation. Their role has been to put Singer’s conclusions into practice by founding and running the Centre for Effective Altruism and the Global Priorities Institute, also at Oxford, and by attracting a large number of “the best and the brightest” to EA. MacAskill is the author of Doing Good Better: How Effective Altruism Can Help You Help Others, Do Work That Matters, and Make Smarter Choices About Giving Back. And Ord plays a lead role in the organization Giving What We Can, whose members pledge to donate at least ten percent of their income to “effective charities.” He and MacAskill are also central figures in the development of “long-termism,” a branch of effective altruism which argues that we should focus more on benefits and harms in what Parfit called the “farther future.”

    The moral and philosophical idea that drives much of “effective altruism” comes from a famous example — known as “Singer’s pond case” — that Singer discusses in his essay. Imagine that you are walking past a shallow pond in which a child is drowning and that you can save the child at the cost of getting your pants wet. It seems uncontroversial to hold that it would be wrong not to save the child to spare your pants. Singer argues that the world’s poor are in a similar position. They are dying from famine, disease, and other causes, in some cases literally drowning from the effects of climate change. Analogously, we in the developed world can address many of these threats to human (and other animal) life and well-being at relatively little cost. It seems to follow, therefore, that we are obligated to do so and that it would be wrong for us not to do so. Singer’s larger teaching is that we are obligated on roughly utilitarian grounds to absorb as much cost as would be necessary to make those whom we can benefit no worse off than we are. Yet we do not have to draw such a radical conclusion to be convinced by Singer’s analogy that we have very significant obligations to help address global poverty. And it may well be that Giving What We Can’s minimum standard of ten percent of income is morally appropriate.

    Let us begin by examining more closely the relationship between effective altruism and utilitarianism, and what together they assert. MacAskill offers the following definition:

    Effective altruism is about asking, “How can I make the biggest difference I can?” And using evidence and careful reasoning to try to find an answer. It takes a scientific approach to doing good. Just as science consists of the honest and impartial attempt to work out what’s true, and a commitment to believe the truth whatever that turns out to be, effective altruism consists of the honest and impartial attempt to work out what’s best for the world, and commitment to do what’s best, whatever that turns out to be.

    This definition has a number of distinct elements, and it is worth analyzing them more closely. First, it posits a commitment to bringing about “the most good you can,” to quote the title of Singer’s book. It is not about simply doing good or even doing enough good. It is about doing the most good. Second, although “good” can mean different things and be applied to different kinds of objects, EA counsels bringing about the best outcomes. It recommends the action or policy, of those available, that would have the best consequences overall. But outcomes can be ranked in different ways, from different perspectives, and with different criteria or standards. Third, effective altruism is committed to bringing about what is “best for the world” as opposed to for any individual or particular society. It is an “impartial” theory. But “best for the world” can also mean different things. In one “impersonal” sense, something can be thought to be good (or best) to exist in the world, independently of whether it benefits or is good for any individual or other sentient being. This is the sense that G. E. Moore made famous in Principia Ethica. Moore thought, for example, that beauty “ought to exist for its own sake,” regardless of whether it is experienced or appreciated. Of course, Moore thought that it is much better for beauty to be appreciated experientially, but he thought it still can have intrinsic value even if it is not. (He was a great defender of intrinsic value generally.) Perhaps more plausibly, some consequentialists who follow Moore hold that significant inequality is a bad thing intrinsically, in addition to the disvalue of the bad things that those who are worse off suffer.

    Impersonal good is not, however, the sense of “good” with which utilitarianism and effective altruism are concerned. They prescribe doing whatever would be best overall for beings in the world. This is important: it is what distinguishes effective altruism and utilitarianism from forms of consequentialism that reckon the goodness of outcomes in terms of impersonal goodness that does not consist wholly in benefits to individuals. And this brings us to the fourth element of effective altruism, namely, that it aggregates benefits, and harms — costs and benefits in welfare terms — across all affected parties. The “most good” is the most good to individuals, on balance, aggregated across all who would be affected by alternative actions in any way — wherever they might be (hence EA’s global reach), and whenever they might exist in time, no matter how far in the future their being benefited or harmed is from our actions today (hence EA’s long-term view). The offshoot of effective altruism known as long-termism is the view that short-term benefits and harms will almost always be swamped in the longer term and so are much less relevant to what we should do here and now than we ordinarily suppose. This is a claim with disruptive implications for the practice of ordinary kindness and assistance, which is often immediate and local; and it is important to see how such a claim is an apparent consequence of the doctrine of effective altruism. 
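
    Stated schematically (the notation is mine; nothing this explicit appears in the movement’s popular statements): if A is the set of actions open to an agent and w_i(a) is the net welfare that action a brings to individual i, wherever and whenever i lives, then the doctrine directs us to choose

        \[ a^{*} \in \arg\max_{a \in A} \sum_{i} w_{i}(a) \]

    with the sum taken over every being the action affects. Long-termism is then the further empirical claim that the far-future terms of that sum usually dwarf the near ones.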

    Finally, the fifth premise of EA is the stress on “evidence and careful reasoning.” Utilitarianism is frequently characterized by its emphasis on empirical — or as Mill called them, “inductive” — methods. This is also an important theme in Bentham’s argument for the principle of utility. Mill contrasts inductive, empirical methods with the “intuitive” approach of giving credence to moral intuitions based on emotions — for example, to a sense of obligation, or to attitudes such as blame, guilt, and resentment, which the mid-twentieth century Oxford philosopher P. F. Strawson called “reactive attitudes.” MacAskill highlights this aspect when he refers to Joshua Greene’s neurological research that contrasts utilitarian ethical judgments, which Greene shows to be associated with parts of the brain involved in reflection and reasoning, with intuitive and non-utilitarian — sometimes known as “deontic” — judgments that are associated with regions of the brain implicated in emotions.

    The question of what parts of our mind and brains are involved in moral judgment may seem to be entirely epistemological, a matter only of how we come to know what we should do morally and which actions are right and which are wrong. But this is not so, as Mill himself appreciated. It concerns features of our moral concepts themselves and, therefore, what moral right and wrong themselves are. 

    To see why, we should note first that utilitarianism, and consequentialism more generally, originated as theories not about what is intrinsically good and bad, but about what is morally right or wrong. Mill’s Utilitarianism begins, indeed, with the declaration that “few circumstances . . . are more significant of the backward state in which speculation on the most important subjects still lingers, than the little progress which has been made . . . respecting the criterion of right and wrong.” What makes someone a consequentialist is not that they hold some particular theory of the good. Consequentialists differ widely on what kinds of outcomes are intrinsically good or bad. Granted, utilitarians such as Mill and the effective altruists think that what makes outcomes good is that they concern the well-being, good, or happiness of human and other sentient beings. But critics of utilitarianism, like me, could stipulate agreement with utilitarians on this, or any other, theory of the good, and still be in fundamental disagreement about what we morally should do.

    The issue between utilitarians and their critics is not about the good, but about the relation between the good and the right. Utilitarians hold that what makes actions, policies, practices, and institutions morally right or wrong is that they bring about the greatest possible net good to affected parties. Their critics do not deny that the goodness of consequences is among the factors that make actions morally right or obligatory. As the most famous recent critic of utilitarianism, John Rawls, put it, to deny that would “simply be irrational, crazy.” What they deny is that it is the only relevant factor. This returns us to Mill’s observation that the utilitarian idea can conflict with demands of justice. Rawls agrees with Mill’s diagnosis that a significant obstacle to “the reception” of the principle of utility as a “criterion of right and wrong” concerns its insensitivity to justice. I agree with Rawls about this: like utilitarianism more generally, effective altruism fails to appreciate that morality most fundamentally concerns relations of mutual accountability and justice.

    Ironically, this is a point that Mill himself recognizes, and his appreciation of it creates a tension between the form of utilitarianism that he advances at the beginning of Utilitarianism and the view he seems to endorse by the end. The dramatic shift occurs in its fifth chapter, which begins with Mill’s noting the tension between utilitarianism and justice. After discussing different features of the concept of justice, Mill makes the following deeply insightful point. 

    We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow creatures; if not by opinion, by the reproaches of his own conscience. This seems the real turning point of the distinction between morality and simple expediency. 

    In many cases, of course, we can think an action wrong without thinking that it should be punished. Making a hurtful remark might be an example of that. (Though in our increasingly censorious society, it might often be an example of just the opposite.) Mill recognizes that this is so, but says that in such cases we are nonetheless committed to thinking that conscientious self-reproach would be called for. The fundamental point, as I have understood it, is that it is a conceptual truth that an act is wrong if, and only if, it is an act of a kind that would be blameworthy to perform without excuse.

    When Mill first puts forward his “Greatest Happiness Principle” in Utilitarianism’s second chapter, he says that acts are right “in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.” This is frequently interpreted as what philosophers call “act utilitarianism”: an act is morally right (not wrong) if, and only if, it is one of those available to the agent in the circumstances that would produce the greatest total net happiness (considering all human or sentient beings in the very longest run). But once Mill has recognized moral wrongness’s connection to culpability in his fifth chapter, his view seems to shift in the direction of what is called “rule utilitarianism”: an act is morally right (not wrong) if, and only if, it is consistent with rules whose acceptance by society as standards for holding people accountable (through guilt and moral blame) would bring about the greatest total net happiness in the given situation. This fits with the rule-utilitarian theory of justice and rights that Mill finally offers, enabling him to hold that unjust actions are morally wrong on broadly utilitarian grounds, namely, rule-utilitarian rather than act-utilitarian grounds.

    If Mill’s conceptual thesis is correct, as I believe it is, then there is a connection between the concepts of moral right and wrong and accountability, and therefore with conscience. What we call conscience is the capacity to hold ourselves accountable through the states of mind that Strawson called “reactive attitudes,” including the “sense of obligation” and guilt. To be capable of being a moral subject, that is, of being a person with moral duties and acting rightly or wrongly, one must have such a capacity or moral competence. We exercise this capacity when we feel guilt or have the attitude of blame towards others. Blame is essentially the same attitude as guilt, except that the latter necessarily has oneself as its object. Guilt is self-blame. Similarly, a sense of obligation is essentially the same attitude, only felt prospectively, before acting, rather than retrospectively.

    Strawson lays out his account of the role of reactive attitudes in moral accountability in “Freedom and Resentment,” which has been called “perhaps the most influential philosophical paper of the twentieth century.” What Strawson saw was that we hold reactive attitudes such as blame, guilt, and resentment from a distinctive perspective, the standpoint of relating to someone. Strawson calls this the “participant” point of view, since it occurs within an implied relationship to another person. I call it the “second-person standpoint,” since to formulate what one thinks or feels from that perspective, one must use second-person pronouns. “What were you thinking?” “You can’t do that to me.” And so on.

    Strawson convincingly argues that attitudes such as resentment and blame implicitly address “demands” to their objects. Not naked demands — reactive attitudes do not attempt to force or to manipulate — but putatively legitimate demands. And this requires anyone who holds a reactive attitude to presuppose that they have the authority to make the demand, and that the person of whom the demand is made can recognize the demand’s legitimacy and comply with it for that reason. We call the latter capacity “conscience.” Through conscience we address demands to ourselves that we feel to be backed by the authority of what Strawson calls “the moral community” or, as we might also say, of any person, including oneself, as its representative. 

    Attitudes such as blame and resentment are unlike third-personal attitudes such as contempt and disdain in that they come with an implicit RSVP. They call for response and reciprocation, for their object to acknowledge their wrongdoing and hold themselves accountable. Moreover, the presupposed underlying reciprocity goes in both directions. They both demand and implicitly give respect. They are eye-to-eye attitudes.

    This means that moral judgments of right and wrong, unlike other ethical judgments, for example, of the goodness or badness of outcomes, necessarily implicate human relationships. Many of our most important moral obligations are owed to other persons or sentient beings, and the very ideas of moral right and wrong entail accountability to every person or member of the moral community (where “moral community” refers not to any actual society, but to a presupposed authoritative impartial perspective from which moral demands are issued or addressed to us as moral agents). The point is not that we necessarily assume that such a community actually exists, but that we necessarily attempt to think, feel, and have attitudes from such an impartial second-personal perspective.

    The idea that morality is fundamentally about mutual accountability and respect could not be farther from the ethical vision that underlies effective altruism. Altruism is concerned, by definition, with beneficial outcomes, and it conceives of ethical action entirely in instrumental terms. According to MacAskill, “the key questions to help you think like an effective altruist” are: “How many people benefit, and by how much? Is this the most effective thing you can do? Is this area neglected? What would have happened otherwise? What are the chances of success, and how good would success be?” EA asks what states of the world are such that the people (or other sentient beings) existing in those states are better off in the aggregate than those who would exist in the states that would eventuate if a specific action, policy, practice, or institution were pursued or established. Once we have stipulated what the answer to this question is — and, as we have noted, there is no reason why an opponent of utilitarianism or effective altruism must disagree with that stipulation — the remaining moral question of what one should do is, for EA, an entirely empirical question, one to be answered using the methods of natural and social science.

    By contrast, the questions of justice and moral right and wrong that, for the critic of utilitarianism and effective altruism, are left entirely open and unsettled even by an agreed stipulation of welfare outcomes, are not empirical or scientific questions. They are irreducibly normative questions of what it would be morally right or wrong to do (given the stipulation), and these moral questions necessarily presuppose relations of mutual accountability and respect in the background. 

    One way to see this is to consider what is called paternalism. As it is defined in debates between utilitarians and their critics, paternalism consists in restricting someone’s liberty or usurping their authority of autonomous choice on the grounds that this would be better for them in welfare terms. It is a core feature of an order of justice, mutual respect, and accountability that every person has a right of autonomy, understood as the authority to make their own choices and live their own lives, so long as that does not violate the similar rights of others. (This does not entail anything like libertarianism, as the example of Rawls’ theory of justice illustrates.) Respect for another’s autonomy, however, can conflict with an altruistic desire to promote their well-being, since we often make choices to our own detriment (even, indeed, when we choose altruistically to benefit others!). 

    Suppose, for example, that a friend has their heart set on pursuing a career as a golf pro, which you can see they are massively unsuited for, and that you can tell that pursuing it would make them miserable. As a friend, you might have standing to give them realistic feedback. But suppose you do voice your concerns, and still your friend persists. You might be in a position to undermine their plans in other less honest ways, say, by resetting their alarm so that they miss their tee time at a crucial tournament. It might be that being diverted from their chosen path is actually better for them in the long run and that they would go on to live a happier, more satisfying life doing something else. It seems clear, however, that subverting your friend’s project altruistically would wrongfully violate their autonomy. It would be an injustice to them and would violate respect for their dignity as a person.

    According to the doctrine of effective altruism, the fundamental ethical relation is benefactor to beneficiary; but from the standpoint of equal justice, the fundamental ethical relation is mutual respect. Being respected as an equal is an important part of our well-being, of course, but altruistic concern will coincide with respect only when being respected most advances our well-being, all things considered. And that is frequently not the case. Often we want to make choices that do not best promote our welfare, and often for good reason (including sometimes for good altruistic reasons). 

    Moreover, altruism can be morally problematic in other ways, although I am not claiming that it must be so. For example, effective altruists argue that often the way we can do “the most good we can do” is by donating to “effective charities.” A significant element of EA is the ranking of charities — “charity evaluation,” Singer calls it — by how effectively they turn donations into beneficial outcomes. Yet the term “charity,” like the term “pity,” signals the potential in altruism to insinuate a disrespectful relation of superiority to a pitiable “charity case.” The perspective of charity, like the perspective of pity, is not the eye-to-eye relation of mutual respect; it comes, like God’s grace, from above. “The greatest and most common miseries of humanity,” Kant wrote

    rest more on the injustice of human beings than on misfortune . . . We participate in the general injustice even when we do no injustice according to civil laws and institutions. When we show beneficence to a needy person, we do not give him anything gratuitously, but only give him some of what we have previously helped to take from him through the general injustice . . . . Thus even actions from charity are acts of duty and obligation, based on the rights of others.

    In couching the meeting of human needs in terms of altruism and charity, EA risks treating its beneficiaries as inferior supplicants rather than mutually accountable equals. It is little wonder that there is so much resentment of the global north in the global south.

    In On Revolution, Hannah Arendt distinguishes between charity and pity, on the one hand, and solidarity and compassion, on the other. She juxtaposes Jesus’ compassion with the “eloquent pity” of the Grand Inquisitor in Dostoyevsky’s The Brothers Karamazov: “The sin of the Grand Inquisitor was that he, like Robespierre, was ‘attracted toward les hommes faibles’ … because he had depersonalized the sufferers, lumped them together into an aggregate — the people, toujours malheureux, the suffering masses, et cetera.” By contrast, Jesus had “compassion with all men in their singularity.” Whereas pity views its object as a faible in a low and abject condition, thus essentializing and “depersonaliz[ing]” them, compassion, solidarity, and respect regard their objects “in their singularity” as individuals. Even the term “the global poor” risks condescension, since it signals not a respectful relation of mutually accountable collaboration to establish more just relations as equals, but a regard for someone as an object of charity, which, by definition, no one has the standing to demand.

    To be clear, I am not claiming that Singer and MacAskill, or other proponents of effective altruism, are prone to depersonalizing those whom they seek to benefit, or that their help is necessarily condescending, self-aggrandizing, or arrogant. Neither am I claiming that what we call “charities” necessarily present themselves or are received as superior to their beneficiaries. What I am saying is that the benefactor/beneficiary relation carries these risks, that it can be inconsistent with mutual respect as equals, and that relating to others on terms of mutual respect and accountability is both required by justice and the antidote to these risks. Nor am I claiming that organizations such as the Malaria Consortium, currently among the most highly rated by GiveWell — a “charity evaluator” endorsed by effective altruists — are not doing important work and meeting significant needs that would otherwise go unmet. I am not saying that their work is essentially paternalistic or unwelcome. Perhaps being characterized as a “charity” is just unfortunate branding. I believe that the Malaria Consortium is worthy of our support and have supported it myself. My point is that meeting human need should be pursued from a collaborative perspective of mutual respect and justice rather than that of altruistic charity.

    “Justice,” Rawls says, “is the first virtue of social institutions.” Whether justice is served is never simply an aggregative matter, either an accumulation of just actions of equal respect, or, even less, of aggregated acts of effective altruism. Whether the most important human needs are adequately met is generally a function of whether the society in which they occur is justly organized and of the society’s level of economic development. The economist Angus Deaton has rightly remarked that development

    is neither a financial nor a technical problem but a political problem, and the aid industry often makes the politics worse. The dedicated people who risked their lives to help in the recent Ebola epidemic discovered what had been long known: lack of money is not killing people. The true villains are the chronically disorganized and underfunded health care systems about which governments care little, along with well-founded distrust of those governments and foreigners, even when their advice is correct.

    We do not have to accept the general skepticism of foreign aid’s benefits that Deaton expresses in The Great Escape to agree that EA’s framework is often insensitive to crucial aspects of the political contexts in which human need occurs, and therefore to critical issues of justice. Often the best thing we can do to support those in need is to do whatever we can to help them gain more political power and greater voice, both intranationally and internationally. Deaton argues that economic development depends not on donated aid but on investment, and that the former can often drive out the latter. This is by no means always the case, however. It seems clear, for example, that some forms of aid, like that which targeted HIV in Africa, can make a critical difference that other attempted solutions cannot. PEPFAR, or the President’s Emergency Plan for AIDS Relief, begun by George W. Bush, has been calculated to have saved twenty-five million lives.

    The crucial point is that the requisite support should come not through altruistic desires for welfare outcomes but, as Kant says, from a concern to do justice, and that this support be given in ways that collaborate with aided individuals, show mutual accountability and respect, and help to empower them in their own social and political context. Climate change offers a salient example. The EA model of individuals doing “the most good” they can for other individuals, typically by contributing to highly rated charities that direct funds to maximally promote individual well-being, seems especially ill-suited to deal with the scale and the nature of the problem. Amartya Sen showed that famine is almost always fundamentally a political problem, and this is even more true of climate change. Only concerted collective and political policies and actions at all levels, nationally and internationally, can reduce carbon emissions to the necessary levels. Proponents of effective altruism implicitly recognize this. A talk given by a researcher for Founders Pledge and featured on effectivealtruism.org argues that the most effective thing individuals can do is to contribute to the most highly rated “climate charities.” Its top charity, the Clean Air Task Force, characterizes its work as “advanc[ing] the policies and technologies necessary to decarbonize the global energy system.” Surely it is obvious that this is impossible without concerted political action.

    What is the right ethical lens through which to view the challenges of climate change? Here Kant’s admonition that “the greatest and most common miseries of humanity rest more on the injustice of human beings than on misfortune” is especially apposite. If economic development has been carbon-driven, then it is a virtual tautology that the developed world bears greater responsibility for the challenges of climate change. Any just solution will require international cooperation on terms of mutual respect that recognize these different levels of responsibility as well as the unjust power differentials that have resulted from differential development. The pressing ethical questions are not simply how we can do the most good. They are questions of justice.

    A particularly noteworthy aspect of the movement of effective altruism is Singer’s and MacAskill’s assertion that for a good number of highly talented individuals, the best thing they can do to promote the most good is to find the highest-paying employment so that they can donate much of their income to the most effective charities. Singer’s The Most Good You Can Do discusses a number of examples of college graduates who take high-paying jobs in the financial sector, live simply, and contribute a large percentage of their income to effective charities. Such a person might have gone to work for the charity themselves, or put their talents to work in some other altruistic- or justice-focused way, but Singer quotes MacAskill as arguing that doing so would have produced less net benefit, since donating a fraction of their salary to employ someone producing equivalent good would still leave a significant amount to be put toward other good purposes.

    I see no reason to doubt the motivation of those who pursue this path. There is, I agree, something undeniably admirable about the individuals Singer describes, who live modestly, are focused on the welfare of others, and do the most they can to advance it. Yet from my perspective as a professor of philosophy at Yale, I find the prospect of encouraging students to pursue this path of high-minded riches deeply depressing. Institutions such as Yale are already knee-deep in helping to reproduce an extremely unjust political and economic system and in class formation. As things stand, about thirty percent of their graduates take positions in consulting and the financial sector, with only a tiny percentage going to badly needed but talent-starved fields such as public K-12 education. I do not remember the last time I talked to a Yale undergraduate who wanted to pursue that as a career path, although some take temporary posts in programs such as Teach For America.

    But wouldn’t it be wonderful, a proponent of the MacAskill/Singer argument might reply, if more of the people who went the finance and consulting route were like the EA financial analysts whom we champion? I agree that if we hold fixed the percentage of graduates pursuing that path, then it is better that more of them have the altruistic aspirations that MacAskill and Singer describe. But I worry about encouraging graduates to pursue the route of “earning to give” for two reasons. First, familiar psychological processes of group affiliation, emotional and attitude contagion, motivated reasoning (rationalization), and accountability to those we live and work with, not to mention the desire to fit in with our associates and have their approval, make it likely that many who begin with such aspirations will tend over time to lose them and become more like their non-EA colleagues. As the example of Sam Bankman-Fried has shown, the nobility of effective altruists can easily be degraded by massive profits: the doctrine can serve as a cunning alibi for the rapacious accumulation of wealth. Second, especially at the current moment, when meritocracy’s credentials are wearing exceedingly thin, statistics such as the ones I just cited do little to inspire confidence in highly selective universities. “This is what you are selecting students for?” the universities might reasonably be asked. After all, students do not suddenly change their interests and plans at the end of their college careers. Their undergraduate lives are shaped by the fact that almost a third of their number hope to go into finance or consulting.

    Elite universities do not just complacently accept these depressing trends; they actively contribute to them by pursuing massive donations and lionizing their biggest donors. What are their students to think when they live, eat, and study in buildings funded by titans of finance? Yale’s current development campaign is titled For Humanity. Despite our fine words, however, students can hardly be faulted for wondering whether they are not living by the university’s real values when they join the “army of the thirty percent.” To do so, however, is to be condemned to a life lived without engagement with, and therefore, meaningful accountability to, the overwhelming majority of their fellow citizens and human beings. Even those who maintain their allegiance to EA while working in offices on Wall Street never get to see how the people they seek to benefit actually live or get to live or work with them.

    The pursuit of justice, in contrast with effective altruism, seeks accountable relationships with others on terms of mutual equal respect. Universities should be “for humanity” not in the sense of seeking merely to benefit themselves. That way lies the self-aggrandizing myth of superiority that can mask massive injustice. They should seek, and actively encourage their students to seek, equal justice for all. That is a path that would earn them, from the great numbers of people who are now alienated from them and their members, not resentment and self-protective contempt but equal respect.

    Albert Memmi and the Problem with Postcolonialism

    The Franco-Tunisian Jewish writer and social philosopher Albert Memmi died in the spring of 2020, having lived a full century, at least half of which he devoted to developing an arc of thought with great relevance to some of the most vexing questions now facing the societies of the Middle East, the region where he was born, although he eventually found his intellectual and literal home in the West. We need him now.

    Memmi was born in 1920 in the Jewish quarter of Tunis, in what was then the French protectorate of Tunisia. The eldest son of a poor Italian-Tunisian saddlemaker and an illiterate mother of Bedouin Berber heritage, he spoke Judeo-Arabic at home and studied Hebrew in a traditional religious school. Ambitious and studious, he won a scholarship to the most prestigious French high school in Tunisia, and went on to study philosophy at the University of Algiers. Forced to return to Tunisia after Vichy France expelled Jews from public institutions throughout the mainland and the colonies, Memmi was interned briefly in a labor camp after the Nazi occupation of Tunisia in 1942. After liberation from Nazi rule in May 1943, he decided to continue studying philosophy at the Sorbonne in Paris, where he became deeply engaged in Jewish intellectual life and thought and embarked on a life of letters. Returning to Tunisia in 1949, he taught philosophy and literature in a high school, and three years later he helped to found the Centre de Psychopédagogie de Tunis, where he studied the psychological dimensions of colonial oppression. After Tunisian independence in 1956 he returned to France, teaching in a number of universities and eventually being appointed, in 1970, professor of sociology at the University of Paris.

    Memmi is remembered today chiefly for his research and his novels about the psychological impact of colonialism, which he produced when he was in his thirties, in the 1950s. He became a hero of the anti-colonial left with his novel The Pillar of Salt, a fictionalized autobiography of his childhood in French-colonized Tunisia that appeared in 1953, and his study The Colonizer and the Colonized, which appeared four years later. Promoted in the pages of Les Temps Modernes, the leading French intellectual journal that was edited by Jean-Paul Sartre, who also wrote a preface to the book, The Colonizer and the Colonized was a study of the sociological and psychological dimensions of the dependence and the privilege created by a colonial hierarchy. 

    This “lucid and sober” book, wrote Sartre, describes the predicament of its author as “caught between the racist usurpation of the colonizers and the building of a future nation by the colonized, where the author ‘suspects he will have no place.’” (He will have no place because he is a Jew.) The Colonizer and the Colonized made Memmi a giant of anti-colonialism, along with such writers as Frantz Fanon, Leopold Senghor, Albert Camus (who also wrote a preface to Memmi’s book), and Aime Cesaire; he became one of the key figures of what later came to be dubbed the postcolonial school of thought, defined by works ranging from Fanon’s The Wretched of the Earth in 1961 to Edward Said’s Orientalism in 1978. But slowly Memmi’s thinking began to change. His later works sought to generalize the insights of his early phase into a broader sociological account of dependence, privilege, and racism. (He published a deep study of racism in 1982.) Memmi came to view racism and colonialism as instances of a more general human trait that he called heterophobia, the fear of difference, which motivates groups to dominate, to condemn, and to exclude other groups. Memmi’s understanding converged with that of thinkers such as Reinhold Niebuhr, for whom “the chief source of man’s inhumanity to man seems to be the tribal limits of his sense of obligation to other men”; and his emphasis upon the challenge of heterogeneity anticipated an important theme in contemporary social and political philosophy.

    I believe that Memmi’s work is a vital resource for making sense of contemporary failures of governance, not least in his own region, the Middle East and North Africa. A World Bank report of 1996 noted, for example, “a systematic regression of capacity in the last thirty years” in almost every country in Africa, adding the melancholy remark that “the majority had better capacity at independence than they now possess.” The Arab Human Development Reports prepared for the United Nations in 2003 and 2004 highlighted how isolated Arab countries are from the diffusion of the world’s knowledge, mentioning as an example that the number of books translated into Arabic is minuscule. (They noted that whereas Spain translates ten thousand books into Spanish a year, the same number of books have been translated in total into Arabic since the ninth century CE, and that between 1980 and 1985 the number of translations into Arabic per million potential readers was 4.4, less than 0.8 percent of the number for Hungary and less than 0.5 percent of the number for Spain.) Likewise they highlighted the widespread ignorance, and the culture ripe for conspiracy theories and irrational resentments, that result from this isolation. Study after study since the 1980s has found the Middle East and North Africa to be the most repressive region in the world, with almost all of its countries ruled by (occasionally elected) authoritarian regimes; the only exception as a liberal democracy is, for all its agonies, Israel. The same studies find that the lowest levels of human freedom in the world are in the Middle East and Africa, and that this translates into the world’s highest levels of serious armed conflict and its highest concentration of fragile or failing states.

    This is a crisis about which Memmi has a lot to teach. It is therefore a great loss that much of the world remains unaware of his contribution. The left disinherited him because his later works took positions contrary to progressive and postcolonial stances. The discerning and unsentimental eye that he trained on the internal limitations of the postcolonial societies, as they struggled and often failed to achieve the original lofty goals of independence and democratic self-governance, generated insights that admirers of his early anti-colonialist work came to deplore. Memmi’s concerns about post-independence societies went far beyond the struggles that necessarily accompanied independence. He also sought to shed light on the individual in the aftermath of independence, whose struggle for meaning could not be reduced to the political and social opposition to colonialism. The experience of colonialism and racism may have shaped the quest to belong, but Memmi put the onus on the individual to transcend them. He believed in inner emancipation, or, in the words of the nineteenth-century Zionist thinker Leon Pinsker, in “auto-emancipation.” This inner emancipation was the condition for the creation of free and functioning societies.

    Although he was not a political theorist, Memmi thought deeply about politics. He was an early proponent of a pragmatic and social democratic model of liberal nationalism. In contrast to the unrealistic utopianism of the socialist left or the Manichaeanism of the postcolonial revolt, he saw a pragmatic social democratic nationalism as the most appropriate political program for achieving individual and collective freedom within a non-utopian politics. (This was decades before academic philosophers such as Charles Taylor, Will Kymlicka, David Miller, and Yael Tamir began to adumbrate a revival of liberal nationalism.) He surveyed the possibilities and limits of the newly independent countries of the Middle East with open eyes, that is, without naivete or romanticism. In this context he refused to vilify the state of Israel. 

    If the Islamic countries of the region had shared Memmi’s positive assessment of Israel’s contributions to furthering economic development and pluralism in the region, they would have avoided years of futile and costly antagonism. Indeed, official and popular animosity to Israel’s existence remains one of the major causes of the backwardness of many Middle Eastern societies. It also contributes to the continued suffering of many Palestinians, whose intellectual and political leaders insist on making the perfect the enemy of the good. Memmi favored a progressive liberal nationalism as a model for the countries of the region; he endorsed Israel as a legitimate partner for development and peace; and he called for a culture that enables each individual to grapple freely with the meaning of their lives, alone and in community. He bequeathed a deep and rich bounty to the beleaguered peoples of the region, weighed down as they are by anger, tyranny, poverty, theocracy, and despair. And this bounty was the work of a liberal Francophile Jew who came from their own region.

    In his early nonfiction and his fiction, Memmi established himself as a keen observer of the psychological effects of societies built on the asymmetrical power and privilege that defined colonial systems of domination. In the writings of his early Third Worldist phase, from the 1950s to the mid-1960s, Memmi was still working with the standard anti-imperialist binary. Like his contemporaries Frantz Fanon and Aime Cesaire and many subsequent Western leftists, his depiction of the colonized individual was more or less essentialized: the existence of the oppressed was almost entirely defined by the oppressor. In The Colonizer and the Colonized, which remains his most famous work, Memmi claimed that all colonial societies were founded on a “pyramid of petty tyrants” whereby gradations of proximity to the colonizer and his institutions conferred privilege and feelings of superiority over those lower in the hierarchy. Such proximity might be afforded by accepting roles such as police officer, teacher, and government official in the colonial institutions — but also by acknowledging the superiority of European culture. Memmi described those at the base of the pyramid as the “true” inhabitants of the colony.

    “The colonial relationship,” he wrote, “chained the colonizer and the colonized into an implacable dependence, molded their respective characters and dictated their conduct.” Referring directly to his experiences in Tunisia, he observed that “the Jew found himself one small notch above the Moslem on the pyramid which is the basis of all colonial societies. His privileges were laughable, but they were enough to make him proud and to make him hope that he was not part of the mass of Moslems which constituted the base of the pyramid. To that end, they endeavor to resemble the colonizer in the frank hope that he may cease to consider them different from him… But if the colonizer does not always openly discourage these candidates to develop that resemblance, he never permits them to attain it either. Thus, they live in painful and constant ambiguity.” Yet Memmi never lost sight of the existential dimension, the lived dimension, of these psychological and sociological observations: “I undertook this inventory of conditions of colonized people mainly in order to understand myself and to identify my place in the society of other men.”

    The popularity of Memmi’s historical study of colonialism has obscured the fact that it was only one part of his lifelong struggle to find a place for himself amid the class, ethnic, and cultural contradictions between the “third world” and Europe. (Later he pointedly rejected the term “Global South” as too broad and ideological.) Already by the early 1950s, his struggle to come to terms with his mixed family background and his experiences growing up in a Jewish ghetto in French-colonized North Africa during and after the Second World War became the central preoccupation of his writings. Two novels from this period, The Pillar of Salt and Strangers, amply illustrate these concerns.

    The Pillar of Salt is a semi-autobiographical novel about growing up in the Tunis ghetto. An intelligent and ambitious young man named Mordechai Benillouche sees mastering French culture and language as his path out of the ghetto. He understands that he will be turning his back on his past: “I protested against everything that I saw all around me, against my parents, these tradesmen, this city that is torn apart in separate communities that hate each other, against all their ways of thinking.” He chooses philosophy over medicine as a profession because, he tells us, it was a channel through which he could rebel against everything in his social background. It was as if the abstractions of philosophy could hoist him above the fetid realities of his marginalization that he fervently wished to escape — the “sordid lanes, where the gutters ran with muddy water.” The study of philosophy and a life of writing, Benillouche (and no doubt Memmi) imagined, would be the salve for his fractured soul, permitting him to embrace “this terrifying and exhausting search for one’s real identity that philosophy implies.” 

    At the same time, Benillouche is constantly reminded of the uncertainty of his quest, admitting in exasperation, “How naive it was of me to hope to overcome the fundamental rift in me, the contradiction that is the very basis of my life!” Whereas in the prologue to the novel he expresses a tentative hope — “Perhaps, as I now straighten out this narrative, I can manage to see more clearly into my own darkness and to find a way out” — at the end of the novel he confesses that “I am ill at ease in my own land and I know of no other.” Although Memmi’s long and successful career as a writer living mostly in France suggests that his own fate was better, he never shook off his ambivalence toward the discordant parts of his former attachments, particularly in relation to his Jewish identity. The novel Strangers, which appeared in 1955, symbolized this failed rapprochement in the melancholy marriage of a Paris-trained Jewish Tunisian doctor and his French Christian wife when they settle back in Tunis. Unable to manage his inner conflicts and disappointments, his feelings of resentment and alienation, the protagonist takes out his frustration on his wife, insisting that she embrace all the worst and backward aspects of the city that he himself now feels ambivalent about. He exhorts her to “see and appreciate our people in their native haunts,” but to himself (and the reader) he admits that “I too in secret was struggling with…wholeheartedly accepting this world.” Ultimately, he reflects that “I was annoyed that my wife should reveal to me my own difficulties in her person.” Memmi, in other words, knew more about identity than we seem to know now: he appreciated that it is not a monolithic and seamless dispensation, but rather is a collision of attributes and qualities that we must somehow negotiate. 

    This complexity was the gift of a difficult history. Memmi’s early work is set against the wrenching historical transformation of societies after the Second World War, when the devastated European powers allowed their colonies to move towards national independence. At the same time these writings also depict the trajectory of a young man striving for his own independence, and for a place in the context of his country’s postcolonial journey. Tunisia threw off French tutelage in 1956, achieving its adulthood, as it were, but Memmi shows how much more complicated the individual journey to adulthood can be, at least for the reflective and self-aware individuals depicted in his novels. For Memmi’s protagonists, personal independence could come only from achieving distance from, if not outright rejection of, family, tribe, religion, language, and even the civilization of the “East.”  This private search for a self cannot be divorced from the contexts of social and political power that Memmi deciphered so acutely in his non-fiction work, but his fiction shows how they operate on related but independent planes, psychologically and even spiritually, each with their own twists and obstacles. 

    The individual’s striving to find his place among his fellow humans, to take one of Memmi’s recurring themes, is a process in which authenticity and alienation, otherness and community, are simultaneously fused together and in tension. This profound unsettledness was the outcome of his bone-deep sense of displacement. It was likely this aspect of Memmi’s work that Camus most admired, seeing in his writings (particularly in The Pillar of Salt) a “beautiful” depiction of the Sisyphean toil of searching for oneself. Memmi’s fiction, in other words, is valuable precisely because it grapples with, in Faulkner’s famous words, “the problems of the human heart in conflict with itself, which alone can make good writing because only that is worth writing about, worth the agony and the sweat.” These existential themes go far beyond the concerns of his sociological studies of colonialism; their significance has outlasted the now-obsolete circumstances of colonial societies.

    Most of the literature of postcolonial studies presents a Manichaean struggle of an oppressed “East” resisting a monolithic Western “imperialist oppression.” Of course such simplifications made sense for nationalist struggles for independence and may even have been necessary for the mobilization of a sense of collective agency to press for national independence; revolutions are made with slogans, not with scholarship. This was almost certainly the case in the decade leading up to independence in Tunisia and Algeria, in 1956 and 1962 respectively. Those struggles formed the backdrop to Memmi’s writings during what one French critic called his “age of revolt.” These are the writings to which the contemporary left still warms, when it reads him at all. But it was not long before Memmi’s thinking led him to conclusions that would be of no use to it. Memmi’s third-worldism did not survive the developments of the 1960s. His evolution from anti-colonialist icon to critic of left-wing anticolonial causes began in this period, when he moved away from the singular focus on colonialism and turned to the questions of Jewish identity and Jewish-Arab relations, including the question of Israel and nationalism.

    This evolution was triggered by three critical transformations. The first was the persecution of the Jewish community in Tunisia after the country achieved independence in 1956. The Jewish community in Tunisia numbered over a hundred thousand in 1948. While these Jews had experienced significant repression and impoverishment under Vichy France, their distance from Hitler’s death camps left most of them alive and undeported by the end of the war. (Memmi was himself imprisoned in a labor camp, from which he escaped.) Yet as much of the postwar world became more hospitable to Jewish people in the wake of the great catastrophe, Tunisia became less so. Following the Six-Day War, the country passed discriminatory laws against Jews and there were riots targeting them. By 1970, fewer than ten thousand Jews remained in Tunisia. The majority of Tunisian Jewry, once a great community, had emigrated to Israel or to France, where Memmi himself had settled by that time.

    Witnessing this anti-Semitic oppression disabused Memmi of any illusions regarding the redemptive virtues of formerly colonized peoples. It cured him of the romance of the Third World. He watched the radical movements of North Africa and the Middle East reject Enlightenment “Western” culture, and interpret its universalist teachings as nothing more than a mask for power; and this revolted him. His thinking began to take on a more sober and realistic cast. A colonized Arab might have ignored the implications of Tunisian anti-Semitism, but Memmi’s other otherness, his Jewishness, had begun to nurture in him a different kind of identity and consciousness. He now had a third vantage point. It led him to reject the reductive Manichaeanism of the radical postcolonial left that had celebrated his work in the 1950s.

    A second significant factor was Memmi’s unwillingness to countenance the left’s equation of anticolonialism with socialism. In 1958, only one year after the publication of The Colonizer and the Colonized, Memmi called out the fatal contradiction in the socialist left’s equation of anticolonialism with liberation, in a powerful essay called “The Colonial Problem and the Left.” He took the left to task for its embrace of some of the most repressive (and anti-Semitic) Middle Eastern Arab regimes, and for its active and military opposition to Israel, as expressions of a misguided political standpoint. Memmi identified the poisonous contradictions of the New Left a decade before they came to fruition in its response to the Arab-Israeli war of 1967, when, as Susie Linfield shows in her brilliant study of Zionism and the European Left, “much of the Western Left hailed some of the world’s most horrifically repressive — and racist — regimes as harbingers of justice and freedom” while reviling Zionism “as a thing apart.” (Her analysis now needs to be extended to the “progressive” response to the Hamas atrocities of last October.) Never Marxist or pro-communist, Memmi called for a genuinely progressive type of social democracy and a left-leaning liberal nationalism at a time when nationalism was a dirty word in European intellectual circles.

    Against the backdrop of the Arab-Israeli wars of 1967 and 1973, Memmi’s heterodox commitments and intellectual boldness enabled him to develop a startling diagnosis of what ailed the post-independence societies of the region, an interpretation at odds with the conventional narrative of the postcolonial left. While others were celebrating what they viewed as the thrilling new millenarian ideology of world revolution that would vindicate the “wretched of the earth,” Memmi identified some of the key weaknesses of these movements in psychological terms, as expressions of the unresolved neuroses of alienated and disoriented individuals. Memmi developed this critique most pointedly with regard to Fanon, whose method of analysis was also psychological, and he kept his distance from Western intellectuals such as Sartre and his circle, who were still hanging on to increasingly indefensible defenses of communist doctrine, such as the necessity for terror to build “socialism,” and the idea that the Communist Party of the Soviet Union was running the country and its economy for the benefit of the working class. Although Memmi drew from the Marxist left in his analysis of economic injustice, he recoiled at the fact that “the European Left remains impregnated with Stalin-like and Soviet Manichaeanism…I could not abide the collective discipline imposed on people’s thinking, the excessive consistency between thought and action, which inevitably gave rise to dogmatism and intolerance.”

    Memmi’s critique of Fanon applies to most versions of contemporary postcolonial criticism. Memmi knew Fanon in Tunis, when he worked as director of an institute of child psychology and Fanon was editor of a newspaper and a psychiatrist at a local hospital. Before he left Tunis, Memmi had been an admirer of Fanon’s, eager to be adopted by the North African Arab independence movement; but a decade later, in an extraordinary essay called “The Impossible Life of Frantz Fanon,” which appeared in 1971, Memmi coolly dissected the psychological roots of what he depicts as the “neuroses” that lay behind Fanon’s ideas. Nothing more clearly illustrates Memmi’s break from the postcolonial left’s political imaginary than this essay, in which the psychoanalyst Fanon was himself psychoanalyzed. Fanon, Memmi suggested, had succumbed to “the temptation of messianism” and was “gripped by a lyrical fever, by a Manichaeism that constantly confuses ethical demand and reality.” In Memmi’s view, Fanon had fallen for “an illusion”: the idealizing myth of “the solidarity of the oppressed.” He traced Fanon’s compulsive desire to become an Algerian revolutionary to his “disappointment at the impossibility of assimilating West Indians into French citizens,” remarking that “Fanon broke with France and the French with all the passion of which his fiery temperament was capable.” Fanon, remember, was West Indian. He was born in 1925 in Martinique and grew up there, and then studied medicine and psychiatry in France. He fought with the Free French during the war and often referred to himself as French. In 1953 he took up a medical post in Algeria and joined the National Liberation Front (FLN) in its struggle against French colonialism. He lived in Algeria for all of four years. He was expelled in 1957, and went to Tunis. He died in Bethesda, Maryland, where he was being treated for leukemia, in 1961.

    Algeria was perfectly suited to what Memmi described as Fanon’s neurosis because it was “a land where French was spoken but where one could hate France. Algeria was precisely the right substitute, in the negative and the positive sense, for Martinique which had let him down.” As Memmi wrote,

    The extraordinary Algerian phase of Frantz Fanon’s life has been accepted as a matter of course. Yet it is scarcely believable. A man who has never set foot in a country decides within a rather brief span of time that this people will be his people, this country his country until death, even though he knows neither its language nor its civilization and has no particular ties to it. He eventually dies for this cause and is buried in Algerian soil.

    In Memmi’s account, Fanon did not understand the culture that he entered; he wished to adopt Algeria as the vehicle of his messianic revolutionism, but he was neither Arab nor Muslim, identities intrinsic to the independence movement that he sought to join. Fanon’s identity adventure, Memmi argued, scanted not only the particularities of Algeria and Arab North Africa, but also those of Africa as a whole. It represented “a false universalism and abstract humanism based on neglecting all specific identity and all intervening social particularities.”

    Memmi’s repudiation of Fanonism in the early 1970s estranged him from the left in the West as well as in countries such as Iran. There, an unholy alliance of communist and Marxist parties with anti-Western Islamists was mesmerized by the paroxysmal politics championed by Fanon, which found its tragic denouement in the Iranian revolution in 1979. French intellectuals such as Foucault were similarly beguiled by the ayatollahs’ revolution. They saw in it a revolt against modernity, and the realization of the outlines of the new world and the new man that Fanon and his acolytes had envisioned. Memmi prophetically warned that the haters of the Enlightenment and the West, the secular and religious revolutionaries in love with power rather than justice, were on a path to doom. His antidote to their philosophical and political depredations was liberal nationalism, and so it still remains.

    The third historical development that modified Memmi’s worldview was the storm over the establishment of the new state of Israel, in particular its denunciation by the left as a “settler colonial” outpost of Western imperialism. Memmi’s experiences of anti-Jewish prejudice no doubt shaped his conviction that the establishment of the Jewish state should be seen instead as a national liberation struggle, called Zionism. The Jews, too, were entitled to such a struggle and to such a liberation. Many of these ideas were expressed in the essays collected in Jews and Arabs in 1974, where he forcefully expressed his impatience with the utopian pieties of leftist intellectuals. Jonathan Judaken, the editor of a fine collection of Memmi’s writings in English translation, describes Jews and Arabs as Memmi’s symbolic divorce from the Arab-Muslim world.

    Memmi’s responses to these three developments were increasingly explicit elaborations of his emerging heterodoxy. In Portrait of a Jew, for example, Memmi recognized that Jewish communal and religious life carries within itself a national dimension. A few years later he wrote that “a nation is the only adequate response to the misfortune of a people…Only a national solution can exorcize our shadowy figure. Only Israel can infuse us with life and restore our full dimensions.” (Later Michael Walzer came to much the same conclusion: “The link between people and land is a crucial feature of national identity… The theory of justice must allow for the territorial state, specifying the rights of its inhabitants and recognizing the collective right of admission and refusal.”) Memmi had become a Zionist, and he regarded Zionism as a progressive nationalism, as an inclusive political project. He rejected the idea that a national project must by definition be ethnonationalist or shaped only by tribal and religious criteria.

    Memmi identified Zionism as the necessary expression of the national liberation struggle of the Jewish people and nationalism as the inevitable form that political self-determination takes. Whereas Fanon denounced nationalism, according to proper Marxist doctrine, as a “petty bourgeois” deviation from proletarian internationalism “with its cortege of wars and ruins,” Memmi posited an enlightened nationalism as a natural and inevitable aspiration of all peoples seeking autonomy, safety, and self-determination. Memmi’s preferred form of Zionism was secular, tolerant, and social democratic. While critical of the anti-Jewish and anti-Israel currents in the Muslim Arab countries, Memmi also drew attention to what he saw as injustice inside Israel, such as the prejudice leveled against Mizrahi Jews; and he advocated a two-state solution to the plight of the Palestinians. His inflection of nationalism combined realism and decency, pragmatism with an acute ethical sensitivity to the persistent inequalities and depredations in both colonial and postcolonial circumstances. As it happens, such a standpoint is precisely what is now needed by all the countries of the region.

    This Tunisian Jewish anti-colonialist Zionist liberal has a lesson for the Muslim countries of our day. It is that feverish messianism and unresolved psychological anger at former colonial powers are a large part of the reason that the societies of the Middle East and North Africa remain unable to fulfill their potential. The obsessive hatred of Israel and the refusal to relinquish the futile opposition to the Israeli state are, as Memmi described them, a collective neurosis. Framing the plight of the severely disadvantaged Palestinians as the wretched of the earth in search of a messiah, or at least a Mandela, is also a hobbling collective neurosis; what they need is an Adenauer, who can accept an imperfect and unsatisfactory reality in the present to achieve a better future.

The attacks of September 11, 2001, were a calamity for the Muslim Middle East as well as for the United States and the West in general. In their aftermath, it was natural for observers concerned with the region to ask what went wrong. How had the once glorious civilization of the Islamic Middle East become a haven for medievalist zealots and a redoubt for terrorists who saw themselves in an apocalyptic struggle with modernity and the West? Over the course of two or three centuries, the once dominant and culturally thriving lands of the Islamic Middle East had become, in the words of Bernard Lewis, “poor, weak, and ignorant.” Historians and intellectuals have provided a long list of explanations to account for the decline of their civilization. Some blame the Mongols, or the Jews, or the British and the Europeans generally, with their ideology of racial superiority and their imperialism. (In recent decades the Americans and their “neoliberalism” have been added to the inventory of external villains.) Some critics have pointed accusingly to supposedly inauthentic versions of Islam — to the enemy within. Notwithstanding the weight of this discourse, Lewis also noted that “growing numbers of Middle Easterners are adopting a more self-critical approach,” which he hoped would lead them to “abandon grievance and victimhood, settle their differences, and join their talents, energies, and resources in a common creative endeavor [so] they can once again make the Middle East, in modern times as it was in antiquity and in the Middle Ages, a major center of civilization.” 

Lewis did not identify any of these self-critical thinkers, but one of the pioneers of this new approach was certainly Fouad Ajami, whose book The Arab Predicament, which appeared in 1981, remains a benchmark for a new enlightenment, for the dream of democratic and liberal reform in the Middle East. It was a book that Memmi must have admired. Memmi himself was one of these forward-looking and far-seeing Middle Eastern intellectuals, a unique and authoritative voice among the ranks of “native” critics. (He once referred to himself through a semi-autobiographical fictional character as “a native in a colonial country, a Jew in an anti-Semitic universe, an African in a world dominated by Europe.”) Still, it is far from clear that he harbored hopes for an imminent renaissance. His clear-eyed view of the region’s many failures made such hopes seem unrealistic. 

It was in the context of the turmoil and misfortune brought upon the Middle East by the reactions to the 9/11 attacks — specifically the American and Western occupation of Iraq and Afghanistan — that Memmi’s evolution as a genuinely independent critical thinker culminated in his book Decolonization and the Decolonized, in 2004. Returning to his earlier sociological methods, he witheringly surveyed the corruption and the tyranny of half a century of independent states. Cataloguing the myriad problems besetting the Middle East and Africa — calamitous violence, civil war, failed states, systemic corruption, repressive governments, massive human rights abuses, appallingly low levels of social freedoms, persecution of minorities and women, and low levels of educational attainment and cultural production — Memmi arrived at a scathing indictment: “Why… has the tree of national independence provided us only with stunted and shriveled crops?” Rejecting the obfuscating argot of the postcolonial left, he noted how often it attributed the problems of the third world “to a new ruse of global capitalism, or ‘neocolonialism,’ a term sufficiently vague to serve as a screen and a rationale.” Memmi urged postcolonial countries to acknowledge their failures to achieve democracy and development as largely self-inflicted wounds. He called on them to abandon the cheap excuse of blaming external forces. Memmi offered his criticisms not to disparage but to counsel; he eschewed the haughtiness of conservative historians who suggested that the hopes for decolonization were misguided from the start. He believed that the postcolonial countries could be independent, free, and fair places to live, and he called on them to make themselves so.

Memmi was right to sound the alarm, to urge the region’s peoples to get over their fixation with the past. Indeed, he was only echoing the words of the African Development Bank’s report in 2003, which declared that “more than four decades of independence…should have been enough time to sort out the colonial legacies and move forward.” Economic historians have persuasively demonstrated that in many cases the impact of past colonial experiences on current political and economic dynamics has diminished to almost negligible levels. The followers of Fanon and of contemporary postcolonial perspectives persistently obscure this. They are the true reactionaries. 

Memmi’s work provides a powerful intellectual alternative, an antidote even, that could have inoculated generations against the futile and self-destructive utopias chased by revolutionaries, from Iran in 1979 to certain theories expounded in the Western academy today. Consider the quixotic effort by a prominent representative of the postcolonial school of criticism to retrieve the anti-colonial legacy for contemporary so-called anti-globalization struggles. The same year in which Memmi’s negative assessment of decolonized societies appeared in print, the Indian-British postcolonial theorist Homi Bhabha asserted in a preface to a reprint of Fanon’s The Wretched of the Earth that institutions such as the International Monetary Fund and the World Bank have “the feel of the colonial ruler,” since they, allegedly, create “dualistic,” not developed, economies. He claimed that this “global duality should be put in the historical context of Fanon’s founding insight into the ‘geographical configuration of colonial governance,’ namely the idea [of] ‘a world divided in two…inhabited by different species.’” We are urged by Bhabha to adopt “the critical language of duality — whether colonial or global” because “a progressive postcolonial cast of mind” naturally spawns a “spatial imagination [of] geopolitical thinking” that incorporates this language: “margin and metropole, center and periphery, the global and the local, the nation and the world.” (This dichotomous vocabulary recently appeared in a comment by Rashid Khalidi on the Hamas-Israel war: “If you believe this theoretical construct — the colony and the metropole — then what activists do here in the metropole counts.”) Since he cannot let go of the Manichaeanism of revolutionary politics, without which the Marxist and postcolonial schema of “colonized and colonizer” collapses, Bhabha is forced to shoehorn what is in reality a fractured, multipolar world into a reductive antinomy, so that he can combat it within the only framework available to him.

    Bhabha applauds Fanon’s grandiose claim that “the Third World must start over a new history of Man.” Bhabha and Fanon (and Cornel West) seem to suggest that anti-globalization struggles are the route to international proletarian solidarity and revolution, and that the colonial/native dichotomy offers a workable foundation for such an eschaton. But as we have seen, Memmi knew better. He never suffered from a romanticization of the oppressed, even as he denounced their oppression. Moreover, he knew that the world cannot be neatly divided into the oppressed and the oppressors. The oppressed, after independence and even before, have a way of oppressing each other. He rejected terms such as “neo-colonialism” for obscuring the role of the elites of the independent Third World states in perpetuating injustice. He refused to accept that the culpability of these elites should be seen as merely another result of victimization by larger external forces. 

Memmi’s critique also applies to “decolonial” studies, the latest version of the Marxist-inspired anti-Western and anti-capitalist ideology making the rounds of Western academia (associated with writers such as Walter Mignolo and Aníbal Quijano). It is a fatuous and often bizarre messianic theory, premised on a stupendously simplified picture of what is in fact a maddeningly complicated and tragically fragmented world. Anti-essentializers, heal yourselves! (The guiding intellectual light for some of these decolonial theorists is not Martin Luther King Jr., or Nelson Mandela, or Mahatma Gandhi, but — really — Subcomandante Marcos of the Zapatista insurgency.) The crusade to make everything postcolonial has become so pervasive that it has finally elicited forceful responses. In his heretical book Against Decolonisation: Taking African Agency Seriously, for example, the Nigerian philosopher Olúfẹ́mi Táíwò (who has appeared in these pages) decries the “proliferation of the decolonization trope” because of its “pernicious influence and consequences.” The idea of decolonization, he asserts, “has lost its way and is seriously harming scholarship in and on Africa.” Echoing many of Memmi’s concerns, Táíwò rejects the “absolutization of colonialism, the accompanying repudiation of [Enlightenment] universalism and the paradox that a Manichaean worldview generates.” 

The more worrying consequences of the postcolonialist dogmas are not intellectual but material and political, directly affecting living standards and livelihoods. The main problem of the decolonization worldview is that it is of no use to the people it purports to help. As Helen Pluckrose and James A. Lindsay observe in their survey of the postcolonial and the decolonial, this

    work is of very little practical relevance to people living in previously colonized countries, who are trying to deal with the political and economic aftermath. There is little reason to believe that previously colonized people have any use for a postcolonial Theory or decoloniality that argues that math is a tool of Western imperialism, that sees alphabetical literacy as colonial technology and postcolonial appropriation…

Worse, postcolonial theories can actually harm people in previously colonized countries, who are some of the poorest people in the world. When these ideas are applied to the issue of climate change, for instance, they generate a simplistic and misleading binary, according to which we must choose between an evil white hyper-developed plundering West and an idyllic view of indigenous peoples’ beautiful relationship with nature, resurrecting long-discredited notions of the noble savage and calling them progressive. These ideas misrepresent the realities of climate change and lead to dubious climate policy recommendations that would likely impose enormous economic costs on those who need economic development the most. 

Ultimately, of course, the effects of this fallacious worldview are most pernicious in the world of politics. They include the radical regimes in Iran, Nicaragua, and Afghanistan, and movements such as Hezbollah, which represent in practice what many of these theories espouse. In the Islamic Republic of Iran, for instance, perspectives paralleling Fanon’s are still a significant part of the ideological edifice of the anti-liberal, anti-Western dictatorship that has been in power for over four decades; they are represented by influential works such as Jalal al-e Ahmad’s Westoxification, an attack on the West that deserves to be better known in the West, and are routinely expressed by the official ideologists of the regime. It is no longer just the rhetoric of tenured radicals, unless you count tyranny as a kind of tenure.

Given Memmi’s perspicacity in uncovering the blind spots of the postcolonial left in Europe as it related to the question of the Middle East, it is unfortunate that Memmi decided to weigh in on the American struggle for black civil rights. He was insufficiently aware or appreciative of many complex facts of the American situation with regard to race. Indeed, he never visited the United States. In 1965 he dedicated the English edition of The Colonizer and the Colonized to the “American Negro, also colonized” because he perceived that community to be subject to the same type of oppression that he described in the book. The timing was not exactly propitious: it coincided with the passage of the historic civil rights laws that finally achieved revolutionary progress, a century after the American Civil War had failed to remove many institutions of racial oppression and official apartheid directed against formerly enslaved black Americans. To be sure, laws by themselves could not be expected to remedy all the evils overnight. Yet it is precisely the essentializing and binary framework of his early period that he stubbornly projected, erroneously, onto the American scene, thereby distorting its complexities and accomplishments. This is starkly illustrated by his failure to discern the different strands of the black civil rights movement:

    King, Baldwin, and Malcolm X do not represent three different historical solutions to the black problem, three possible choices for the Americans. [ . . . ] King, Baldwin and Malcolm X are signposts along the same inexorable road of revolt.

    This false generalization leads Memmi to claim that “King is the victim of oppression who persists in wanting to resemble his oppressor. The oppressor will always be his model.” This, of course, is spectacularly wrong. 

Binary assumptions never illuminate, even in Memmi. His error in this case resulted from the application of the abstract dichotomies and assumptions of postcolonial theory to a racist society that had little in common with European colonialism. Unfortunately we see examples of this today, as in the Manichaean binaries put forward by writers such as Ta-Nehisi Coates, the Black Lives Matter theoreticians, and the 1619 Project, whose assumptions and implications have been cogently called into question in these pages by Daryl Michael Scott, among others. Memmi’s later voice of moderation would be a salutary contribution to today’s debates about the legacy of colonialism. 

There is a clear and strong message that emerges from an appreciation of Memmi’s evolution as a writer and a thinker. It is that the concept of “coloniality” and its cognates retain little relevance for understanding the challenges facing countries in the developing world and the relations between rich and poor countries more broadly. “Haven’t I got better things to do on this earth,” Fanon wrote in the remarkable conclusion to Black Skin, White Masks, “than avenge the blacks of the seventeenth century?” Coloniality as a topic should be confined as much as possible to the history departments. 

    Whereas firebrands such as Fanon in his revolutionary mode called for cathartic violence as the road to redemption, Memmi was a pragmatic liberal nationalist social democrat who had the courage to say to his fellow third world citizens, especially in the Muslim countries of the Middle East and North Africa, that they had for the most part failed to achieve the objectives of independence and that they had for the most part no one to blame but themselves. This bracing assessment represents a necessary corrective to the simplifying radicalism of the postcolonial schools of thought that trace most if not all of the problems besetting developing countries to external forces, be they Western “imperialism,” global capitalism, “systemic racism,” “neoliberalism,” “globalization,” or any other single explanation for everything. (The Uyghurs in China, the Christians and the political prisoners in Iran, and the women under the rule of the Taliban would all be surprised to learn that they suffer from white supremacist hegemony.) 

Developing countries, especially in the Islamic Middle East, which was of most concern to Memmi, can ill afford the misguided counsels of a worldview which posits that “colonialism” has not in fact ended and that therefore, to quote a typical statement, “every analysis of the present is impossible to understand except in relation to the history of imperialism and colonial rule.” Instead, what former colonies need most desperately is to break free of this debilitating straitjacket and move beyond the obsessive preoccupation with the colonial past to the urgent tasks at hand. Sartre was wrong in his critique of Memmi: colonialism was not a “system,” an essentially permanent, almost metaphysical condition of human existence. Rather, as Memmi held, it was a “situation,” one that has now passed. Today there exist new inequities, new hierarchies, and new cruelties, which will not be ameliorated by stale formulas or a morbid lingering over the unhappy centuries gone by. Postcolonial societies could do worse than repeat Memmi’s own evolution. The struggle for justice requires that we live in the present.

    Orangerie

Sometimes I think I must have ground to a halt
on this lot for the sake of the orange tree alone.
I might have preferred the olive — rolled
on a bias — but it requires labor, refinement, salt.
Oranges are easy: sweetness sewn
      inside a roughly perfect handhold.

Fruit in different stages of production muscles
the bough into a bow, the bow into a lyre,
plucked string lengths sounding a golden mean.
They long to dispense their light into bushels,
these overburdened arms; as they grow higher,
      they find my roof, on which they lean,

and then the spheres go reeling like billiards
down gutters angled like a kinetics sculpture-
cum-candy dispenser. Think how pretty!
Think if you were a house, contemplating yards,
wouldn’t you choose one with a culture
      of citrus, the least complicated beauty,

to run aground on? That is, if houses,
like arks, sailed from firmament to foundation.
This tree is a juggler drawing out his long game.
Inspired, I swap my bow for sternness.
It is serious, this groundless elation.
      C’est mon bijou, mon or, mon âme, my name.