On Seeing Old Skis in the Garage

    So many slopes they touched, and once
    leaned outside while I tromped into the parlor
    of an alpine monastery, clattering boots, my bluster
    welcomed to dine silently with the brothers
    who had also vowed to get to the powder
    of what is daily fused with life: to glide, to carve,
    to schuss and float with what the spirit clamors for —
    even though my body’s sluggish, slow, it remembers
    mountains, glory in the snowfell
    hill, its bluebell kindred skills — a rough jouissance
    is what I brought, in all my choices good
    and not so good, the might-have-beens
    and new offerings from the range
    I’m entering, something milder — I’d still strive
    for the milk of kindness, hold out my simmering
    so the fat might rise like broken proteins
    to the top, to be skimmed off.

    Meditation with a Gash in the Natural Order

    I like parking at the big box store, watching people come out and go in.
    Swaying winter grasses in the median, sky that brigand Saturday blue.
    I’m waiting to pick up my son from his guitar lesson. Already masterful,
    he doesn’t quit. Even Jimi Hendrix continued with a vocal coach,
    up to the very day he died. I have so much useless knowledge.
    Like what the monk said about meditation: if while sitting on your cushion
    you have the best idea you’ve ever had — stunning, complete —
    you mustn’t get up to capture it. Also, a lot of what we call miraculous
    is just the way things work: a monarch always emerges from its chrysalis
    as a radically different worm. A miracle is living flesh restored to reeking
    corpse. It’s the man sitting calmly, shaved and dressed, after he’d raved
    at city gates for more years than anyone who stepped around him
    could remember. The monk said: when wondering what to do in life,
    do what will cost you the most. Commit to watching the gorgeous
    bubble evanesce. That’s the only way this works.

    An Occasion

    Our bones will touch in the water
    one day after the supernova,
    or maybe it’ll be an Electromagnetic Pulse
    we bought the old Volvo to outsmart— 

    we escaped the need for computers
    to govern coffee makers,
    and made our own kombucha—
    but one by one the streaked coyotes,

    wimpled foxes picked off
    the rooster and our stupid hens. A cascade
    of tiny choices to occasion
    an implosion.                         

    The mushroom log left wildly fruiting
    in the hand-hewn springhouse afternoon.

    The American Strategic Imagination: An Agenda

    Depending on how history is written, Russia’s invasion of Ukraine may be looked back on as the beginning of a third world war. President Zelensky’s government, along with its advocates in allied governments, has been making this argument since the war’s inception. They frame Ukraine as one battlefield in a larger global struggle, one that pits a growing axis of authoritarian nations against the Western-oriented liberal democracies that have dominated the post-Cold War world order. In this version of history, the war in Ukraine is not the Ukrainians’ war alone but the West’s war, too, an existential struggle for all freedom-loving peoples. There is plenty of evidence that lends credence to this argument. Had Putin’s initial invasion gone according to plan, a year later we would be talking about a similar invasion of Taiwan — as we are already envisioning — and then the question of whether we were in the midst of a third world war would hardly merit debate.

    Conditions remain ripe for an upheaval of the global order of the type induced by a world war. These upheavals, in the modern era, have occurred approximately every century. The First and Second World Wars should more properly be categorized as a single conflict, with Versailles more of a ceasefire than a peace; and the twenty years of the Napoleonic Wars that birthed a century of continental stability certainly qualify as a world war. One indicator that we might already be in a world war — or that one is imminent — is that the generation that can remember the last one has died. Without memories to restrain us, we become reliant on our imaginations, not only to prevent war but, if one begins, to help us navigate its exigencies, and to win.

    Whether the war in Ukraine is part of a third world war, in which liberal democracies must beat back a rising tide of authoritarianism, or whether it is an isolated territorial-philosophical conflict is not a question of semantics. Defining a war’s scope is essential for any war planning, and for any victory. The role of imagination in the making of strategy has too often been under-appreciated. The conclusions that planners and officials will draw from analysis and data will always be circumscribed by the limits of what they can imagine about the future, by their sense of historical possibilities. If it is true, as the old adage has it, that generals always fight the last war, that is in part because they have not trained their imaginations to picture the next one. Inspired by its disgust with the Iraq war, the Obama administration drew the conclusion, and enshrined it in Pentagon doctrine, that land wars are a thing of the past. Tell that to the tank officers in Ukraine.

    Innovation — of concepts and weapons, of everything — always involves imagination. And the imagination of future warfare is essential also for another reason: it forces us to conceive of the war from our adversary’s point of view as well as from our own. The strategic imagination is a significant deterrent to the other side’s greatest advantage, which is strategic surprise. While the strategic imagination can certainly run wild — remember General Buck Turgidson — the greater danger is that it not run at all.

    In the past fifty years, America’s two great military defeats — Vietnam and Afghanistan — were the result of misunderstanding the scope of the wars that we were fighting. In the former, American policymakers believed we were engaged in, as President Kennedy put it in his inaugural address, “a long twilight struggle” against transnational communism, when in fact the Vietnamese were fighting a war of national liberation. In Afghanistan, we believed we were fighting “a different kind of war,” as President Bush said to Congress ten days after September 11, a war against transnational terrorism. Yet like the Vietnamese, the Taliban were also fighting a war of national liberation with no objective greater than expelling foreigners from their homeland so that they could impose their theocracy upon the population. Their sympathies with al-Qaeda were nauseating, but not their reason for being. In both instances, a failure to imagine our adversaries’ psyches and to define the true nature of their objectives, and of the very wars we were fighting, led us to disaster.

    Although it would be easy to discount the Kremlin’s absurd narrative of the war in Ukraine — one in which Zelensky is a Nazi, the West is the aggressor, and there is a genocide against ethnic Russians — it would be a mistake to ignore this narrative entirely, no matter how ridiculous, both when formulating a strategy to defeat Russia and when creating an agenda for our own strategic imagination. The first item on this agenda must be a robust understanding of the conflict from our adversary’s point of view. Data alone may not be able to depict our enemy. The specifics of such an understanding will be fluid, it will involve imaginative interpretation, and it will consistently clash with our own narrative. 

    A war is like a coin. It has two sides, and what we call the casus belli is really a debate as to what side of the coin we are on: whether a revolution is in fact a civil war; whether an invasion is in fact a liberation. Irreconcilable political narratives — or imaginaries, to use the academically popular term — are not a contradiction. The war itself becomes the very process through which these narratives will be resolved. But any strategy that does not consider an adversary’s counter-narrative — no matter how odious that narrative might be — is destined to fail.

    Although war is waged in the consciousness of peoples and nations, it is also a craft that requires a tradesman’s skill. Both the Napoleonic Wars and the world wars of the twentieth century resulted in societal and technological advances that few could have predicted. Perhaps the greatest innovation of the Napoleonic Age was the levée en masse, in which Napoleon crafted a new citizen army from a body politic mobilized for military service. His new army, which few could have imagined before the popular revolutions of the eighteenth century, supplanted the long-serving professional and mercenary armies that had dominated the continent for centuries. The primacy on the battlefield of the citizen-soldier, which cemented national identities, would have implications well into the twentieth century and even into our own, as both Ukraine and Russia adopt strategies to keep their societies mobilized for war.

    The issue of how to keep a nation — and allied nations — mobilized has direct bearing on what we now call influence operations, a discipline which must sit atop any strategic agenda. Influence operations — sometimes known as psychological warfare or disinformation — are as old as war itself. Clausewitz’s renowned dictum that “war is a continuation of politics by other means” is an articulation of the first principles of influence operations. Politics do not stop when the bullets start. Nations at war must target popular opinion in their adversary’s country as well as in their own to achieve victory. This does not mean that a government should lie to its own people.

    Authoritarian nations possess an intrinsic advantage over open societies in the control of information, but this does not mean that they will ultimately succeed in swaying popular opinion, particularly over the course of a long war. Eventually the truth has an almost miraculous way of getting under the door. Speaking about information warfare during the Cold War, Adlai Stevenson once described our preferred policy toward the communist powers this way: “When you stop lying about me, I’ll stop telling the truth about you.” But the truth must be aggressively deployed — for our adversaries, inconveniently and damagingly deployed — in influence operations. Truth may be the first casualty of war, but it is also one of its most potent weapons. Unfortunately, current American attitudes toward influence operations often seem analogous to Secretary of State Henry L. Stimson’s attitude a century ago toward the then-evolving discipline of espionage: “Gentlemen don’t read each other’s mail.”

    Today we can ill afford misplaced decorum when crafting policies around influence operations. One of the greatest restrictions placed on American influence operations is a fear of blowback, in which propaganda or other forms of influence or disinformation disseminated beyond our borders filter back into the United States. Although different U.S. government agencies, such as the Department of Defense, the Department of State, and our intelligence services, have varied risk tolerances for blowback, when operations targeting foreign audiences also reach American audiences, they violate the programmatic statutes that govern the scope of information operations.

    Government protections of the American population against its own propaganda extend back to the aftermath of the Second World War. The Smith-Mundt Act of 1948, which regulated State Department broadcasts such as Voice of America, was one of the first pieces of legislation to restrict American propaganda efforts, owing to concerns over empowering government agencies to disseminate ideological materials to the American people. In 1975, after the Church Committee Report revealed Operation Mockingbird, a large-scale, multi-decade CIA program to manipulate domestic American media, further restrictions followed. These included Executive Order 11905, signed by President Ford, which brought the intelligence agencies to heel on a variety of covert programs, including assassination and covert influence; and Executive Order 12333, signed by President Reagan in 1981, which outlines the duties of America’s intelligence agencies, among them those responsible for propaganda efforts in which the hand of the United States government must remain hidden. Such covert efforts are where the threat of blowback is greatest. It is also where our risk tolerances are lowest, and where we are being overtaken by adversaries who harbor little to no concern for blowback against their own populations. The question still needs to be asked whether the abolition of covertness for various purposes is strategically wise. It is undeniable that our national obsession with the evils of covertness has made us look away from a more important aspect of our influence operations, which is their robustness.

    In the past decade, both Russia and China have used influence operations and propaganda against the United States to great effect. In 2014, the Russians deployed soldiers in unmarked uniforms during the invasion of Crimea and the Donbas. Disinformation, like cancer, requires only the presence of a single malignant cell to metastasize, and Russia’s “little green men” in their unmarked uniforms allowed the Kremlin to propagate a narrative that their soldiers were homegrown Ukrainian separatists. In the United States we have seen the potency of Russian disinformation firsthand, in our own elections. (“Cut it out,” Obama once scolded Putin about Russian hacking. It didn’t do the trick.) Russian interference in the campaign of 2016 proved catalytic, yielding an exponential result that played out over years, creating a firestorm in American politics with few parallels. As is often the case with well-placed disinformation, the targeted society will do your work for you if you let it. 

    The Chinese Communist Party understands this. For the past two decades, the CCP has effectively used the American profit motive as a tool of American self-censorship. During the Cold War, when the Soviet Union posed the greatest threat to global freedom, American culture articulated that threat, particularly in Hollywood. In the past twenty years, during China’s rise, American cultural institutions have remained largely silent. Unlike with the Soviets, American and Chinese financial interests are intertwined. The Chinese have used this codependence as a tool to silence American critiques. In 2022, as American producers agonized over whether to include a Taiwanese flag on Tom Cruise’s leather jacket in Top Gun: Maverick for fear of offending CCP censors and eroding the film’s Chinese box office, Chinese producers had their biggest box-office hit of all time, The Battle at Lake Changjin, which glorifies the Chinese slaughter of Americans during the Battle of the Chosin Reservoir. The previous box-office champion had been the second installment of the Wolf Warrior franchise, in 2017, in which a Chinese former soldier battles the arch-villain, a bloodthirsty former U.S. Navy SEAL named “Big Daddy.”

    Chinese influence operations extend far beyond Hollywood. Their sway over global governance bodies like the World Health Organization has quashed any consensus as to how the Covid-19 pandemic began. A parade of international administrators with their timid “investigations” and nonsensical public statements has proven quite willing to carry China’s water on this issue. All that was needed was to sow some doubt. If Russia’s brand of disinformation — little green men in Ukraine, election tampering, claims of American biological labs in Ukraine — seems more absurdist than the means employed by the CCP, both are plenty effective in obfuscating the truth.

    The sheer volume of propaganda, manipulation, and disinformation dispensed by authoritarians would seem impossible to counter. An American strategy that would seek to reform government agencies so that they could dispense the same type of propaganda as their authoritarian counterparts is certain to fail. An open society — even if flawed — cannot compete on the field of lies with the authoritarians. The only propaganda strategy that we can consider is one that aggressively propagates the truth. There is nothing that more weakens the hold of dictators and autocrats on their populations than the truth and its ruthless strategic proliferation.

    If one holds the cynical view that the truth is subjective, a matter of competing narratives, then this and all truth-based strategies are doomed to failure as authoritarians will always outmatch us. (But if the relativists and perspectivists are right, on the other hand, then we should be less inhibited in our propaganda!) The Biden administration’s handling of Russia’s troop buildup in the days leading up to the invasion of Ukraine presents a refreshing and encouraging example of how information operations based on the truth can outmaneuver those based on lies.

    As the Kremlin massed its divisions, it continued to insist that these troop movements were part of a military exercise and that war remained avoidable. At the same time, the Biden administration had intelligence that Russia was in the process of coordinating a false-flag operation — a type of psychological ploy in which a military attacks itself or others under a flag that is not its own — to instigate a war. At a press briefing three weeks before the invasion, John Kirby, the Pentagon spokesman, preempted the Kremlin’s plan: “We believe that Russia would produce a very graphic propaganda video, which would include corpses and actors that would be depicting mourners and images of destroyed locations, as well as military equipment at the hands of Ukraine or the West, even to the point where some of this equipment would be made to look like it was Western-supplied.”

    The Biden administration adopted a strategy of flooding news outlets with sensitive intelligence, on everything from the false-flag operations to the movements of Russia’s frontline trauma hospitals and command centers, all of which made clear that, despite Russia’s claims to the contrary, its intention was to invade. The Biden administration adopted this strategy over the objections of Zelensky who, at the time, remained concerned about inciting panic inside his own country. Although this strategy did not prevent a Russian invasion, it did limit the Kremlin’s ability to further claims that Ukraine was the aggressor. And just as important, it prepared us mentally, and our allies too, for what was coming. In this instance, the Biden administration skillfully shaped the American strategic imagination. The international condemnation and economic isolation that followed Russia’s invasion are due in no small measure to the Biden administration’s strategy of preempting Russian disinformation.

    Although a strategy of preemption proved effective in Ukraine, no similar strategy was deployed against the Chinese government as it restricted and manipulated studies around the pandemic’s origins. Three years later, little consensus exists as to the virus’s origins, though theories abound. The issue itself has become politicized, with views so entrenched that it seems no amount of evidence can now sway beliefs; the creation of irreconcilable narratives is, of course, the purpose of a disinformation campaign. In the pandemic we may have lived through a dress rehearsal of the future of biological warfare, a discipline which must sit firmly atop any agenda for our strategic imagination.

    In recent years, chemical and biological warfare have taken the form of gas attacks visited by Bashar al-Assad on his population in Syria and assassinations ordered by Putin against his political enemies. These tactics — in which individuals are poisoned, and armies and civilian populations are shelled or rocketed with gruesome agents — have evolved little since first appearing a century ago. They are designed to induce fear and are typically limited in scope to the area in which they are deployed. A pandemic, if ever weaponized, would usher in a different type of warfare, and we would be naïve — we would be catastrophically unimaginative — to believe our adversaries are not imagining ways to do exactly this.

    In Ukraine, we have seen the critical importance of economic sanctions in modern war. In the pandemic, we saw how a virus brought the global economy to the brink. The grisly nature of traditional biological weapons will likely limit their use in the future; politically, they cost more than they deliver. But a biological catastrophe — of the kind that we have already lived through — would surely be a feature of any future world war, not simply due to the human toll but also the economic toll. 

    Imagine that the United States, along with its allies, were fighting a peer competitor, an authoritarian nation such as China with the capacity to exercise significant control over its population. We would, of course, do our best to exert economic pressure on them, and they would do the same to us. As in wars past, our national means of production would prove decisive to the war effort. Now imagine that this authoritarian nation possessed a virus like the coronavirus and that it had already developed its own vaccines. As people grew sick, that country would be able to implement a vaccination campaign, isolating its citizens from the deadly effects of the virus. Without the vaccine in our possession, America’s war effort would be crippled. Our adversary would, of course, claim no knowledge of how this new virus spread. It would deny the virus’s origins and would feel no moral obligation to share vaccine technology with a nation with which it was at war.

    The costs would be profound. Our most recent pandemic saw the largest drop in American manufacturing in seventy-four years. Aircraft carriers such as the USS Theodore Roosevelt were forced into port due to outbreaks of the virus among the crew. A healthy army facing a sick one possesses an obvious advantage, and the same advantage extends to the economies supporting those armies. The United States would eventually develop its own vaccine, but the disruption would prove significant and could provide a peer-level adversary with a decisive edge.

    After living through the Covid-19 pandemic, we must change the way we think about biological warfare. Our thinking should still include the acute nerve agents and chemical weapons that we have seen in the past, but it must also incorporate man-made pandemics and account for how such events could be deployed as tools of economic warfare when laundered through the very same types of disinformation campaigns that have, thus far, obscured any global consensus and accountability regarding the origins of Covid-19. The moral case to abolish chemical and biological weapons is obvious. But, as with nuclear weapons, there is the unpleasant but effective matter of deterrence. Even if weapons of mass destruction do not deter conventional or cyber weapons, they have so far deterred other weapons of mass destruction. A balance of terror, that Cold War doctrine and Cold War reality, is existentially hideous but strategically wise.

    If discussions of a third world war echo another era, it is because this war long existed as part of the Cold War’s vernacular, a time when students cowered under desks for bomb drills, when families constructed fallout shelters in the backyard, and when nuclear winter was the best-known definition of climate change. To imagine a third world war means to update our conception of it, but also to redefine its terms. And one of these terms, deterrence, has unfortunately fallen out of use. During the Cold War, deterrence (particularly of the nuclear kind) evolved into an entire discipline, with strategists on both sides of the Iron Curtain relying on game theory to ensure that humanity did not annihilate itself with its newly invented nuclear weapons. The result of this deliberate approach to deterrence was decades of relative peace, and certainly nuclear peace, between the two superpowers. Also, we were lucky.

    We still possess that destructive capability though deterrence, as a tool, is discussed far less. In the decades immediately after the Cold War, we lived in a unipolar world, in which deterrent strategy lacked its previous relevance because the United States enjoyed a significant power imbalance over any would-be adversary. In recent years, the unipolar post-Cold War world has yielded to an increasingly Hobbesian multipolar world. And in a multipolar world strategies of deterrence become increasingly complex, with so many competing actors involved that it is virtually impossible to arrive at elegant deterrent solutions such as “mutually assured destruction,” which prevented a nuclear war between the Soviet Union and the United States for decades. 

    Yet it is not simply the proliferation of actors that makes deterrence strategies complex, but the proliferation of threats. Unlike in past decades, nuclear weapons are not the only means by which societies can annihilate one another. There are the biologicals and the chemicals. And there is another new dimension of warfare. The end of the Cold War coincided with the creation of the Internet. 

    As deterrent strategies became a thing of the past, every modern nation — indeed, every nuclear-armed nation — was undergoing a decades-long project of taking its infrastructure online. Thirty years later, we have awoken to a multipolar world in which the United States faces peer-level competitors that not only possess society-ravaging nuclear arsenals, but also cyber capabilities that could wipe out our infrastructure. The chaos that follows a significant infrastructure strike would lead to civilian deaths as planes crash, hospitals lose power, and cities descend into darkness. The economic and social havoc would be immeasurable. And even though a cyber-attack can shut down critical infrastructure with the flip of a switch, that infrastructure cannot be brought back online with a second flip. The damage is often permanent.

    For this reason, our strategic imagination requires a thorough education in the new vulnerabilities, the new possibilities of destruction. One of the challenges of creating deterrent strategies around cyber warfare is that it is difficult to envisage such destructive capability. Most of this work gets done in films and science fiction. At the end of the Second World War, at Hiroshima and Nagasaki, the world witnessed the destructive capabilities of nuclear weapons. We never needed to imagine a mushroom cloud over a city; we had witnessed it, and it required scant imagination to know that, no matter one’s nationality, there would be few winners if the great powers of the world chose to unleash their nuclear arsenals.

    Cyber is different. The world has yet to witness the full destructive scope of a strategic cyber-attack, and because the threat is largely intellectualized, and not yet experienced, the likelihood of a misstep is greater. One of those potential missteps, for example, involves the permeability between cyber war and nuclear war. Whereas a cyber-attack may justify a cyber-counterattack, are there circumstances of crisis in which it would justify an escalation — a breaking of what nuclear strategists used to call the firewall? A nation crippled by a cyber-attack could very well respond with a nuclear attack, particularly if an adversary has compromised its ability to respond in kind. Even if a cyber-attack is designed with a limited scope, it is often difficult to control the spread of the attack, resulting in collateral damage that could lead to an unanticipated escalation. This was the case with Stuxnet, the American- and Israeli-designed malware that targeted Iran’s Natanz uranium enrichment facility in 2010. Although Stuxnet proved successful in crippling Iranian nuclear infrastructure, the Americans and the Israelis failed to contain the spread of the malware. It has since attacked industrial capability across Iran and in other Middle Eastern countries.

    Over the past thirty years, American and Russian conceptions of the use of strategic weapons have evolved in opposite directions. While the United States created security strategies that minimized the role of strategic weapons in future conflicts, Russia pursued new concepts and capabilities to expand their roles. Whereas it is unlikely that Putin would resort to a tactical nuclear weapon, his nuclear saber-rattling is not merely rhetoric. It is based upon the current Russian doctrine of “escalate to de-escalate.” The latest version of this doctrine, titled Basic Principles of State Policy of the Russian Federation on Nuclear Deterrence, was released in June 2020. It declares that Russia “reserves the right to use nuclear weapons to respond to all weapons of mass destruction attacks.” A strategic cyber-attack would certainly qualify as a “mass destruction attack,” but the doctrine remains vague as to what else might fall into this category. It also classifies “aggression against the Russian Federation with the use of conventional weapons when the very existence of the state is in jeopardy” as warranting a nuclear response, but this seems a subjective standard, particularly with an authoritarian like Putin who abides by the dictum l’état, c’est moi. When it comes to effective strategies of deterrence, ambiguity is sometimes an advantage but sometimes not. A psychological truism teaches that all ambiguous behavior is interpreted negatively, and Russia’s current strategic posture places a premium on unpredictability, which makes deterrence all the more challenging.

    Since invading Ukraine, Russia has been tempting a mass destruction event. The shelling of Europe’s largest nuclear power plant at Zaporizhzhia was particularly reckless, though it would be a mistake to believe that Russia — and its authoritarian allies like China — are behaving irrationally. Russia has its reasons and its worldview; some of its thinking is characteristic of great power raison d’état, some of it is peculiar to Russia and its view of its history, and is less rational. If today we are at the outset of a third world war, this is because our adversaries are fighting to upset and then redefine the global order. There is no clearer way to upset that order than by mimicking the act of creative destruction that created it: the use of a weapon of mass destruction.

    This could be a nuclear attack, a cyber-attack, or even a biological attack akin to another pandemic. If such an attack occurs, it will be accompanied by a narrative propagated by the authoritarians who launched it. The attack itself will matter, but what will also matter are the myriad taboos that it will break, jolting us out of one strategic imaginary and into another. The immediate destruction wrought by a low-grade tactical nuclear weapon would be of less relevance than the raw fact that it would be the first nuclear weapon used since the Second World War. This would upset the global order and so would be a logical step for those authoritarian nations whose goal is to destroy the long-enjoyed global dominance of liberal democracies.

    Ample opportunities exist to avoid these grim scenarios. By understanding the intentions of our adversaries, we prepare ourselves to counter their strategic agenda with our own. Our agenda must account for challenges in information operations, biological warfare, and cyber warfare, but it must never lose sight of the truism that armies win wars, not weapons alone.

    In Ukraine, an authoritarian Russia along with its allies hopes to prove that the sun has set on liberal democracy. Thus far, what has stopped them is a fully mobilized society and a highly motivated army. The fighting has been a hybrid of low-tech (infantry, artillery, armor) and high-tech (drones, precision missiles, artificial intelligence). The heart of any battle, according to Clausewitz, is “slaughter.” A people’s will to endure that slaughter has always and will always prove a determinative factor in war.

    In the world wars fought since the Enlightenment, authoritarian armies have performed poorly. I do not mean to downplay the military achievements of the authoritarians: Napoleon certainly knew how to fight a battle, and the Germans invented the decentralized, mission-style tactics that the Ukrainians have used to outmaneuver the Russians. But war is a human endeavor, a contest of wills. A society’s will to remain free will always prove stronger than the will of those who would compel it to obey. As we imagine the future, we must not lose sight of this.

    Come Dressed as the Sick Soul of Late Capitalism

    [Innocent wayfarers, beware. This essay contains what are vulgarly known in the trade as “spoilers,” so if for some unfathomable reason you’ve yet to view Succession, Glass Onion, and The White Lotus, tread gingerly and try not to gasp.] 

    It may be the most famous and chewed-over exchange in American literature that never actually took place, at least not in real time. In 1936, when the country was still in the hold of the Great Depression and in no mood for mooniness, Esquire magazine published Ernest Hemingway’s cinematic story “The Snows of Kilimanjaro,” a meditation on mortality and the beautiful consoling desolation of a cathedral mountain, all that. Amid the flashbacks and the regrets, the narrator couldn’t resist sneaking in a catty sideswipe: “He remembered poor Scott Fitzgerald and his romantic awe of them and how he had started a story once that began, ‘The very rich are different from you and me.’ And how someone had said to Scott, ‘Yes, they have more money’.” 

    That “someone” was of course Hemingway himself, unable to resist puffing his chest at “poor Scott”’s expense. Earlier the same year Esquire had published Fitzgerald’s revelatory confessional “The Crack-Up,” so it was understood that he was in a precarious state. Fitzgerald’s understandable ire at being mocked and misrepresented — he complained to their mutual editor, the Solomonic Maxwell Perkins — forced Hemingway to soften the passage later for hardcover publication and substitute the weak-water name “Julian” for “poor Scott.” Didn’t matter. Sophisticated readers knew the real score. For decades, the original back and forth in print was patted down and packed into a tidy conversational anecdote, with Hemingway’s snappy comeback considered by many (most?) the definitive retort — a bull’s-eye reality check — to Fitzgerald’s dreamy, minty-green Jazz Age romanticism.

    The verdict has been reversed over time. Is there any doubt today that Fitzgerald, swimming in the aqua sparkle of his own perceptions, had it right and Hemingway was talking out of his pith helmet? It was Lionel Trilling who defended the Fitzgerald case most elegantly. “The truth is that after a certain point quantity of money does indeed change into quality of personality: in an important sense the very rich are different from us…” It was true then and it is even truer in this millennium. The evidence pimp-slapped in our faces is that the rich are more different from the rest of us than ever before — they are evolving into a mutant species. 

    As the middle class is increasingly whittled thin — witness union jobs being replaced by a gig economy, the coronation of corporate executives, the premature knighting of Palo Alto wunderkinds, the emergence of Davos Man, and the saturation bombing of the airwaves with ad blitzes for online sports gambling and mega-millions lottery draws — the chasm is widening yearly between the have-somethings and the have-it-alls. It has only gotten worse since Covid, only widened. Tech billionaires, hedge funders, private equity predators, Saudi princes, Russian oligarchs, the former president who besmirched the office, and similar excrescences of turbo-charged late-stage lift-off capitalism have top-loaded this century into a second Gilded Age, one that even the ongoing global recession hasn’t been able to dent.

    A second Gilded Age might seem to be a bonanza opportunity for novelists, for some young, hip, penetrative Gen X/Gen M/Gen Z/Gen-whatever Edith Wharton to train her spy glasses or AR goggles on. But perhaps the spectacle of the mega-rich is simply more than contemporary novelists (a more inwardly investigating crew) can consolidate. The traditional big social novel of manners and disturbing flutters in the drawing room may be too antiquated an undertaking. The pursuit of great wealth and the cruel delight of writing the little ingrates out of your will have largely disappeared from serious fiction, as serious fiction itself has been eased into the infirmary. The strenuous toils of Theodore Dreiser (The Titan, The Financier) belong to an iron age. The stately mansions of later John O’Hara lie empty and neglected. Inherited money inhabited the background of Louis Auchincloss’s novels, but it was a listless resource, carpet-worn.

    The contemporary remakes and invocations of The Great Gatsby — will we ever be rid of them? — offered retro cosplay that’s unable to capture the lyrical lift of the prose, the goosy thrill of Puritan restraint being kicked to the curb and the human body flowing free as if for the first time. The field was left to commercial pop fiction to project fantasies of the rich, virile, fertile, and resplendent in potboilers whipped to a mad fandango by Jacqueline Susann, Judith Krantz, Harold Robbins, Shirley Conran, and other pagan immortals of the airport paperback rack. Some of the anecdotes in these concoctions may have been pinched from honest gossip but the overall effect was of escapist make-believe.

    For a brief fun time, before everything got engorged, the true signifiers of American wealth required a keen acquisitive eye to spot. They didn’t call undue attention to themselves, but were hostessy and understated, niblet-sized and exquisitely prepared. Truman Capote, who prided himself on being the keenest double agent inside the velvet folds since Marcel Proust, informed an interviewer that what separated the rich from the rest of us primates was their serving of tiny vegetables: “Delicious little tiny vegetables. Little fresh-born things scarcely out of the earth. Little baby corn, little baby peas, little lambs that have been ripped out of their mothers’ wombs.” Tom Wolfe’s New York magazine account of Leonard Bernstein’s fundraiser for the Black Panthers — “Radical Chic” — introduced us to the party with a rapture over the hors d’oeuvres. “Mmmmmmmmmmmmmmmmm. These are nice. Little Roquefort cheese morsels rolled in crushed nuts. Very tasty. Very subtle. It’s the way the dry sackiness of the nuts tiptoes up against the dour savor of the cheese that is so nice, so subtle.” Not that there wasn’t the occasional representative of wealth with more democratic taste buds. William F. Buckley, Jr., whose sailboats were stocked with wine and champagne before they embarked into distant latitudes, was addicted to a grocery store brand of peanut butter called Red Wing. But this seems to have been more of a quirky personal indulgence, not something he’d slather on Triscuits when Mrs. Kempner came calling.

    Prime time television, bless its bionic heart, filled the void left by serious fiction and classic Hollywood films and then, as television inevitably does, overfilled it. On network TV, extreme wealth was often troweled out as a bestowal of the golden promise of Southern California on the fortunate few, whether they be oil-rich lucky-strike yokels (The Beverly Hillbillies) or crime-solving playboys (Burke’s Law). The Reagan era became the heyday of the rich clan soap opera that had the swoosh and swoop of a pink poodle Ross Hunter production: Dallas, Dynasty, Flamingo Road, and Falcon Crest, where the matriarch was played by Ronald Reagan’s first wife, Jane Wyman. The storyline from Falcon Crest’s debut episode: “Wealthy vintner Angela Channing feels threatened when her nephew Chase Gioberti returns to Falcon Crest for his father’s funeral.” Angela Channing — a name that could ring church bells — was well advised to be on guard. On all of these soaps, barely a season went by when there wasn’t a misplaced nephew or niece or illegitimate son or daughter popping out of the topiary to demand his or her rightful due. It was also the era of the lavish mini-series adaptation, such as Lace, based on Shirley Conran’s bestseller, and remembered today for Phoebe Cates’s sneering icebreaker, “Which one of you bitches is my mother?”

    To complement the fictional exploits of the coiffed, avaricious, and scheming no-gooders, there gurgled up a new genre of reality television, pioneered by the guided tour through Lifestyles of the Rich and Famous, which premiered in 1984 and ran for over a decade. The popularity of the series ratified that in the 1980s it was no longer enough to be rich or famous, you had to be rich and famous, for that was the new American Dream. Each syndicated episode of Lifestyles was like a fawning vacation brochure or magazine spread with a bumptious, trumpety voiceover supplied by its irrepressible and uncharacteristically effusive British host Robin Leach, whose catchphrase motto “Champagne wishes and caviar dreams!” was like a wedding toast. Lifestyles of the Rich and Famous might have been relegated to the slag heap of a period novelty if it hadn’t inspired copycats such as MTV’s Cribs (2000 to present), documentaries about the palatial lives of fashion designers (Valentino: The Last Emperor, in 2008), and, most infectiously, the “staged reality” extravagant lunches and battle royals of Bravo cable’s Real Housewives franchise produced by Andy Cohen, the David O. Selznick of Ryan Seacrests. It is hard to keep track of how many cities have rich Real Housewives emerging from limos and going Godzilla. Every urban squad of prancing, feuding divas takes valuable time out from babying their tiny pedigree dogs to attend restaurant openings, disparage their frenemies, fling drinks in each other’s unreal faces, and point menacingly long fingernails as they trade heavily bleeped-out trash talk. The campy, pool-splashing, hair-pulling catfight between Krystle and Alexis in season three of Dynasty was the precursor for every “Real Housewife” dominatrix match.

    Cementing fan loyalty and tabloid fever between seasons are the real-life headline scandals that leave cracks in the fake-real facade, as with the arrest and imprisonment of Jen Shah (Real Housewives of Salt Lake City) and Teresa Giudice (Real Housewives of New Jersey), and the commotion on Real Housewives of Beverly Hills over “powerhouse attorney” Tom Girardi, the then-husband of ice-blonde aspiring disco goddess chanteuse and BH housewife Erika Jayne. Girardi was accused of stealing money from clients, some of them desperate and destitute, amid a roiling undercurrent of suspicion that Jayne, hardly a spotless lamb, had to be aware of what hubby was up to — the old fool had been bankrolling her vanity career. It made for many squinty, tense pauses and spitfire moments on Real Housewives of Beverly Hills. In relating this, I acknowledge that to the uninitiated it may sound as if I’m speaking Romulan — as when I tried to explain Buffy the Vampire Slayer for the dubious enlightenment of the late John Simon — but this is the streaming canal in which some of us oar.

    Although in toto these reality TV fishbowls reveal sociological glints of how we live now — or rather how they live now — their blatancy appeals to low-information, viral-clip viewers in need of incessant cheap kicks. For an immersive experience of how the richy rich think, act, behave, misbehave, maneuver, socialize, enjoy their toys, ignore their children, speak in code (“like real Americans, they always talked in code,” to adapt an insight from Norman Mailer), maintain the pecking order, monitor the perimeter, and forge a phalanx whenever they move in concert, only high-budget, hierarchy-obsessed, mission-driven dramas and satires will do. Only they can muster the necessary resources of screenwriters, directors, actors, costume designers, location scouts, etc., to evoke and enter the distortion field of spoiled monsters and damaged psyches.

    For a curated verisimilitude, Tom Wolfe-worthy signifiers are strategically implanted in the most fanatically detailed film and television chronicles of the super-rich, whether it’s the strictly-business “stealth wealth” (as if there is such a thing) black ball caps that the male Roys wear screwed tight on their heads in HBO’s Succession, which has concluded its triumphant four-season run, or the Audemars Piguet Royal Oak Offshore Camouflage timepiece and Randolph Engineering Aviator sunglasses that Bobby Axelrod (Damian Lewis) sports in Showtime’s Billions. These accouterments of killer cool seem to have been issued by the murders and acquisitions division to princelings and upstarts who compare themselves to modern-day pirates, gangsters, apex predators, and fighter pilots, and pride themselves on their agile wits, their mastery of Machiavelli, Sun Tzu, and Jedi moves (yet have a spaz if their favorite bottled water arrives a tad lukewarm). Underachievers are kept under constant notice. The trading floor at Bobby Axelrod’s Axe Capital is a glassed pavilion dojo where only the top survive and flushed-out schmuckos take the walk of shame carrying their belongings in a box. Unreluctant to deploy blackmail and hardball tactics, Bobby Axelrod is Michael Corleone with a bouncier step. Neutralize your foes with extreme prejudice and the gormless boards of directors will fall like dominoes. The scope of the wealth and global designs of Axelrod and his rival plunderers make “the masters of the universe” in Tom Wolfe’s The Bonfire of the Vanities look like tiddly-winkers.

    Tom (Matthew Macfadyen): “Umm, do you want… a deal… with… the devil?”

    Cousin Greg (Nicholas Braun), after a pause: “What am I going to do with a soul anyway? Souls are boring.” 

    Succession, Season 3, Episode 9. 

    Succession is less of a bro fantasy than Billions, more of an acid bath where illusions and ideals are dissolved and sentimentality separates from the bones, which is why many deem the series cold, heartless, and intractably cynical. As if Jonathan Swift were some sweetheart. Created by Jesse Armstrong, Succession’s line of attack fuses the scabrous, scorpion invective of The Thick of It, In the Loop, and Veep with the infighting, stylized lingo, and devious subterfuge of peak David Mamet. No series has consistently shown greater gunslinger skill with caustic sound bites while keeping tabs on the chief imperative. In Mel Brooks’s Silent Movie, in 1976, the Hollywood studio modeled on Gulf & Western was named Engulf and Devour, which could double as the corporate handle for Succession’s Waystar Royco conglomerate, with its portfolio of theme parks, cruise ships, and troubled film division. (All film divisions are troubled.) Its crown jewel is the innocuous-sounding American Television Network, or ATN, a red-meat, right-slanted “bigot spigot” cable news operation capable of driving the national dialogue, dictating the next president, and dragging even the loftiest reputations through the muck. Any resemblance to Fox News and the Murdoch family is strictly intentional and the ATN lineup struts its own Tucker Carlson/Sean Hannity anchor stud, a Nazi-flirting smug vacuity who bears the perfect evil name of Mark Ravenhead.

    Clinging to the throne and constantly chafing in irritation and fury at the fools around him, many of them family members, is Royco patriarch Logan Roy (Brian Cox, an aging lion who can inject menace into a simple “Uh-huh”), an old-school, analog-bred media magnate who minimizes mind games to go for the throat or the groin or, preferably, both. Like HBO’s other iconic anti-hero, Tony Soprano, Logan Roy possesses an animal cunning for reading situations and subtle momentum shifts — a psychological sniffer for who’s with him, who’s against him, who’s wavering, and who needs to be gang-planked. Animal is the word. “Boar on the floor!” is Roy’s sadistic idea of a parlor game, and he vows to go “full fucking beast” on his foes. When he prowls the floor of the ATN newsroom in ominous sunglasses, one character says, “It’s like Jaws if everyone in Jaws worked for Jaws.”

    Seeking to extricate itself from its attachment to old media (newspapers, local television stations, basic cable, movie production), Waystar Royco, mighty as it is, fears being gobbled up in a single gulp by some digital baron “Zucker-fuck.” The younger tribe of Roys — primarily sons Kendall (Jeremy Strong) and Roman (Kieran Culkin), and daughter Siobhan, better known as Shiv (Sarah Snook) — angles to divorce itself from the doom and chaos they’ve done so much to sow and join the newer species of super-rich that originated in Silicon Valley or some other incubator of myth-hype, red pill hubris, and algorithmic domination. This gnawing awareness of the vexing gap between the traditional 1% and the top 1% of the 1% is laid out raw in Succession when Roman, in one of his conversations with his sexty, executive mommy-figure Gerri Kellman (J. Smith-Cameron), promises that if their scheme pays off, “You will get properly ‘fuck you, fuck you I-don’t-even-care-about-climate-change I’m-in-New-Zealand-with-my-own-private-army’ rich. Not like some pathetic asshole beach house on the Vineyard rich.” A beach house on Martha’s Vineyard, so lame.

    The New Zealand reference in Roman’s spiel alludes to Peter Thiel, whose plans to build an extensive luxury lodge alongside a mountain-surrounded lake as part of his apocalypse insurance policy have been thwarted by local authorities. Thiel is not alone in preparing blueprints for when everything goes kerflooey. Douglas Rushkoff’s recent Survival of the Richest: Escape Fantasies of the Tech Billionaires is an account of an elite group of doomsday preppers who intend to ride out the coming social disorder and pestilence in scenic, remote compounds that will be self-sustaining, impregnable to zombie invasion, and lavish in their civilized comforts as the earth bakes. In times of trouble you can always count on the rich. The undying wet dream of wealthy anarcho-libertarians has been for their own Galt’s Gulch, the mountain hideaway in Ayn Rand’s Atlas Shrugged where society’s elite doers, makers, and dissenters isolated themselves from the shabby ranks of takers, losers, and liberal simps with their meeching platitudes. Some of the Gulf States are planning glass-domed mega-cities in the desert powered by wind and solar that will contain their own amphitheaters and five-star hotels. “The plan, distilled, is to become the global headquarters for the mega-wealthy,” Scott Galloway writes in his newsletter No Mercy/No Malice. That will do for most, but for a few visionary billionaire survivalists the world is not enough, to borrow the title of a James Bond film. Elon Musk, as everyone knows, has made it his mission to colonize Mars, and Amazon’s Jeff Bezos and Virgin Galactic’s Richard Branson are competing astro cowboys with their satellite launches and dreams of orbital tourism.

     Since space colonization is going to take a while and underground compounds lack eye-candy and are indistinguishable from the underlit subterranean nerve centers in Marvel movies and dystopian sci-fi, the preferred getaway in movies and TV for rest, relaxation, and inviolable refuge is a private secluded island that combines the fortress capabilities of Dr. No’s Crab Key with the lush splendor of a tropical paradise. Guests arrive by invitation only, their presence a privilege extended by the host, who cloaks an ulterior motive or two beneath the too-hearty bonhomie. Everything is arranged to perfection, the hospitality staff seamlessly appearing and reappearing as if on winged feet. Or smooth rollers. Robot sherpas serve as the luggage conveyors in Glass Onion, Rian Johnson’s successor to the improbably successful Knives Out, once again centering a tangerine-skinned Daniel Craig with a preposterous Southern gumbo accent trying to solve an Agatha Christie-ish whodunit where clues appear and vanish like magic coins. As in Christie’s And Then There Were None, the great-grand-mommy of this elimination game, a group of unsuspecting strangers have been summoned for a remote outing only to find themselves at the mercy of machinations that produce many a scream and squeal.

    Here the host is billionaire Miles Bron, played with vapid gusto by Edward Norton and blatantly modeled on Elon Musk, a purported galaxy brain tech visionary who has hoodwinked the press and the public into believing that his company’s innovations all sprang from his fecund brow. Bron’s guests are fellow futurists and disruptors — “Disruptors have assembled!” he cries, as if hailing the Avengers — but there is no question that he is the alpha dude disruptor supremo, its Tony Stark. (One of the many Twitter nicknames hung on Musk is Phony Stark.) A culture-philistine (to use Nietzsche’s term) whose Rothko painting is hung upside down, Bron has come into possession of the actual Mona Lisa, which he hopes to deploy as an ace card to impress world leaders and broker global peace or something equally grandiose. The Mona Lisa in its translucent case is just a MacGuffin to keep everyone’s eyes off the misdirection.

    As in the later, glossier Hollywood adaptations of Agatha Christie (the original Murder on the Orient Express and Death on the Nile, not the clomping Kenneth Branagh remakes), Glass Onion unfurls a bright tapestry of brittle pleasantries, shadowy motives, fidgety gestures, ominous foreshadowings, and devious mind games, heightened by the showy entrance of an unexpected intruder — Janelle Monáe, who is tasked with projecting old Hollywood Rita Hayworth/Ava Gardner-ish glamor and delivering a flat tire. Old Hollywood homaging as well is Johnson’s penchant for peppering the film with amusing cameo pop-ins: Hugh Grant, Jared Leto, and, making his final screen appearance, Stephen Sondheim, whose co-written script for The Last of Sheila was an inspiration for the Knives Out enterprise. The dolled-up cast seems to be having a grand time, which is part of the Easter egg attraction of these films for entertainment buffs and makes them a harmless exercise in slumming for those of us observing from the sidelines.

    Meanwhile HBO’s The White Lotus miniseries — two seasons thus far, with a third season under construction — also stacks its cast with familiar movie-TV faces but it mixes in provocative newbies to freshen up the entourage and add cosmopolitan flavor. Created, written, and directed by Mike White, whose métier is the comedy of creeping unease, The White Lotus pulled off the feat of formulating its own aesthetic from the outset, a cushy ambiance that provides its own commentary on the action. The affluence on display is a confluence of air and attitude, a state of grace soon to come unpeeled. The new wealth — generated by crypto, software, apps, TikTok and Instagram product endorsements, sponsorships, and celebrity appearances, OnlyFans stardom, and new sluices for money laundering — seems to have been conjured from nothing, with no apparent effort from its beneficiaries. It seems to flow wherever they go, and for these surfers of invisible, undulating currents, expensive possessions (art, rare wines, diamond tiaras, Architectural Digest interiors) are less existentially desirable than the supreme ease of movement. They’ve come to expect everyday life to be a series of seamless transitions, like a single gliding Steadicam shot from beautiful dawn to beautiful dusk, not that they pay much attention to either. They are too busy contemplating the wonder of their being and why it doesn’t make them happier, more receptive.

    One of the ingenious aspects of The White Lotus is how it reveals the way narcissists become not just spoiled but infantilized by their idyllic lifestyle fantasies. The slightest hiccup in service or hitch in itinerary and they turn into crybaby complainers, offended by every unscheduled raindrop. It is also the series that has most shrewdly incorporated the incel as a human pathogen and negative-energy capsule. Integral to the series’ sensibility is a pervasive, nullifying affectlessness — a cool, glib, eyes-shaded anomie that comes across as so Californian. Caring about something or someone risks being considered uncool; just breathe in and out the moment, dude. Practicing mindfulness only staves off the demiurges for so long, however. Those looks your spouse is giving you when she lifts her Ray-Bans — they spell trouble.

    The plot intrigues of The White Lotus stress-test the vacationers until they shed their protective coating and expose how their wiring really works under crisis or when temptation beckons. Jungian shadow elements can be teased out most tantalizingly for prestige-TV viewers in a balmy picture-postcard getaway where the characters’ inhibitions and self-defenses melt away. This is the overriding advantage that serial television has over feature film: revelations can be hinted at and winkled out over the course of several episodes rather than blurted out in a single bolt. Pressure becomes more systematically applied. The White Lotus also devised its own clever variation on the Agatha Christie formula. Its first season opens with a coffin being loaded on a plane and the question becomes, Who, among the vacationers we’re about to meet, leaves in a box? Who’s the mystery corpse?


    Jarmo (Henrik Dorsin): I’m very rich. Yes, let’s not beat around the bush. I’m very rich.

    Ludmilla (Carolina Gynning): How rich are you?

    Jarmo: Oh, I’m so fucking rich!

    —shipboard conversation, Triangle of Sadness

    Ruben Östlund’s Triangle of Sadness — the title, not as portentous as it sounds, refers to a geometrical frown patch that can appear between the eyebrows — takes place on a luxury yacht where a party of privileged wankers is being feted by an infinitely patient crew. (Luxury yachts are for the depraved-rich genre what opera houses were for Balzac.) They are joined by a pair of bodies beautiful, Carl (Harris Dickinson) and Yaya (Charlbi Dean Kriek), he a male model, she an Instagram influencer — a matched set of dollhouse clichés to go with the other loaded stereotypes lounging on the deck. For all of its auteurist mojo handjive (individual chapter titles, a long introductory scene that exhales documentary dead air, a distended running time, a subverting Ironic Twist), the film is a self-pleased demonstration of shlock instincts and facile follow-through. Its cavalier knowingness thinly camouflages a rather sophomoric class struggle on a Ship of Fools (or, if you prefer, a Love Boat of the Damned), which coarsely degenerates, during a raging storm, into a duel of capitalism-versus-Marxism quotation-mongering between the socialist Captain (who doesn’t need a proper name in the credits, given that he’s portrayed by an instantly identifiable Woody Harrelson) and a Reagan-idolizing Russian oligarch (Dimitry, bellowed by Zlatko Buric).

    As the yacht is buffeted to and fro, their audio chatter counterpoints the passengers’ volleys of projectile vomiting — pea-soup geysers that outdo The Exorcist‘s Linda Blair in spray radius and velocity. After the boat goes down, sparing us any further shots of Harrelson slovenly smacking his lips, a few ragged survivors wash ashore on what appears to be a deserted island. A power reversal ensues as Abigail (Dolly De Leon), the yacht’s lowly cleaner and “toilet manager,” takes charge, and the catered-to have to fend for themselves and barter for favors. Roughing it doesn’t come easy to these softies. This dictatorship of the proletariat seems destined to meet a premature end when it is disclosed that a spa resort lies nestled on the other side of the mountain — who knows, possibly another White Lotus. No matter how far away and Robinson Crusoe-ish an island may seem, cabanas spring up like toadstools and beach umbrellas are planted like victory flags. Global capitalism will not be denied.

    Where Succession, Glass Onion, Triangle of Sadness, and The White Lotus have their slapstick, farcical sides, their jarring pratfalls, The Menu is staged with the solemnity of a Passion Play, which gives its flashes of dark humor far more incision. Directed by Mark Mylod, who has honed his needlepoint precision directing numerous episodes of Succession, The Menu presents a sacrificial rite disguised as a unique dining experience — a masque of the red death with impeccable table arrangements and flawless plating. As with The White Lotus, Triangle of Sadness, and Glass Onion, the experiencers in The Menu are a group of achievers and hangers-on who fancy themselves inside dopesters of discernment. They’re ferried to a remote island — but of course, where else? — for a special multi-course meal prepared by chef assoluto Julian Slowik (Ralph Fiennes) and his ninja staff. Before each course the chef offers a brief introduction to the dish and a homily intended to aid their appreciation of the thought, the finesse, and the distinctive and locally sourced ingredients. “A course of a single raw scallop perched on a craggy rock and surrounded by carefully tweezed seaweed and algaes is virtually indistinguishable from an actual dish at Atelier Crenn, a San Francisco restaurant with three Michelin stars,” The New York Times helpfully reported. Authenticity in details adds to the absurdism of what transpires, the flattering foreplay for la grande bouffe.

    Although the cult of the (male) chef has taken a deserved blow in recent years with the sexual harassment allegations against numerous former television cooking-show celebrities, the mystique remains, which Fiennes wears like an untarnished crown. His Chef Slowik is impresario, emcee, choreographer, wizard, and samurai of the cutting board. His authority radiates from a tight core of uber-willpower, a testimony to Fiennes’s gift for containment and slow release. It’s frightening how his tight smile sometimes lingers a beat too long, surveying the room. Slowik’s control only begins to betray hairline cracks when some of the feeders, instead of accepting their roles as congregants, begin to behave like customers. They ask for additional seasoning, request substitutes, or get overly chatty and show-offy. He declines to alter the menu with a Caesar-like smile that tenses as the evening proceeds and his patience snaps. In the furious pride and concentration that go into the preparation of his dishes, Slowik resembles a five-star extension of Seinfeld’s Soup Nazi. But where the nicknamed Soup Nazi was satisfied to boot and permanently banish annoying customers from his establishment, Fiennes’s Slowik metes out the punishments of a Lord High Executioner. One of the victims, a past-his-prime action star named Georgie Diaz, played by John Leguizamo (and based, says Leguizamo, on Steven Seagal), is condemned for appearing in a film that Slowik saw on his day off, hoping to be entertained. He was not entertained. He was most disappointed. For this, Diaz must pay. It is a tribute to Fiennes that he makes this explanation sound eminently reasonable. Anyone who has sat stonily through an Adam Sandler comedy can sympathize.

    Like the passengers in Triangle of Sadness and the vacationers in The White Lotus, the A-listers in The Menu have become so accustomed to telling others what to do that they don’t know what to do when they have no one to boss around — when they’re the ones being bossed. Even the wait staff doesn’t indulge their piques. The guests’ inner resources have so shriveled from neglect that most of them are unprepared for the impact of what the critic Marvin Mudrick once called “life direct,” and by the time their instincts kick in it’s too late — they’re fodder. The interesting twist in The Menu is not how feebly the diners resist and how quickly they capitulate, but that they and the staff start to accept that perhaps they have earned their place on the pyre. They whimper, they plead, they submit. This is the price for not having lived the right life. This makes for neat allegory, and the tidy violent end — a choreographed die-in — panders to an audience’s yearning for retribution that real-life villains and greedheads seldom face. 

    The sole escapee from the bloodbath and conflagration is Anya Taylor-Joy’s Margot, who munches a simple cheeseburger that Slowik has made for her with care and devotion. By her unpretentious all-American taste and cheeky insubordination, Margot is spared the bonfire of the vanities. She is also granted absolution because no one wants to see Anya Taylor-Joy killed off, just as no one watching season two of The White Lotus would have wanted Aubrey Plaza to get the tarp pulled over her. In a difficult, perilous time, Hollywood needs a few unexpendables to keep audience identification from being irreparably severed. Leave all-out nihilism to the mad-hatter satirists.

    The problem with most extreme movie satire is that it has nowhere to go but into overkill. From Dr. Strangelove to Don’t Look Up, the nerviest expeditions rely on a cataclysmic finale to spike their message in the end zone. The depiction of class warfare in High-Rise, from 2016, adapted from J. G. Ballard’s novel, descends into chaos, anarchy, orgiastic stabbing, and the roasting of a dead dog’s leg. It’s that kind of film. Glass Onion, after preening its insouciance through reams of repartee and exposition, climaxes in a giddy orgy of glass sculpture smashing and the fiery destruction of the actual Mona Lisa, a priceless touchstone of Western art torched because of a billionaire’s vainglorious ego. 

    The largest discharge is not blood or fire or showering debris but vast deposits of merde. When push comes to shove for the ultra-rich, all crap literally breaks loose. It is almost a psychoanalytical banality, these filmmakers’ preoccupation with this conjunction of wealth, shit, and mortifying incontinence. “We [know] about the superstition that connects the finding of treasure with defecation,” Freud wrote in “Character and Anal Eroticism,” to reinforce the point that, on the subconscious level, “feces have always been understood as a form of currency.” (I owe this to Simon van Zuylen-Wood’s jaunty, punny essay “Feces and the Gold Standard: A Psychological Explanation of Goldbuggery,” published in The New Republic in 2012.) The Magic Christian, the novel written in 1959 by Terry Southern and adapted into a now-forgotten film a decade later with a mind-boggling cast (Peter Sellers, Ringo Starr, Laurence Harvey, Roman Polanski, and Raquel Welch are among the duped), ends with desperate, greedy saps fishing for pound notes scattered in a large tub filled with urine, fecal waste, and other unpleasantries. The burly Russian oligarch in Triangle of Sadness, who has made his fortune in fertilizer, proclaims himself “The King of Shit!” He keeps crowing the word as if to rub his fellow travelers’ noses in it. The film soon rubs our faces in it, too. Its key punchline edit comes when Harrelson’s Captain extols the $250 million craft being pummeled by the elements and the next cut is to a miserable passenger crouched on a soiled toilet. Freud also hypothesized that misers were those who had held in their stool as children, hoarding money in adulthood, which may support the revelation in Succession that Logan Roy died while trying to fish his iPhone out of a clogged airplane toilet. The symbolism is almost too much.

    This scatological imperium may have been foretold in a Hollywood film that many of us scoffed at in its day, and rightly so. In 1974, at the weary end of the disaster epic The Towering Inferno, the disillusioned architect played by Paul Newman proposed that the burnt-out, windows-shattered one-hundred-and-thirty-five-story hulk be left standing as “a kind of a shrine to all the bullshit in the world.” Even in our Watergate-era cynicism, what naifs we were then. We little suspected how much worse would be in store. The world’s bullshit supply was still in its developmental stage, amassing its resources to achieve full sentience, establish free-market capitalism as the undisputed queen of the ball, and extend privatization into every sphere and cranny of endeavor until much of mankind would be superfluous and disposable, supplanted by intelligent machines. 

    It is in the nature of satire to go too far, but now “too far” scarcely feels far enough, given the enormity of the wealth accruing to those at the top of the diamond pyramid and the social fissuring below. If there is something punitive and body-horroring about so many of the films and series about the super-rich, it may reflect the frustration that no matter what the crimes and excesses of the Moneygods, karma isn’t coming for them — the fix is in. Their escape pods are loaded up and at the ready. So karma has to be dealt out on screen, with as wicked a hand as necessary and a wham-bang finish. Rough justice may not be real justice, but you take your reckonings where you can get them.

    After Neurocentrism

    Some thirty years ago, with the Bush administration’s launch in 1990 of the “Decade of the Brain,” neurocentrism took hold in the Western world — America, Japan, and Europe. It held on well into the aughts. Neurocentrism is the belief that the brain is the seat of the mind, that they are in some sense the same entity, and that therefore one can understand mental and psychic life by understanding the brain, which is often dubbed the most complex object in the universe, with its estimated eighty-six billion neurons and hundred trillion or so synaptic connections. As a consequence of discussions about the brain already underway in the 1980s among upper-level American science agencies, councils, and associations, the government awarded generous funding for research in neuroscience, psychology, and neurology. The program aimed in large part to address the staggering cost of neurodegenerative diseases, which was (correctly, as it turned out) predicted to increase massively over the following decades, as well as to study the etiology and the effects of neurological disorders and accidents.

    An underlying assumption of the program was that it would constitute one of the ultimate achievements of humankind to unravel the brain’s functioning. A similar hope was pinned on genetics, with the Human Genome Project launched in 1990, and a similar equivalence posited between genomes and selves. If one came to grips with the biology, in short, one would finally understand the nature of life, identity, and consciousness. There was an essence that one could seize. Science would yield ultimate truths. In those years a reductionism of mind and life to their constituent parts prevailed; it was galvanized and encouraged by the optimistic ethos of the time. Popular books about the brain, and also about genetics, flourished. To be sure, there existed corners of resistance to reductionism, in the name of phenomenological complexity, with philosophers of mind exploring the nature of consciousness — the Journal of Consciousness Studies, for instance, was founded in 1994, providing a forum for collaborations between philosophical speculation and empirical data, and interdisciplinary conferences on the topic began to take off then. But this resistance took place within rarefied academic spheres. Neurocentrism was easier for non-specialists to comprehend.

    Meanwhile the fields of mind sciences grew, and multiplied, separately from biological neurosciences, insofar as the term “mind” designates not a physical entity but the abilities that allow organisms to function in and interact with the world. A “Decade of the Mind” was announced in 2007. The cognitive sciences yielded modular models of the mind, represented for a while as subdivided into mechanisms supposedly developed during the Pleistocene. Importantly, these cognitive sciences were a formidable and fertile response to the behaviorism that had preceded them, insofar as they supposed, in contrast to behaviorist assumptions, that there was indeed such a thing as a mind that could be studied. The association of the cognitive sciences with neuroscience then gave birth to cognitive neuroscience, which made use of imaging technologies to explore mental functions.

    The appearance in 1991 of functional magnetic resonance imaging (fMRI) — a technology that allows one to observe the brain in action — was a historical revolution whose impact on the imagination was not unlike that of the moon landing. It seemed to announce a bold, bright future when one could finally peer into places that had never before been visible. The first steps into this future were taken, however, with a measure of presentist and materialist hubris, and often at the cost of philosophically informed subtlety. Now, three decades later, research and funding continue, and rightly so — but the mood, the priorities, and the assumptions have radically changed. And so it is time to take stock of where the mind sciences are today, and what place these sciences now hold in the collective imagination, especially in light of the bewilderingly rapid evolution of computer science and, most recently, of artificial intelligence — an expression whose assumptions also need parsing.

    At its height, neurocentrism, in its excitement, generated countless claims about the cerebral location of mental functions, delivered in the media as so many revelations about the “place” of the deepest aspects of human experience — cognition, language, volition, emotion, and even artistic and religious feeling. Experience was “in the brain” and therefore, somehow, better understood, or so went the claims. In this respect, the historical moment was structurally reminiscent, at least, of phrenology — popular in the early decades of the nineteenth century until it was dismissed as pseudoscience — according to which the brain’s divisions determined personality, qualities, and faults, and mental functions had specific, visible locations that could be measured by the bumps of the skull. The neurocentrist excitement was also reminiscent of the older urge to posit a “homunculus” inside the organism to account for its operations — no matter that this created an eternal circularity, since assigning functions to some agent or structure in the brain begs the question of how that agent or structure works in the first place. It begs the question, too, of how one can determine a causal link between the structure and the function. Extracting indubitable causal connections out of a myriad of observed correlations remains, as it happens, a central puzzle and problem for brain science, as indeed it is for all sciences.

    The excitement provoked by fMRI is understandable. Thanks in part to this and other novel imaging technologies, highly important work has emerged since the 1990s in all fields of neuroscience as well as neurology, in brain anatomy and physiology. A focus on neural networks replaced the erstwhile localizationism, and the identification of the functions of decidedly interconnected brain areas became, and continues to become, more and more fine-grained. Advances in neurophysiology fed into the development of neuropharmacology and of second-generation targeted medications for psychiatric disorders — though the functioning of these pharmaceuticals remains inadequately understood and their use can be controversial. New tools such as optogenetics were developed to study the behavior of individual neurons. There developed, and there continues to develop, a better understanding of the etiology of various dementias, despite their continued intractability. Since 1990, there has also been the invention of deep brain stimulation (DBS) to counter some of the symptoms of Parkinson’s, and of transcranial magnetic stimulation (TMS), a non-invasive technique that acts upon and helps to decode neural activity during specific tasks, such as recall or attention. The genetic basis of some devastating neurological diseases, such as ALS or Huntington’s chorea, was identified in the course of these studies. The use and applications of computational neuroscience grew tremendously, allowing for the refinement of our understanding of attention, orientation, and vision, and for the creation of brain-computer interfaces that allow paraplegic patients to move again — and the tools and concepts that it deploys grow in sophistication every day.

    The various imaging technologies, moreover, are growing increasingly refined, in some part because awareness has also grown of how complex it is to interpret the images that these technologies yield. Indeed, it is much more of a cultural given today than it was in 1990 that there are no readable maps of any kind without signposts, nor without readers who know the signposts — and that neither signposts nor readers are devoid of bias. A map is never a one-to-one rendering of what it represents. Studies of the brain are not limited to the interpretation of images, in any case. They can provide support for studies that take place at multiple levels — genes, molecules, neurotransmitters, single neurons, and neural networks — and within numerous interconnected subdisciplines, such as neurophysiology, developmental neuroscience and psychology, social psychology, cognitive and affective neuroscience, computational neuroscience and robotics, all of which converge by now with the many fields pertaining to the cognitive sciences and psychology. With the development of epigenetics, the impact of the environment on infants’ and children’s psychic development, and thence on lifelong mental health, has become much better understood, too. The gene-centric biological determinism that had initially characterized the decade, and that could be used to justify conservative social and educational policies opposing investment in public education, not to mention outright bigotry, thus ended up being undermined by some of the very projects that the Bush administration had underwritten. 

    In other words, neurocentrism is no longer the calling card of the mind sciences. It had come along with public enthusiasm for all things “neuro,” which for a few years became a ubiquitous, homunculus-like predicate in the media and publishing worlds. It then waned, of necessity, as public enthusiasm gave way to neuro-fatigue. Public interest in the neurosciences and psychology does continue to simmer today, especially when they address general issues of psychology and well-being, psychiatric disorders, and neurodegenerative diseases, but the reductionist enthusiasm has ebbed — much for the better.

    Now we have a different problem. The recoil went too far. Related to the change of mood is the less welcome growth of skepticism with regard to scientific research generally. Broadly unaware that scientific results and interpretations are provisional, the public tends at once to overvalue and to undervalue the scientific enterprise — attributing to it a capacity to deliver certainty, as it seemed to do at the height of the neurocentric decade, and dismissing it when this desired certainty is not at hand. This misconception of science is precisely what feeds into pseudosciences like the phrenology of yore and the dangerous myths of today, from anti-vax theories to the denial of climate change. An awareness of what scientific research is and what scientists do is necessary for the proper calibration of trust in the value of their expertise. This holds true for the mind sciences as well. These need to be, and in fact increasingly are, informed by philosophical argument and humanist concerns, since by its very nature, the study of the human mind at work upon itself remains a minefield of confusions. 

    These confusions regarding the scientific study of mind are far from new. The study of the brain as it has been practiced since the late nineteenth century, when the neuroscientist Santiago Ramón y Cajal ushered in modern neuroscience with his discovery of the neuron, is not the study of the mind and the psyche. The study of the brain zooms in on the organ and its microscopic components. The study of the mind, by contrast, starts from observations of human behavior. Studying the brain does indeed provide clues about the mind, but the obverse does not necessarily hold, especially given how complex the brain is. The territory has been mapped, the places named, the sulci and gyri identified, some functions recognized and some mechanisms understood, to some extent — yet so much of it is still unknown. In 2004, the neuroscientist Gerald Edelman entitled a book on consciousness Wider than the Sky: The Phenomenal Gift of Consciousness, a title drawn from the poem by Emily Dickinson — “The Brain — is Wider than the Sky” — that is unavoidable for those who want to conjoin humanist musings about meaning with the hard-edged world of scientific experiments. Edelman offered an attempt, one of many at the time, to show how the brain gives rise to the mind.

    But this does not mean that the brain and the mind are identical. And along with the ebb, since the Decade of the Brain, of reductionist enthusiasm and phrenological equivalence, there have emerged over the past two or three decades increasingly rich theoretical constructs and empirical data bolstering arguments against the identity of mind with brain, arguments that until recently were purely the remit of philosophy. The philosophers may enjoy contemplating the mind at work upon its own processes, as they always have. But today the empiricists — neuroscientists and psychologists — are in a better position to provide answers to some of these philosophical questions. They look at ourselves from without, while trying to build an image of the thinking, subjective entity that we each are. 

    It does seem obvious that without the brain there would be no mind — and no advanced animal life at all. The Hippocratic doctors of ancient Greece were craniocentric, as was Plato — though not Aristotle, who believed that the heart was the seat of it all. But brain and mind are, by definition, different entities. We know what the brain looks like. In contrast, no one has ever seen a mind. Nothing visible contains the mind — not even the brain. The assumption that the brain produces the mind, at least to some degree, arose at some point in the history of self-conscious humans out of correlations between accidents and behavioral changes — well before, of course, the advent of brain imaging. Human psychology, however, concerns persons, not the gooey organ within their skulls. Brains, in sum, are necessary for minds, but minds are not reducible to brains.

    In fact, out of the ancient recognition of how strange it really is that intangible mind should arise out of tangible matter, there was born the renowned “mind-body problem,” which posits the irreducibly mysterious nature of higher mental life and consciousness. The apogee of this problem in the West was the dualism perfected in the seventeenth century by Descartes — famously mocked by Gilbert Ryle three centuries later as “the ghost in the machine” — which split apart immaterial mental experience from the material body in which mental life took place. Dualism is a powerful and seductive theory, because thoughts and feelings do not have the concreteness of matter, at least in everyday experience. It also informs religious beliefs about life after death in many societies besides Western ones: the awareness of the self-aware mind goes hand in hand, metaphysically and anthropologically, with the awareness of death. The West was marked by the Cartesian version of dualism, not least because it was compatible, including for Descartes himself, with Christian dogma. Thoughts and feelings pertained to the immaterial (and therefore immortal) dimension of humans, which was called the soul, and non-human animals were considered soulless, mortal mechanisms.

    Within the framework of secular modernity, there was no longer any political need to please the Church with a metaphysical doctrine of human exceptionalism. With Darwinian evolution, and the accompanying reconception of humans as evolved animals, human cognition and emotion could be studied in scientific rather than metaphysical terms. From the late nineteenth century, scientific psychology, as established most notably by William James in America, Wilhelm Wundt in Germany, and Théodule Ribot in France, began to parse our corporeal beings, and did away with any immaterial soul. (In this regard James’s view of spiritual life was an exception within scientific materialism.) The psychology that emerged then took as a given that subjectivity could be studied through a combination of observation, introspection, measurement, clinical evidence, and philosophical acumen.

    But scientific psychology did not put the mind-body problem to rest. The anxiety bred by the conception of ourselves as mortal animals has never gone away. Religious feeling persists in all corners of the world. And the metaphysics of mind are not reducible to the science of mind: the mind-body problem has remained in the philosophical conversation of the last and current centuries, in particular with the so-called “hard problem” of consciousness, in the formulation of the philosopher David Chalmers. As he contended in 1996, however successful we may become at parsing the mechanisms involved in “the cognitive and behavioral functions in the vicinity of experience,” missing from the picture are the qualia of experience — that is, what it is like to have any experience at all, in the oft-quoted words of Thomas Nagel, whose essay “What Is It Like to Be a Bat?,” published in 1974, remains a reference point for the anti-materialist argument. The problem as Chalmers states it delimits conceptually the bounds within which an empirical account can have explanatory power, and beyond which it acts somewhat like snow that will not stick to a persistently slippery terrain. On this view, experience and biology partake of two different orders, and so consciousness necessarily escapes the physiological mechanisms that make it up.

    Not everyone agrees that there is a “hard problem,” however: the snow, so to say, could eventually stick. The neuroscientists who study the nature of felt experience take for granted that it is a biological phenomenon through and through, and on a continuum between lower-order and higher-order mechanisms. Their concerns are not philosophical: whether or not the problem of consciousness exists is not relevant to their empirical research. And it is noteworthy that the notion of a “hard problem” arose as such — as a problem — precisely at the height of the neurocentrist Decade of the Brain, when, with materialist reductionism at its apogee, the old mind-body dualism was replaced with a brain-body dualism that split the brain apart from the rest of the body. This split ensured that cognition was studied apart from the embodied brain, irrespective of the biology involved in cerebral activity, of cell physiology, of genetics, and also of the environment in which the living organism develops and lives. It fed into the development of computational neuroscience, and of its conception of cognition as disembodied and affectless, out of the cybernetics of earlier decades. It also sundered the continuum between the lower-level mechanisms that Chalmers, like his early modern predecessors, had deemed available for empirical study, and the higher-order ones that he deemed impregnable to empirical accounts. Now the human animal was as if split into three parts: machine-like brains, machine-like bodies, and disembodied minds.

    This brain-body dualism has begun to diminish only recently, over the past two or three decades. But this is the case mostly within some circles of psychologists, for everyday language tends to remain dualistic — “it’s all in the mind” means that it is not real; emotions are conceived to float in an abstract realm; your body “belongs” to you; and so on. (Early in his memoirs Bertrand Russell recalled a philosophically amusing old adage: “What is mind? No matter. What is matter? Never mind.”) Some areas of philosophical speculation are still conducted as if biology were entirely incidental, in an enactment of the dualist stance, in part because the conceptual structure is missing for a proper integration of scientific theories into philosophy. Yet from its earliest beginnings philosophical contemplation overlapped with empirical observation. When Thales asserted that all is water, he offered an empirical description as well as a metaphysics. Etymologically, metaphysics is what comes “after the study of nature” — sequentially after Aristotle’s Physics — later denoting what transcends empirical study. And until the modern era philosophers were also “natural philosophers” — scientists, in other words — who conducted empirical enquiry.

    Today we do have the tools to construct empirical answers to philosophical questions about the nature of self and mind: what we need to develop now are the tools to understand these empirical answers philosophically, to develop a proper “science humanism.” Yet owing to the divisions between disciplines, few humanists pay attention to science — just as few scientists are in a position to “humanize” their research. In recent decades, there have even been attempts in the humanities and social sciences to dissolve matter and deny the empirical character of science entirely, by making it into just another human expression — social constructionist models that envision biology as a phenomenon that cannot be known apart from the context in which it is theorized, of import only as a cultural occurrence. Computational neuroscience, meanwhile, yields increasingly complex models of a disincarnate “mind,” while what the world knows as “artificial intelligence” is surpassing some aspects of human cognitive competence at increasing speed.

    Despite these admittedly powerful and often concealed redoubts of dualism, a growing number of researchers in neuroscience and psychology are now taking on board how the brain is in fact interconnected with the body — and that, as the neuroscientist Antonio Damasio has put it, the brain serves the body, rather than the other way round. This is so because, like all else on earth, the brain has evolved into its present shape out of primitive life. As Damasio described it in 2018 in The Strange Order of Things: Life, Feeling, and the Making of Cultures, from brainless single-celled organisms, such as bacteria, endowed with the capacity to perceive and act upon their perceptions, grew increasingly complex multi-cellular organisms that eventually developed nervous systems to coordinate their multiple parts. Bodies evolutionarily precede the brains that serve them, and consciousness, as indeed all higher mental function, is an upshot of processes internal to the evolution of life. Mental experience encapsulates felt experience, and without the body there would be no feeling — nor any brain, either.

    This is also why it is no longer possible to hold on to the old story according to which the faculty to reason and to contemplate cosmos, life, and self is a function of a disembodied thinking thing — Descartes’s res cogitans, posited in opposition to the extended bodily thing, res extensa. With his epochal Descartes’s Error, which appeared in 1994, at the height of the neurocentric moment — and based on imaging research conducted with his wife Hanna Damasio — Damasio first explained how, without emotional activity and input, deliberations that seemed to partake of rational evaluation were disabled. From then on, and via their subsequent research and his writings, Damasio was at the forefront of neuroscientists who showed how central emotion is to our highest faculties, and how the notion of a disembodied brain, let alone a disembodied mind, is a fantasy that has nothing to do with our biological reality. The so-called “affective turn” in the neurosciences and psychology was launched. Research on emotions accelerated, not only in the sciences but also in philosophy and, more recently, the social sciences: the rationalist and cognitivist canon remains, but emotions are finally being considered centrally. A feeling, unlike a thought, is far less easily mistaken for an abstraction untethered from experience. Whether or not a thought is incorporeally determined, as Descartes believed, having a feeling entails experiencing a physical sensation. And so the “affective turn” was conducive to the notion of the person as a psycho-somatic unity.

    In the 1990s philosophers also insisted that the mind must be understood not only as embodied, but also as embedded, enactive, and extended within the environment with which it interacts dynamically — that our experience is the upshot of this dynamic interaction of brain and body in relation to the world, and that our minds are a dimension of this interactivity. In fact, this conception of the body’s relation to thought and experience had already been central to phenomenology in the late writings of Edmund Husserl and, monumentally, in Maurice Merleau-Ponty’s Phenomenology of Perception, which appeared in 1945. On this approach, known as “4E cognition,” our tools, too, are aspects of minds that extend beyond individual skulls. It is misleading, in this account, to contemplate minds in isolation, since we have evolved as social beings. The subject is primarily “intersubjective,” as the philosophers Emmanuel Lévinas and Paul Ricoeur argued, following Merleau-Ponty. The first-person, interactive, felt experience that phenomenology embraces as necessary to our self-understanding has re-entered the realm of science, which is filling in the philosophical picture. A number of philosophers and scientists have joined this phenomenological approach with some aspects of Buddhism, most notably Francisco Varela, whose The Embodied Mind: Cognitive Science and Human Experience, written with Eleanor Rosch and Evan Thompson and published in 1991, marked a turn for students of cognitive science who believed that scientific investigation of the mind must begin with subjectivity.

    Since then, our understanding of corporeal subjectivity, from the bottom up rather than from cognitive heights, as it were, has grown tremendously. It depicts the central nervous system — brain and spinal cord — as crucially interconnected with the peripheral nervous system, which includes the somatic and autonomic nervous systems. Feelings from skin, muscles, skeleton, and viscera are processed in the brain, which monitors their functioning, yielding what is known as interoception. In turn, interoception meshes with and acts upon exteroception, the perception of external stimuli via the five sense modalities. Together these sense-perceptions constitute our very sense of self.

    The study of interoception has intensified among psychologists over the past decade or so, yielding insights into the somatic basis of the self, the centrality of emotions to its constitution, and the centrality of bodily states to awareness, well-being, and illness. In the words of the neuroanatomist Arthur D. Craig, whose research on the topic has been foundational for the many studies that have since multiplied, interoception is “the sense of the physiological condition of the body,” whether conscious or not. Interoceptive signals travel along specific neural pathways from all bodily systems, from skin to gut — vasomotor, cardiac, digestive, sexual, respiratory — and include the sensing of pleasure, pain, hunger, thirst, and temperature. They are processed in particular in an area of the cerebral cortex called the insula. Increasingly targeted and sophisticated studies are showing how our sense of self is indexed to these dynamic perceptions and to the brain’s constant prediction of internal bodily states. In turn, these processes reflect the homeostatic regulation of the organism within an always changing world, without which it would not be viable. These constant feedback processes “provide the basis for the subjective image of the material self as a feeling (sentient) entity, that is, emotional awareness,” as Craig puts it. Consciousness, according to this analysis, is the upshot of the neurally encoded capacity to represent these processes as feelings. And in turn, these feelings are indications of our fluctuating bodily states within the world of non-selves, which shapes who we become from infancy on. As many researchers are showing, including developmental psychologists and the neuroscientist Vittorio Gallese with his notion of “embodied simulation,” we are intersubjective from birth: without others, we do not develop stable selves.
    And because this multi-pronged and conceptually rich body of ongoing research begins with the subject, it avoids the conundrum that was faced by the disincarnate sciences of the mind that were developed in the age of reductionism, and which resulted in the “hard problem” of consciousness. No, we are not “just” our brains, and this current science is showing how that is the case.

    With this intense focus on the biological underpinnings of the self as a dynamic entity that is embodied, interactive, and intersubjective, as opposed to disembodied, fixed, and isolated, experimental sciences are meshing with philosophical speculations. They had been conjoined in antiquity — Aristotle, remember, was a metaphysician and an empiricist — and to some degree in early modernity, when Descartes, too, practiced empirical research. They met again in the late nineteenth century. Now, armed with insights from phenomenology, daring at last to use the first-person as a starting point for scientific inquiry, we are digging deep into our flesh, extracting from its infinitely complex layers the very consciousness that, for so long, seemed always to escape us. In so doing, we are also building the scientific picture underlying what millions look for and experience in practicing yoga and other somatic disciplines. The insights from yoga comport nicely with the discoveries of phenomenology.

    As we can see from this brief history, biological reality is never a neutral given: scientists start from preconceptions about the nature of their object of study. The names on the maps are not inherent to the maps. They must be coined, and they are not fixed, either. Scientific research always takes place within a cultural context that informs its priorities. While science is not reducible to its context, bias is always present. And over the last few decades, cultural context has been partially responsible for this increased attention to the body, which, not surprisingly, has also become central within literature and the arts, the humanities, and the social sciences. Popular culture, certainly, is body-obsessed in complex ways. But more positively, outside the dualist redoubts that persist in some academic circles, some religious communities, and generally in folk psychology, it is acceptable and even necessary to say today that humans are embodied creatures among other embodied creatures, in the sense that our feeling and sensing body is centrally constitutive of what we are. The cognitive sciences now also ally with anthropology to study embodied and extended cognition — how Homo sapiens is the begetter and user not only of symbolic languages but of tools and artifacts. We adapt to our environments, while extending our selves, and building cultures of and with things: humans live within artifactual rather than natural settings.

    This re-centering of the study of humans onto the body — and of the human body onto the natural world — is an aspect also of the ecological emergency. It is now painfully clear that the old notion of our supposed superiority over other animals, and our lording it over the whole of nature, has been arrogant and destructive. (The pandemic was a reminder of this: viruses can stop us if we invade wild territories — such as bat havens — that are not ours to enter.) The very consciousness that seemed to be our privilege — for long in the guise of the old rational soul — is in fact organically based, an outgrowth of a natural process, and indeed we can best understand it in relation to the consciousness of other animals, which is increasingly studied as well. This is a return not only to the late nineteenth-century roots of scientific psychology, but even earlier, to Lucretius, who wrote in On the Nature of Things that “mind and spirit are both composed of matter,” and to the seventeenth-century “libertines” who adopted Lucretian materialism. 

    It is always tempting to balk at this materialist picture and to reify our self-consciousness into an abstract Cartesian entity. But in doing so we forget that our very capacity for self-consciousness, material as it really must be (what goes on in the mind or between minds by definition goes on in the body or between bodies), allows precisely for the capacity to do so, and for us to be taken in by our very capacity. Daniel Dennett suggests something that seems similar, with his notion of consciousness as illusory, but he believes that consciousness is “just” an illusion — and that is emphatically not the point that I am making here. Rather, consciousness as we experience and understand it is a materially and ontologically real capacity, but we are unable to understand how it arises precisely because that is its all-too-human limit. There is only so much that our brains can do. Animal consciousness is also real, but it probably does not extend to this multiplication ad infinitum bred by our self-reflection in a hall of mirrors.

    And so we cannot leave our self-definition there. We are animals, and knowing that we are helps us to understand ourselves. But in virtue of our highly complex brains, we also differ from the other animals in ways that we need to comprehend in order to understand ourselves. Only humans study consciousness: our metacognition, that is, our self-reflexive awareness, does seem to define us and constitute our apartness. That very thought, in turn, is another instance of our self-reflective metacognition, which is the root and stuff of history, philosophy, art, science — of culture, in short, which in its variegated manifestations remains our human prerogative. Other animals can be acculturated, birds may learn specific songs, chimps can learn tool use and mating behaviors — such processes exist throughout the natural realm. But our very nature is defined by the cultural dimension, and the variety of human cultures is potentially infinite: the sciences of the mind are therefore bound by their inevitably cultural structure and mission. 

    No other creature is interested in its brain or has an idea of a mind — in fact, the very concepts of “mind” and “culture” are themselves cultural artifacts, developed within anthropology and related disciplines. And so, the notion that all species may have a kind of consciousness is asymmetrical: we may attribute it to our dog, say, but the dog does not know this in the way that we do, nor will it be aware that, like us, it may in turn be attributing consciousness to us. At any rate, whatever awareness it has is not elaborated beyond the experiencing of it. It may share with us mechanisms of attention and volition, it may experience anger and joy, but it does not study how, or ask why, these mechanisms and emotions occur. Neither dogs nor dolphins, neither elephants nor octopuses, study anthropology or philosophy — or write diaries or poems or books. What characterizes our species, besides our elaborate tools, is our propensity for introspection and abstraction, for projecting ourselves into the future on the basis of the remembered past, for imagining what is not present — using elaborate symbolic forms to embody the non-present, as Susanne Langer emphasized — and, finally, for constructing cultures out of our awareness of death. We also make stories, theories, and trouble out of our perceptions and predictions.

    This human prerogative, which in some instances we may call tragic, is also what defines us in relation to the increasingly sophisticated artificial agents that we are creating — the ultimate artifacts, which may seem to act and look more and more like us. There is no turning back: as technology gains in power, we face the need to re-assert the bounds of our humanity, in contrast to the kinds of faculties that are displayed by these artificial agents. There will be similarities, of course, but the differences are what will define us. Certainly these machines are farther away from us than are the non-human animals to whom we may attribute consciousness. They are neither born nor mortal. They are not biological, they have not evolved over millions of years as bodies that flexibly adapt to a fluctuating environment, they are not constituted of billions of proteins, molecules, cells, an immune system, hormones, enzymes, and neurotransmitters, and they are not conceived within the body of another mortal. In short, they are not “wet,” as Siri Hustvedt put it in her important essay “The Delusions of Certainty.” 

    Yet we are increasingly merging large aspects of our identities and activities with the electronic circuits that we have created. Gmail retains traces of our lives better than we do — though that does not mean that it “remembers,” because memory is made of dynamic processes that pertain to a self, a self that forgets as much as it remembers, and therefore differs entirely from the storage system of AI. Memory is emotionally valenced, and AI entities do not have emotions, so far. Granted, robots are multipart systems, and though they are now able to learn how to navigate within changing environments, they begin their “lives” as circuits stored within the artificial body, in contrast to our nervous system, which is the upshot of dynamic embodied processes. Our minds are not incidentally “housed within” a body: as is becoming clearer in our post-dualist age, it is precisely because there are bodies — complex biological systems — that there are minds. What is bewildering today is that we have developed the capability to reverse these age-old bottom-up processes and engineer mind-like mechanisms out of algorithms. We are able to build these machines, though, thanks in part to our understanding of the brain and the embodied mind, and it is no secret that one motivation for the public support of neuroscience over the past decades has been its utility to the AI that increasingly, and with perplexing speed, undergirds all aspects of our private, social, economic, cultural, and political lives. 

    Something else is missing from AI besides emotions — simple ones like fear or complex ones like shame, ambivalence, nostalgia, melancholy. It is the need for meaning, which is inscribed within the narrative patterning of the human mind from birth onward. The sense of both individual and collective history and belonging. And the sense of beauty, the poignancy of finality, physical desire, the pleasure of food. AI agents do not somatize illnesses, feel pain, suffer from exhaustion, have anxiety attacks or depressive episodes; they do not have eating disorders or develop dementia; they do not enjoy the garden in the spring, have sex, fall in love. They do not feel claustrophobic if stuck too long in one place, or travel for pleasure, or make war or murder or take drugs. They cannot lie, either; perhaps one day they will know how to lie willingly, instead of just proliferating false information. They “know” a great number of things, but nothing about the meaningful narrative that makes up a mortal life.

    It is legitimate — and an obligation of our moment — to be concerned about the growing power of these artificial agents. But to worry that they could replace us is to forget what humans are. The misplaced worry is itself worrisome. It stems from the same faulty intuition that for so long led humans to ignore their potentially ailing bodies, or to suppose that a brain could just as well grow in a vat. We tend to mistake models for reality and brain maps for minds, to scant or entirely neglect how phenomenologically complex and sensorially rich each experienced moment really is. We project onto machines our all-too-human fears and hopes. Yet it is only by understanding ourselves better that we will be able to develop the culture of care that we need to confront the crises of our time. Care was not a priority in the heady days of 1990s optimism, when knowledge of the brain was touted as the route to enlightenment. But it is so today, all the more so now that the forces of reaction are ascendant. Against these, and in order to advance self-understanding, we should build an alliance of psychologists, neuroscientists, and engineers with philosophers, artists, and historians. Such an alliance would enable us never to lose sight of our humanity, whose redefinition only humans can forge.

    What the Night Sky Teaches

    Is astronomy the key to our wellbeing? If we “learn the harmonies and revolutions of the universe,” Plato wrote in the Timaeus, we will attain “the most excellent life offered to humankind by the gods.” The pre-Socratic philosopher Anaxagoras was even more dramatic:

    And they say that when someone asked Anaxagoras for what reason anyone might choose to come to be born and to live, he replied to the question by saying that it was “to be an observer of the sky and the stars around it, as well as moon and sun,” since everything else at any rate is worth nothing. 

    For Anaxagoras, stargazing is the only thing worth doing. Without it, we would be better off not existing at all. These days, I’m sure, lots of people would be thrilled to gaze at the stars if the spectacle could offer them respite from what’s going on down here, let alone lead to “the most excellent life.” But can it?

    In 2020, the Nobel Prize in physics went to three astrophysicists — modern-day stargazers, if you like — for their work on black holes: Roger Penrose for showing that black holes, strange though they are, fit squarely with our theory of the universe; Reinhard Genzel and Andrea Ghez for discovering the black hole at the center of our own galaxy. The year before, the first-ever photograph of a black hole was published to great fanfare. Every newspaper showed the dark circle, surrounded by a ring of fire, on the front page. Eight interlinked observatories, from the South Pole to Hawaii to the Chilean desert, turned the earth into a gigantic telescope to capture the supermassive object five hundred million trillion kilometres away.

    Invariably such events are accompanied by a certain rhetoric celebrating mankind’s curiosity and how it pushes the frontiers of knowledge. But imagine how puzzled we would have been if the laureates had announced from the podium in Stockholm that life is worthless unless we study astronomy. Wouldn’t we have dismissed them as mad?

    In my own family, the enthusiasm for astronomy runs low. It took my son and me a couple of hours to screw together the telescope that he was given for his sixth birthday. After dinner we aimed it at the sky. We saw mostly darkness with a few fuzzy flashes. Finally we found the moon. On the pale, stained surface we made out craters. It held his attention for about a minute.

    “Did you see the man on the moon?” I asked.

    “That’s a fairy tale, dad!” he replied. But if I could get him on a rocket ship, he would have loved to jump around there, like the astronauts in a documentary he had seen. Then came his older sister’s turn. “Cool,” she said. “It really does look like cheese.” That was the end of our space exploration. Since then, the telescope has been collecting dust in a corner of the living room. Clearly we have not been heeding Anaxagoras’s and Plato’s counsel. 

    The only constellation I can identify is Orion, thanks to the belt. I wouldn’t make a Thracian maid or anyone else laugh the way Thales did. He was the first Greek philosopher, and Plato tells this story about him in the Theaetetus:

    Thales was studying the stars, and gazing aloft, when he fell into a well; and a witty and amusing Thracian maid made fun of him because, she said, he was so eager to know what was up in the sky, but failed to see what was in front of him and under his feet.

    “The same joke,” Plato says, “applies to all who spend their lives in philosophy.” Plato is keenly aware that ordinary people see philosophers as ridiculous stargazers. He, of course, thinks that the joke is on ordinary people. They don’t comprehend that stargazing is of much greater value than the things they desire: money, fame, pleasure, or, to take it down a notch, ice cream with friends or a night on the town.

    I have spent my life in philosophy: borrowing philosophy books from the local library as a teenager, writing a doctoral thesis in the discipline, and teaching it at a university for two decades. My parents were not enthusiastic. My mother would have preferred to see me become a doctor, a lawyer, or an engineer, like my cousins. My father proposed carpentry: doing something with my hands, he hoped, would ground me in the real world. “Philosophers,” he once explained to my daughter, echoing Plato, “always have their heads in the clouds.” “So are birds philosophers?” my daughter wisely replied. (She was three at the time.)

    More troubling, however, is that after all these years I am still in the camp of the Thracian maid. My head may be in the clouds on occasion, but I have no interest in dusting off the telescope to look at the stars. Does that mean that I’m doing it all wrong? Or is my world just so different from Plato’s world that we cannot conceive philosophy in the same way? The stakes are high: if stargazing is what makes life worth living, I’m living a life that is not worthwhile.

    A few years ago we took my daughter and her friends to a planetarium on her birthday. They were happily nibbling on their popcorn while the moderator explained the size, the age, and the composition of the universe, including, of course, the mysterious black holes. Then he said something about a “sense of wonder.” This provoked me. Whatever wonder this picture of the universe arouses, I thought, it is certainly not what philosophers felt in the past. Had they come across black holes, they would have been terrified. 

    Now imagine a planetarium show moderated by Plato. We would learn about an altogether different universe: a geocentric one with a system of nested concentric spheres carrying the stars, planets, sun, and moon around the earth. Plato would point out the amazing mathematical precision with which ancient astronomers described the orbits of stars and planets. They turn with flawless regularity, like the wheels of a celestial clock: “the moving image of eternity”! On earth things are messier, he would acknowledge, but they are still very predictable: the offspring of oak trees are always oak trees, of horses, horses, of men, men. The seasons follow each other every year. And everything is perfectly coordinated: the intricate order that allows the eye to see; the organs working together to enable all kinds of living beings to thrive; earth, water, air, and fire that furnish sustaining environments for them.

    Consider a random pile of driftwood, swept up on the beach. Then consider the complex mechanism of a clock. Both have causes, but the former’s causes are blind and the latter’s are intelligent. For Plato, the evidence was overwhelming that the universe is like a clock, not like a pile of driftwood. In fact, it is the most breathtaking piece of craftsmanship. He never even entertains the possibility that it could be the effect of blind causes. As a clock requires a clockmaker, Plato reasons, the universe requires a Maker. That Maker, he contends, is a Divine Mind, called Nous in Greek. “All the wise agree,” Plato writes in the Philebus, “that Nous is king of heaven and earth.” In short: for Plato the universe displays an intelligent designer’s intelligent design. And the first to figure that out, Plato claims, was Anaxagoras.

    Aristotle likewise posits Nous, a Divine Mind, as the cause of the universe’s rational order. In his De philosophia, he proposes a thought experiment: imagine people who spend their lives in comfortable caves under the earth. They have never seen the natural world. One day “the jaws of the earth open,” they emerge from their caves, and they are astounded by the spectacle of nature. They first see the earth, the seas, the clouds, and the winds. Then they behold the sun, the moon, the planets, and the stars — “their courses settled and immutable to all eternity.” What does this spectacle teach the cave dwellers? From the universe’s rationality and beauty, they immediately infer that “there are gods and these great works are the works of the gods.” 

    The Divine Mind, according to Aristotle, does not have a body. So how does it move the heavens, he ponders in the Metaphysics, with no arms to push or pull? It moves them “as a beloved” (hōs erōmenon), he suggests in the most poetic passage of all of his writings. Love makes the world go round. The Divine Mind is the “Unmoved Mover.” Eternal circular motion is how the heavens — living, ensouled, and perfectly wise beings, for Aristotle — express their love for God and imitate his eternal and immutable existence.

    We can now see why astronomy was the intellectual summit: it is a gateway to God. By studying the stars, in the ancient account, we connect to the Divine Mind, just as we connect to the mind of the clockmaker by studying the mechanics of the clock. Moreover, the heavens are moral models: they live perfectly rational lives. The more our lives resemble theirs — unerringly loving and contemplating God — the better off we are.

    As we left the planetarium, I reflected on how physicists today would frown at Aristotle’s claim that the desire to imitate God determines the structure of the universe. Most black holes arise from the death of stars. Love of God must be the last thing that these star tombs experience. “Holes,” moreover, is an utter misnomer: they are the largest and most compact masses of matter in the universe. (The one in the 2019 photograph packs more than six billion suns.) Their gravitational pull turns them into the stuff of nightmares. They are often called “monsters” because nothing can escape them, not even light (hence their blackness). Nobody knows where things end up that pass the so-called “point of no return.” Physicists call it a “singularity” because at the center of a black hole the known laws of physics do not apply.

    If black holes cross my mind at all in day-to-day life, where the sun still rises and sets as if the Copernican revolution never happened, I chase them away with a shudder. Mostly I hope that they will not swallow the earth. They surely do not point to a Divine Architect—as the wheels in a clock point to the design in the mind of a clockmaker.

    I didn’t want to disturb my daughter’s birthday party, so I did my best to hide the gloom that I felt in the planetarium. Her birthday cake was decorated with the solar system. The icing on my piece included fragments of Saturn’s rings. “Isn’t it interesting,” I said to one of the parents in attendance, “that philosophers in the past saw the heavens as the most sublime expression of divine rationality?” She smiled politely but didn’t reply. I’m sure that she thought I was weird. Then she turned away to chat with another parent. The heavens may have fallen silent, but in our busy everyday lives we don’t care.

    Plato knew that his picture of the universe was not undisputed. In the Laws he mentions philosophers who argue that “nature and chance,” rather than “intelligent planning,” explain the structure of the universe. He has in mind, among others, Leucippus and Democritus, the ancient atomists. But the strongest case for the pile-of-driftwood-view was made by the Epicureans. The Epicureans did not deny that gods exist. But why, they asked, would the gods get their hands dirty and craft a universe if they are blessedly happy, as everyone agrees they are? Instead the Epicureans posit blind causes, both mechanical and random: the weight and the natural motion of atoms moving through infinite void, and the notorious “swerve” that makes atoms deviate from their natural downward trajectory. As Lucretius explains in On the Nature of Things: without the “swerve,” atoms “would all fall straight down through the depths of the void, like drops of rain, and no collision would occur … In that case, nature would never have produced anything.” The “swerve,” then, determines the universe’s structure. It takes on the role that Anaxagoras, Plato, and Aristotle assigned to the Divine Mind. 

    Yes, we can grasp the natural order, the Epicureans reassure us. But gazing at the stars no longer connects us to God. So why bother? Because science for the Epicureans is valuable as a means: it dispels false beliefs about the gods, and about death, desire, and pleasure. After studying nature we will not be terrorized by superstitions about vindictive gods, or by wrong ideas about dying and the afterlife. Nor will we be in the grip of baseless, culturally induced desires and the disruptive passions to which they give rise: greed, envy, frustration, anger. Science, then, is still key to a happy and unperturbed life. 

    But if we held no false beliefs, the Epicureans insist, we would have no reason to investigate the universe: “If our suspicions about heavenly phenomena and about death did not trouble us at all … and, moreover, if not knowing the limits of pains and desires did not trouble us, then we would have no need of natural science.” In line with this insouciant attitude to science, the Epicureans offer a range of different explanations to natural events—eclipses, rainbows, clouds—without trying to settle between them. Any mechanical explanation will do, as long as it does not refer to divine agency and is consistent with sense perception.

    The Platonic sage gazes at the stars to get from the intelligent design to the intelligent designer. Knowledge, in this view, has intrinsic value. As Nous, God is the sum-total of knowledge. Every truth that we grasp — the nature of horses, Jupiter’s orbit, the essence of justice — strengthens our bond with the divine and increases our share in the best life. The Epicurean sage gazes at the stars, by contrast, to remove the superstitions that disturb our peace of mind. Knowledge only has instrumental value. The link to the divine has been severed. Indeed, the Epicurean gods, whose bliss does not require dispelling falsehoods, do not possess knowledge! What joy could they get out of contemplating the random configurations of swerving atoms which make up the Epicurean universe?

    The Epicureans didn’t stand a chance in antiquity. The core intuition underlying the Platonic view was too powerful to be seriously challenged. The universe was like clockwork, not like driftwood. And the heavens remained the chief proof for that. Here is how the Stoics, the arch-rivals of the Epicureans, put it (in the words of Balbus, Cicero’s spokesman for Stoicism in On the Nature of the Gods): “What can be so obvious and clear, as we gaze up at the sky and observe the heavenly bodies, as that there is some divine power of surpassing intelligence by which they are ordered?” Consider again how absurd it would be if the Nobel laureates in 2020 had made such a statement. So, then, what changed? 

    Max Weber described the reversal in a lecture in 1917. Stressing the “immense contrast” between science in Plato’s sense and modern science, he declared:

    Who still believes that the insights of astronomy, biology, physics or chemistry can teach us something about the meaning (Sinn) of the world? … If anything, the sciences are suited to completely eradicate the belief that the world is a meaningful place. And science as a path to God? Given its distinctly atheistic nature? That this is its nature nobody today will call into doubt. Deliverance from the rationalism […] of science is the basic condition for living in community with the divine.

    But how, exactly, did the reversal come about? One popular narrative pins the blame on Copernicus. The French philosopher of science Alexandre Koyré offered a classic formulation. The Copernican revolution, he explained in 1957 in From the Closed World to the Infinite Universe, led to

    the destruction of the Cosmos, that is, the disappearance, from philosophically and scientifically valid concepts, of the conception of the world as a finite, closed, and hierarchically ordered whole (a whole in which the hierarchy of value determined the hierarchy and structure of being, rising from the dark, heavy and imperfect earth to the higher and higher perfection of the stars and heavenly spheres), and its replacement by an indefinite and even infinite universe which is bound together by the identity of its fundamental components and laws, and in which all these components are placed on the same level of being. This, in turn, implies the discarding by scientific thought of all considerations based upon value-concepts, such as perfection, harmony, meaning and aim, and finally the utter devalorization of being, the divorce of the world of value and the world of facts.

    The thesis was neat and influential, but it was wrong. Yes, the clockwork-view of the universe was shaken. The heavens, as Koyré noted, ceased to be special: before Copernicus, their perfect circles around the earth were the paradigm of God’s craftsmanship. After Newton, gravity ruled everything from planets to apples. The universe also got a lot more unwieldy; whatever wisdom was manifest in its structure became harder to discern. What, for example, did the Divine Mind need so much empty space for? Yet the clockwork-view wasn’t anywhere close to collapsing. Newton, for one, had no doubt that God set up the laws of the new physics. And Kant, in the Critique of Practical Reason, still extols “the starry heavens above me” as one of two things that “fill the mind with always new and ever-growing admiration and awe, the more often and more intensely we reflect on it.” (The passage was inscribed on Kant’s tombstone when he died in 1804.) 

    But even if astronomy after Copernicus was no longer as reliable a route to the intelligent designer, biology still was. In his Parts of Animals, Aristotle already noted that while plants and animals may not be as exalted as celestial bodies, they are much easier to study because “we live among them.” They, too, bear witness to purposeful design—to “what is not random but for the sake of something.” The “nature that crafted them provides extraordinary pleasures to philosophers who are able to know their causes.” As he dissected cats and squids, Aristotle quoted Heraclitus: “for there are gods here, too.”

    When Hegel equated “the rational” and “the real,” in his Elements of the Philosophy of Right in 1820, he continued to maintain the ancient belief that reality is a manifestation of the Divine Mind. He called the Divine Mind Geist, and, in the Encyclopedia of the Philosophical Sciences, he explicitly connected it to Nous, the deity of Plato and Aristotle. In the early nineteenth century, in other words, we were still watching God think when we grasped the universe’s rational order.

    The clockwork-view also persuaded the young Charles Darwin. In his Autobiography, he writes how much the “old argument of design in nature … charmed and convinced” him when he was a student in Cambridge. He had encountered the “watchmaker analogy” in William Paley’s Natural Theology, or Evidences of the Existence and Attributes of the Deity Collected from the Appearances of Nature, from 1802, which was required reading for Cambridge undergraduates at the time. Even Richard Dawkins—today’s noisiest atheist—admits that he would have accepted the proof from design if Darwin had not come up with the theory of evolution. But Darwin did come up with it. And once he found “the law of natural selection,” Darwin writes in the Autobiography, the “old argument of design in nature” went down the drain: “We can no longer argue that, for instance, the beautiful hinge of a bivalve shell must have been made by an intelligent being, like the hinge of a door by man. There seems to be no more design in the variability of organic beings and in the action of natural selection, than in the course in which the wind blows.”

    Random causes—the mutations driving natural selection—play a key role in generating what appears like design in living beings: plants, animals, humans. If Copernicus removed divine craftsmanship from the heavens, Darwin did the same for the earth. That was when the pile-of-driftwood view really came into its own. The nineteenth century, then, marks the great rupture: not the Copernican revolution and its consequences, as Koyré believed, nor the new philosophies of the seventeenth century—Descartes, Spinoza, Locke—nor the eighteenth-century Enlightenment.

    Between Hegel and Nietzsche, God dropped out of the philosophical picture. From a universe chiseled by a Divine Mind we move into one without metaphysical meaning. Nietzsche, in The Gay Science, captures the shift:

    The total character of the world is […] in all eternity chaos—in the sense not of a lack of necessity but of a lack of order, arrangement, form, beauty, wisdom, and whatever other names there are for our aesthetic anthropomorphisms. […] Let us beware of saying that there are laws in nature. There are only necessities: there is nobody who commands, nobody who obeys, nobody who trespasses.

    Note that Nietzsche’s universe remains governed by causal necessity that scientists can grasp. (Consider again the pile of driftwood: if you know the laws of physics and the particular causes at work, you can explain exactly how it came about.) What Nietzsche denied is that the universe displays the intelligent design of an intelligent designer: “order, arrangement, form, beauty, wisdom.” Instead it is random—in the sense of “purposeless,” not in the sense of “undetermined.” The downfall of the Divine Mind, moreover, knocks down the human mind as well. Knowledge no longer forms the bond between God and man. This is what Nietzsche means when he writes “how miserable, how shadowy and transient, how aimless and arbitrary the human intellect looks within nature.”

    At the end of the nineteenth century, the idea gained traction that science and religion are at war with each other. Two books in particular popularized it: John William Draper’s History of the Conflict between Religion and Science, in 1874, and Andrew Dickson White’s A History of the Warfare of Science with Theology in Christendom, in 1896. Galileo’s trial and Christian condemnations of the theory of evolution were routinely adduced as evidence for the alleged war. This is not the place to discuss the flaws of the conflict model, which are considerable. Here I am describing something completely different: a rupture within science—reason turning against itself. “All the wise agree,” Plato wrote, “that Nous is the king of heaven and earth.” Following the lead of Anaxagoras, Plato, and Aristotle, biologists, physicists, and astronomers studied the structure of animals, the laws of motion, and the trajectories of the stars as the gateway to the Divine Mind. That framework was still in place in the nineteenth century. And then it fell apart. Darwin’s scientific career is emblematic of the shift: it was sparked by the Platonic framework and ended up sounding the death knell for it. Today “all the wise agree” that intelligent design is outside the boundaries of science. The idea is kept alive mostly by the anti-modern resentment of the fundamentalist Christian fringe.

    Did we bury the Divine Mind prematurely? After all, the greatest scientist of the twentieth century seems to agree with Plato that Nous governs heaven and earth. Einstein called his view of the universe “cosmic religion.” To sign up, he thought, we need to accept only necessity and intelligibility: that universal laws determine everything and that the human mind can comprehend them. But that much even Nietzsche was willing to concede. Did he, unwittingly, leave a door open to bring the God of the philosophers back from the dead? Einstein speaks unforgettably of the “mysterious comprehensibility of the world,” and claims to find God in “the sublimity and marvelous order which reveals itself both in nature and in the world of thought.” Echoing the ancient astronomy fan club, he even celebrates the human mind’s ability to “grasp the mysterious force that moves the constellations.” 

    I am not convinced. For one thing, there is the empirical objection against Einstein’s determinism: the randomness at the heart of quantum mechanics. If God doesn’t play dice, quantum mechanics — which Einstein opposed to no avail throughout his life — suggests that God does not exist after all. But even if we can defend necessity and intelligibility, I don’t see how, on their own, they can ground a cosmic religion. Let us grant that the pile of driftwood is completely determined and completely intelligible. That hardly implies that it is God’s work. Einstein insists time and again that his God is not the personal God of traditional faith “who concerns himself with the fate and actions of human beings.” Fine, but that was also true about the Divine Mind of Anaxagoras, Plato, and Aristotle. Even if God has more important things on his agenda than caring about petty human worries, there still must be an agenda that we can discern if we are to give the name “religion” to such a theism.

    In his Exhortation to Philosophy, Aristotle proposes a thought-experiment: what would we do if we were on the “Isles of the Blessed,” a place where all our material needs — hunger, thirst, shelter, health — are taken care of, so that we would not have to worry about a thing? Would the freedom that we would enjoy in such circumstances be a blessing or a curse? How would we spend all the time on our hands? Hang out idly on the beach until we die? Isn’t a life without purpose a nightmare, even if it comes with every comfort?

    For Aristotle, the answer is plain: if we pay money to watch sports and theater, he argues, all the more should we be keen to contemplate the rational order of the universe. It is the best show on offer, and it’s free. On the Isles of the Blessed we would devote our life to theoria, or contemplation. That is Aristotle’s idea of paradise. It is easy to see the attraction of such a paradise if the universe is like a clock. But what if it is like a pile of driftwood? Even if it is in principle intelligible, why make the effort? What sublimity would we be contemplating? The obvious answer is the one the Epicureans give: knowledge may not have intrinsic value, but it has great instrumental benefits. It is good as a means to other things that we value: controlling nature, finding medical treatments, developing technologies, grounding good social policies, resisting ideologies, fake news, and the lies of demagogues. 

    Unfortunately, things aren’t quite so simple. We may cheer science for making our lives safer, more comfortable, and less vulnerable to decay and manipulation. In this sense, modern science is arguably a stunning success. But there is an existential price to pay. Let me explain.

    After my children were born, my wife and I started to compile photo albums. I have no illusions about them. They are kitsch, as the four of us cry “cheese” for the camera. But they are dear to me anyway, because they are documents of what has become the emotional center of my life. The meaning of photo albums is a paradox: they are treasured by the people whose happy memories they hold, but utterly meaningless to the rest of the world. Would you ever display your neighbor’s photo album on the coffee table? The paradox tells us something: that the meaning our children have for us is distorted, or rather created, by love.

    If I were to take such an explanation to an evolutionary psychologist, he would thoroughly disenchant its emphasis on love. He would explain that what I call “love” is actually an attachment that evolution has selected for, because it increases the chances of my offspring’s survival and thereby of the perpetuation of my genes. Do I want to know this? I feel conflicted. I have no doubt that there is a basis in biological reality for the evolutionist’s disenchanting account. And in general I am all for scientific progress. But in my personal life? There, I feel, I must protect the magic from the truth. Is it anti-intellectual or irrational to draw boundaries around the explanatory power of science in our understanding of our inner lives? Or can we, whatever the impact of our genes upon our inner lives, carve out a space for a different — yet likewise legitimate — way to understand love?

    The same wish for protection from the idea that science should have the last word grows exponentially when I consider the universe at large. In this instance I seek protection not from biology but from astronomy and astrophysics. I cannot detect evidence in the cosmos of divine craftsmanship. I see a vast, mute, dark, mostly empty space that burst into existence fourteen billion years ago, where I spend a short time on a small planet in one of countless galaxies, living a life of no cosmic consequence. That life, moreover, emerged from the primordial soup by means of amoebae, apes, and all the chance encounters of my ancestors.

    Contemplating the universe in this way is a powerful antidote for vanity; and so, to keep myself honest, I look up to the heavens once in a while, briefly, from the corner of my eye. Yet its lesson of humility notwithstanding, what I see in the night sky or through a telescope cannot give my life value and purpose. On the contrary, it threatens to obliterate my mortal and terrestrial reasons for getting out of bed in the morning: family, friends, writing, teaching, a cup of coffee, a glass of wine, a concert, a noble cause. From the cosmic perspective, my goals and my projects seem trivial and pointless. If I want to hold on to what gives my life meaning, therefore, I must shield myself from the universe rather than contemplate it — the exact opposite of what for Plato yields “the most excellent life.” The cosmos as we now understand it is no longer useful for my soul.

    But is the universe not reasserting, at this very moment, its power to dazzle people around the globe, as the James Webb telescope reveals it to us in ways we had never seen it before? Who is not amazed by the spectacular pictures of galaxies dancing and cartwheeling, stars glimmering like jewels in the dark, or Jupiter and Neptune’s enigmatic glow? There is beauty in the light, shapes, and colors. There is mystery as we look at cosmic landscapes such as the “Pillars of Creation” or cosmic cliffs such as the “Carina Nebula” — structures unimaginably far away in place and time. We have no clue what happens in them (or happened in times immemorial). Yes, the pictures may even inspire awe — a secular awe — if we let them pull us out of our human-all-too-human concerns and dare to lose ourselves for a moment in the universe’s vast expanse. 

    But what are we to make of these emotions that quickly fade as we return to our daily lives and our (by cosmic standards) trifling worries, joys, and sorrows? If the Webb telescope does not point to an intelligent designer (or at least the possibility of one), does it point to something meaningful at all? The mystery that we experience is not religious, as when we recoil in the face of an inscrutable divine will, manifest in the universe’s design. The mystery reflects, to put it bluntly, our ignorance. The more that ignorance is lifted — through even more powerful telescopes, new forms of space travel, and so on — the more intelligible the universe becomes. But intelligibility, as Nietzsche stressed, does not translate into meaningfulness. The universe we glimpse through the Webb telescope may be visually mesmerizing, but unlike Plato’s universe it cannot provide us with a purpose in life.

    Plato thought that studying the stars can replace the things that we commonly desire with something much better. If stargazing could lift us out of our mortal existence and put us in touch with something eternal and divine, there would be nothing ridiculous about it, no matter how much a Thracian maid may giggle. But if the stars are not a springboard to the divine, then staring at them for too long risks leaving us with nothing to care for at all. In a universe punctured by black holes, the Thracian maid may have the last laugh.

    Frau Freud

    In memory of Michael Porder

    I

     

    September 29, 1939, 20 Maresfield Gardens, Hampstead, London: on the first Friday after Sigmund Freud’s death, having accepted more than a half-century’s imposed impiety at her husband’s insistence, the seventy-eight-year-old Martha Freud started to light the Sabbath candles again. Licht-bentshn, as the ceremony is called. You light a pair of candles just as the sun goes down; circle your hands in a sweeping motion three times to gather the light and savor the candles’ warmth — the spirit of restfulness that they are meant to convey — and then you cover your eyes with your hands while reciting the blessing in which God is thanked for sanctifying us with the commandment to light these candles.

    Enter Shabbat the Queen, as the Sabbath is known in Jewish tradition, a presiding feminine presence in a patriarchal environment where most of the active, time-specific commandments, such as the wearing of tefillin or phylacteries (a pair of small black leather cubes, containing pieces of parchment inscribed with Biblical verses, one of which is strapped around the left arm, hand, and fingers and the other is strapped above the forehead) for the morning prayers, fall on men, since women are presumed to be busy with other priorities, such as housekeeping and childcare. And now here was the widow of one of the most formidable enemies of religion fulfilling one of the few obligations incumbent upon women under Jewish law. It was, surely, a form of poetic justice — or perhaps a testament to the hold of the past, however abjured it may be. 

    She was born Martha Bernays on July 26, 1861, in the German port city of Hamburg, into a highly regarded and intellectually advanced Jewish family to whom such recurrent observances meant a great deal. In performing the act of lighting candles at a prescribed moment on the Jewish calendar, Martha Freud was being more than assertive: she was being defiant. She was re-establishing her autonomy by renouncing a pattern of submission to her husband’s wishes. She was taking a deliberate step backward, toward her family ethos and the traditionalism of her origins before she became the compliant, devoted caretaker that Freud desired her to be, the “adored sweetheart in youth” who became “the beloved wife in maturity.” And she was also taking a step forward, towards the post-spousal woman she would become after the death of her husband, and reclaiming a small part of the ancient ritual-laden religious tradition that had been instilled in her while growing up.

    It was a tradition that her fiercely anti-clerical husband, whom she always referred to as “Professor,” as though she were his eternal student, ridiculed, forbidding her to light the Sabbath candles when they set up their own home. A cousin of Martha’s once recalled “how not being allowed to light the Sabbath lights on the first Friday night after her marriage was one of the most upsetting experiences of her life.” And Isaiah Berlin, who visited the couple at their house in exile in London, recalled that husband and wife were still arguing the issue of lighting candles, however playfully, as late as 1938: “Martha joked at Freud’s monstrous stubbornness which prevented her from performing the ritual, while he firmly maintained the practice was foolish and superstitious.” 

    The Freuds’ fifty-three years of marriage are reputed to have been exceptionally harmonious — one of their few disputes was said to have been about the correct way to cook mushrooms — but the couple’s divergent attitudes toward Judaism remained a source of underground conflict. On the face of it, they were wholly deracinated Jews in a golden age of Jewish deracination. They celebrated Christmas and Easter, and their son Martin, in his memoir Sigmund Freud: Man and Father, testified that none of the six children had ever entered a synagogue. Freud, a confirmed atheist whose work was dedicated in part to the debunking of the monotheistic worldview as a neurotic illusion, delighted in ribbing Martha about her religious attachment, pretending not to know the Hebrew name for “candelabrum,” for example, in a note he wrote her in 1907 after visiting the Roman catacombs: “In the Jewish [catacombs] the inscriptions are Greek, the candelabrum — I think it’s called Menorah — can be seen on many tablets.” 

    As if he didn’t know that it was called a menorah! He had learned the Bible as a child, after all, and it is doubtful that he lost his grasp of basic Hebrew or religious objects. Freud never denied his Jewishness, and went so far as to credit his religion for his own lack of prejudice and his uncowed single-mindedness. Yet he was always highly ambivalent about his Jewish identity. He demanded that Martha not fast on Yom Kippur, arguing that she was too thin to fast, and the only one of his books in which he referred overtly to his Jewish connection was Moses and Monotheism.

    In this regard he maintained a firm distance between his public and private allegiance — insisting, for instance, that psychoanalysis was not in any way a “Jewish science,” or jüdische Wissenschaft, which is how the Nazis and earlier anti-Semites had disparaged it. In a letter to Ferenczi, Freud wrote that “there should not be such a thing as an Aryan or Jewish science. Results in science must be identical, though the presentation of them may vary.” His awareness of the danger in having a specifically Jewish quality attached to his work, which could lead to anti-Semitic resistance to the psychoanalytic movement and render it less universally applicable, led him to court the Swiss psychiatrist Carl Gustav Jung, despite Jung’s very different ideas about psychoanalysis — and more curiously, despite Jung’s own anti-Semitism and racial theories. Freud put all his hopes in Jung, whom he called his “son and heir,” until they had a disagreement about the uses of mythology which led to a permanent estrangement.

    Yet the story of Freud’s Jewishness, the myth of his complete alienation from his patrimony, which is based largely on his non-observance of even the most fundamental of Jewish rituals, is more complicated than has been implied in most of the writing about him. (This misapprehension about Freud’s complete ignorance of Judaism was recently invoked yet again in Adam Kirsch’s essay “Freud as Talmudist” in the Jewish Review of Books.) In contrast to the general image of Freud as an am ha’aretz, an ignoramus, severed from his Jewish roots, we have a letter that he wrote to the chief rabbi of Vienna in 1931, for example, in which he passionately declared: “I am a fanatical Jew. I am very much astonished to discover myself as such in spite of all the efforts to be unprejudiced and impartial.” And more fully a few years earlier, in 1926, accepting an award from B’nai B’rith on his seventieth birthday, he told his audience in a letter that he had joined B’nai B’rith because

    I myself was a Jew, and it always seemed to me to be not only shameful but downright senseless to deny it. That which bound me to Judaism—I am obliged to admit it—was not my faith, nor was it national pride; for I was always an unbeliever, raised without religion, although not without respect for the so-called “ethical” demands of human civilization. And I always tried to suppress nationalistic ardor, whenever I felt any inclination thereto, as something pernicious and unjust, frightened as I was by the warning example of the peoples among whom we Jews live. But there remained enough other things to make the attraction of Judaism and Jews irresistible—many dark emotional forces, all the more potent for being so hard to grasp in words, as well as the clear consciousness of an inner identity, the intimacy (die Heimlichkeit) that comes from the same psychic structure. And to that was soon added the insight that it was my Jewish nature alone that I had to thank for two characteristics that proved indispensable to me in my life’s difficult course. Because I was a Jew I found myself free from many prejudices that hampered others in the use of their intellects; and as a Jew I was prepared to take my place on the side of the opposition and renounce being on good terms with the “compact majority.”

     “Raised without religion”? Hardly. A complicated case, clearly.

    To better understand the Freuds’ respective positions on Judaism, one need look no further than their individual backgrounds. “I was born on the 6th of May [18]56 in Freiberg/Moravia,” Freud wrote in a letter to his colleague Paul Federn in 1912. “My father and mother came from Galicia. My mother, née Nathansohn, from Brody, of very distinguished ancestry (the Nathansohn-Kallir family), my father of the merchant class. According to tradition, as he once reported to me, the Freud family is said to sometime have left their hometown of Köln [Cologne] during a period of persecution of Jews and then to have migrated eastward.”

    Throughout his lifetime, Freud, who was born Sigismund Schlomo Freud, went to great lengths to portray himself as having grown up in a deeply assimilated Reform Jewish family, steeped in modernist values and Viennese culture. (His family moved to Vienna when he was four.) It was a family, or so he led his colleagues and relatives to believe, in which Jewish holidays were minimally observed, and his own religious education was scanty, leaving him with but the vaguest understanding of Hebrew or Yiddish. A large part of our idea of Freud as a “godless Jew” — the term coined by the historian Peter Gay, himself an assimilated Jew who translated his last name from “Fröhlich” to “Gay” after becoming an American citizen and wrote extensively about Freud — derives from Gay’s insistence that Enlightenment values had completely displaced religious and ethnic ones. (This was consistent with Gay’s simplistic view of the Enlightenment itself.) Gay’s description of Freud’s father Jakob’s position on Jewish matters shows how the notion of total secularization was absorbed unquestioningly by Freud scholars: “Jacob Freud had emancipated himself from the Hasidic practices of his ancestors; his marriage to Amalia Nathanson [his second wife and Freud’s mother] was consecrated in a Reform ceremony. In time, he discarded virtually all religious observances….”

    It is worth pointing out that ritual observance is hardly the only measure of Jewishness. The truth about Jakob Freud’s relationship to his religious background is richer, as was his son’s. The father, too, was a complicated case. The Freud family consisted of transplanted Eastern European Orthodox Jews — Ostjuden, or Eastern Jews, looked upon with disdain as primitive and uneducated by German and Austrian Jews — who were only slightly assimilated, if at all. According to Emanuel Rice, a psychiatrist who closely examined this subject in his book Freud and Moses: The Long Journey Home, Jakob had studied for years in a yeshiva in Tysmenitz, Galicia, and was referred to in his youth as a “yeshiva bocher,” a yeshiva student. Rice also cites a granddaughter of Jakob’s who lived with him toward the end of his life and remembered him “reading the Talmud (in the original) at home.” If that is so, then Jakob possessed a considerable degree of Jewish literacy and cultivation.

    Then there is the famous and much debated issue of the Philippson family Bible — a German translation of the Tanakh — that Jakob gave his son on his thirty-fifth birthday, with an inscription in Hebrew that included a skillful pastiche of ancient quotations composed in the traditional manner from various Jewish sources. The inscription — “To my dear son Shlomo” — is written in a hand that is clearly comfortable with writing Hebrew. Although this dedication has been parsed by hoch analysts who assumed that Freud could not read Hebrew (an impression fueled by Freud himself), scholars such as Rice and Yosef Hayim Yerushalmi, in his Freud’s Moses: Judaism Terminable and Interminable, have shown that Freud had a considerable Jewish education and would have understood the inscription. (And indeed, one might ask, why would his father have inscribed such an important gift in a language that his son could not read?) Similarly, Freud’s insistence that he could not understand Yiddish — the “jargon” of the Ostjuden — is dubious, because his mother Amalia regularly spoke Yiddish. In addition to which, Rice argues, based on the testimony of one of Freud’s grandsons, Sigmund’s mother remained religiously observant until her death in 1930. This would go some way to explaining why Freud arranged for his mother to have a strictly Orthodox funeral and burial — although it would not explain why he chose not to attend it, sending his daughter Anna in his stead.

    Martha Bernays’ background, on the other hand, was indubitably Orthodox (“frum wie ein Stecken,” or “religious as a stick,” as my mother, an observant German Jew, used to say), one in which hidebound observances were scrupulously maintained. Her mother, Emmeline, wore a sheitl, or wig, which was required by the rabbinical tradition to preserve the modesty of married Jewish women, and kept a strictly kosher house. Hers was a tenacious and domineering personality, though outwardly she came across as mild and soft. These traits would antagonize her future son-in-law; he described her as “alien” and wrote to his fiancée: “I seek for similarities with you, but find hardly any.” (At the same time he conceded that Emmeline was “a person of great mental and moral power standing in our midst, capable of high accomplishments, without a trace of the absurd weaknesses of old women.”)

    Martha’s family was renowned in the Jewish community for their scholarship and their leadership. Isaac Bernays, her grandfather, was the chief rabbi of Hamburg in the 1830s and 1840s, and was respected for his combination of secular and religious knowledge, expressed in his sophisticated philosophical views, his linguistic skills, and his superior grasp of Torah, Midrash, and Talmud. He was a distant relative of Heinrich Heine; he appears often in Heine’s letters, and upon his death in 1849 he was acknowledged by Heine to have been an extraordinary personality. Bernays served in this important pulpit in the early years of the Reform movement and opposed it bitterly, formulating in response an approach in which it was possible to live, with certain limits, in both the religious and secular worlds. Bernays’s conception greatly influenced his student Samson Raphael Hirsch, the rabbi and theologian who provided Modern Orthodoxy with its guiding principle of Torah Im Derech Eretz, or Torah and the way of the world. (Hirsch was my great-great-grandfather.) 

    Despite Bernays’ strict commitment to Orthodoxy, he was also considered to be something of a religious modernizer, known for his innovative sermons given in German, and for bringing secular subjects — German, natural science, geography, and history — into the curriculum of the Talmud Torah charity school, which had formerly been limited to Hebrew and arithmetic. Hirsch and Bernays became acquainted when Bernays, after attending the University of Würzburg and studying at the yeshiva of Rabbi Abraham Bing, the chief rabbi of Würzburg and a well-known Talmudist, became a private tutor in the house of Hirsch’s father. Hirsch followed Bernays’ practice of fusing the Jewish and secular realms with the hope of keeping the tidal wave of Reform Judaism at bay. (Family lore has it that at the Hirsch school in Frankfurt boys did not wear yarmulkes during secular classes.)

    Two of Bernays’ sons were university professors. The eldest, Jacob, was a prominent philologist and classicist, a man of prodigious learning who was one of the pioneers of Quellenforschungen, or source criticism, according to which the primary method of classical studies was the intense study of the surviving texts of the ancient world for the purpose of coaxing from them knowledge of all that did not survive. He was famous for a controversial interpretation of Aristotle’s concept of catharsis, which he read not in moral terms but in medical ones — thereby making himself a kind of precursor to Freud’s own approach to the subject. His adherence to Jewish religious convictions prevented him from becoming a full professor at the University of Bonn, and so in 1853 he helped to found the Breslau Jewish Theological Seminary, which became one of the great institutions of modern Jewish scholarship. There he taught classics, history, German literature, and Jewish philosophy. In 1866 Jacob was finally appointed an assistant professor and chief librarian at Bonn, but he remained involved with the seminary at Breslau. Isaac’s younger son, Michael, was a Goethe and Shakespeare specialist who was professor of German literature at the University of Munich. He converted to Christianity (as did Isaac Bernays’ brother Adolphus) in 1856 and was baptized, which led his family to break with him at the same time as it furthered his career.

    Isaac’s other son, Berman Bernays — Martha’s father — was a merchant, as were the parents of his wife; Berman later became secretary to the well-known economist and constitutional law expert Lorenz von Stein, a great liberal who may have been the earliest theorist of the welfare state. When Emmeline (née Philipp) married him, his profession was given as “journalist.” One of four children (three older ones died in quick succession), Martha was born in 1861 and grew up in Hamburg in fairly modest circumstances. When she was six years old, her father served a stint in prison for bankruptcy; she is said never to have spoken of this incident. Freud’s uncle, meanwhile, was imprisoned for trading in counterfeit rubles and rumor had it that his father was implicated in the scandal. The writer Jenny Diski suggested, in a review of a biography of Martha Freud by Katya Behling, that Martha and Sigmund were united by a shared legacy of public shame.

    Martha’s family moved to Vienna when she was eight, but she, her mother, and her sister Minna never lost their attachment to Hamburg. “Neither she nor Minna ever made the slightest concession to the spirit and lifestyle of Vienna,” Behling observes in her biography, “and even after fifty years in Austria they still spoke perfect standard German.” To refuse the spirit of Vienna was to live the life of a stubborn traditionalist. One of Martha’s two elder brothers, Isaac, died at the age of sixteen, when she was eleven. (This was another loss that she shared with Freud, who also had a brother who died at a young age.) It is worth noting that the entwining of these extraordinary lineages lasted more than one generation: Freud’s sister Anna married Martha’s brother Eli, and not long after they moved to the United States their son Edward Bernays was born, who, with his pioneering studies of public opinion and its manipulation, became the father of public relations, mass marketing, and psychological warfare — in other words, a formidable shaper of modern life.


    II

     

    Despite being portrayed in later years as intellectually indifferent (in particular to her husband’s theories), the young Martha developed an interest in art and literature during her years at school. She had a keen appreciation of music and was an avid reader who knew the German classics (Goethe, Schiller, and so on); she had a special fondness for Stefan Zweig and Thomas Mann. Although the time she could allocate to reading was severely cut back during the busy years of her marriage, running a large household according to her high standards, she would return to her love of books after her husband’s death. During their courtship, Martha frequently wrote Freud letters in verse, and he shared his thoughts on John Stuart Mill with her. His first present to her was a copy of David Copperfield — Dickens became one of Martha’s favorite writers — although he warned her off the rude parts in Don Quixote, stating that they were “no reading matter for girls.” By the time he met Martha in April 1882, her sharp mind, slim and attractive figure, and coquettish charms had attracted many suitors. She had already turned down one proposal of marriage.

    The first time Sigmund Freud spotted the almost twenty-one-year-old Martha Bernays was at his family’s dining table; she was visiting his sisters together with her sister Minna. Martha was peeling an apple during their conversation: a decorous feminine activity, suggestive of industriousness and nurturing. The twenty-six-year-old Freud was an anxious, somewhat self-important medical student with next to no experience of women, despite being the favored brother of five sisters and his mother’s goldener Sigi, who was given a room of his own in his family’s small apartment. One wonders whether things might have gone differently if he had glimpsed Martha in a different guise, less the domestic woman in a genre painting and more like her sister Minna, eager to compete intellectually and inclined by nature to take up more air. “Since I learned that the first sight of a little girl sitting at a well-known long table talking so cleverly while peeling an apple with her delicate fingers, could disconcert me so lastingly,” Freud wrote to Martha in June, 1885, “I have actually become quite suspicious.”

    In any case, the young Martha presented a winsome picture, with her hair worn in a center part and pulled back in a chignon, elegantly clad in a high-necked dress with a lace collar and lace-up ankle boots. Soon, although rather awkward and shy, Freud was sure of his feelings for Martha and began calling her “Princess” and sending her a red rose every day accompanied by a poem in Latin or another foreign language. By the middle of June in 1882, a mere two months after they met, the couple was secretly engaged despite Emmeline’s opposition to the match — she considered Freud’s financial prospects to be dim. Freud began writing his “darling girl” and “darling Marty” long rhapsodic letters over the next four-and-a-half years of their engagement — the famous Brautbriefe, as their correspondence came to be called. It was only after Martha’s eldest brother Eli became engaged to Freud’s sister Anna at Christmas in 1882 that the couple felt comfortable in announcing their own engagement, although Freud never officially asked for Martha’s hand in marriage. In Freud’s first fervent letter to her, he wrote, “Dear Martha, how you have changed my life”; but he was also an incorrigibly jealous suitor, expressing absurdly patriarchal horror that his fiancée had travelled on holiday with only her younger sister for company: “Fancy, Lübeck! Should that be allowed? Two single girls travelling alone in North Germany! This is a revolt against the male prerogative!”

    Sigmund would later tell Martha that theirs was an instance of Liebe auf den ersten Blick, love at first sight, although it is unclear whether this was wholly mutual; Martha appears to have warmed up a bit more gradually. He would go on to observe, in one of the nine hundred and forty letters that he wrote to her during the four-and-a-half years of their courtship and engagement (they also collaborated on a secret journal, a Geheime Chronik), that she was not “in the strict painterly sense” a beauty, but that she had qualities he considered more important, such as generosity, wisdom, and tenderness. One wonders what his fiancée, a young woman just coming into a sense of her attractiveness to the opposite sex, made of this faint praise. (Freud’s own mother had been considered a great beauty in her day and the appeal of female comeliness was never lost on him.)

    During the period in which he experimented with cocaine, Freud, worried about her pallor, sent Martha a small dose to put color in her cheeks, and referred jauntily to the disinhibiting effect that cocaine had on him. “Woe to you, my princess, when I come,” he wrote to her on June 2, 1884. “I will kiss you quite red and feed you till you are plump. And if you are forward you shall see who is the stronger, a gentle little girl who doesn’t eat enough or a big wild man who has cocaine in his body.” On February 2, 1886, toward the end of another letter, he wrote: “Here I am, making silly confessions to you, my sweet darling, and really without any reason whatever unless it is the cocaine that makes me talk so much.”

    “I was told,” writes Sophie Freud, Martha’s granddaughter, in her memoir Living in the Shadow of the Freud Family, “that her greatest attraction for the young Sigmund Freud had not been her slender grace or charming features but her inner peace and serenity. She radiated calmness; and he sensed instinctively how wonderful it would be to have her near him after a day of hard work.” As for the scrutinizing suitor himself, Freud had good features, a thick beard and a penetrating gaze, but it seems that Martha initially found him too short and a bit intimidating.

    It is difficult to cobble together an image of the pre-Freud Martha Bernays because we have few accounts of her before he enters the picture (and not many more after). From the few impressions that have been documented, she seems to have been both self-contained and curious, a dutiful daughter who retained some independence of mind. She loved to read whenever she found the time, went to plays, and was a demon for needlework of all kinds. Most of all, Martha was marked by the North German sense of discipline, by a horror of shoddiness and leaving things half-done. Her daughter Anna would later observe to her own biographer, Elisabeth Young-Bruehl, that “my mother observed no rules, she made her own rules.”

    Although there were those, such as his brilliant Hungarian disciple Sandor Ferenczi (whom Freud eventually broke with as he did with so many of his followers), who contended that Freud’s unresolved connection to his narcissistically controlling mother Amalia left him with a fear of intimacy and of sexually passionate women in particular, it seems that, at least at the beginning of their involvement, Freud couldn’t get enough of his “deeply beloved, most ardently worshipped Martha,” as he described her in 1882. As long, that is, as she lived up to the very particular image that he had of a desirable mate. During the course of their epic engagement, undoubtedly fed by Freud’s anxiety as to whether Martha loved him with the same ardor that he loved her (Martha was by nature more reticent) and by his fiercely suspicious nature, Freud bullied her into becoming more of the docile and governable woman he was seeking. Despite maintaining that he did not want her to be a malleable toy doll, he sneered at her efforts to put her foot down and openly disliked what he called her “tartness.”

    Somewhere along the way it seems that Martha lost some of her moxie — her unfettered and even feisty spirit. One can see a glimpse of that spirit still peeking through late in their engagement, when she wrote Freud in irritation: “You now always only write once about each thing, and then nothing more however much I ask. I’m not used to this, my good man, it is certainly high time I brought you to heel, otherwise I’m quite sure to go completely thin and green for sheer annoyance and exasperation.” But by this time Martha could not have been left in any doubt as to precisely what it was that her husband-to-be expected in his partner: a certain docility, and clear and separate spheres of influence. “I will let you rule [the household] as much as you wish,” he decreed, “and you will reward me with your intimate love and by rising above all those weaknesses that make for a contemptuous judgment of women.” Perhaps in keeping with Martha’s understanding that her fiancé wanted to remake her into a more subservient personality, some of her letters show her posing as an intellectual innocent in need of Freud’s assistance: “I finally have read your postcard with Max’s help because it was difficult to read. Yes, that’s how stupid your dear girl is.”

    The critic Frederick Crews, a disenchanted Freudian and an ardent enemy of psychoanalysis, all the same wrote perceptively about the Svengali-like attitude toward Martha that hovered right beneath Freud’s almost fulsome expressions of adoration. “When he wasn’t complaining about his present ailments and future neglect,” Crews observed in Freud: The Making of an Illusion,

    the unhappy fiancé was instructing his beloved in how to become a properly deferential mate. He made it clear that she would have to change some of her ways, and the sooner the better. It was precisely Martha’s most admirable qualities — unself-conscious candor and spontaneity, a trusting nature, freedom from class prejudice, loyalty to her family and its values — that struck him as in need of revision. Thus he rebuked her for having pulled up a stocking in public; forbade her to go ice skating if another man were along; demanded that she sever relations with a good friend who had gotten pregnant before marriage; and vowed to crush every vestige of her Orthodox faith and to turn her into a fellow infidel.

    Although Crews is focusing here exclusively on the dictatorial aspect of Freud’s attitude toward his future wife, it is nonetheless a fairly accurate and unattractive picture — conventionally masculine for its time, perhaps, but especially disappointing in one of the great free-thinking apostles of modernity. 

     

    III

     

    How did the Freuds’ marriage negotiate the age-old problem of combining sexual passion with enduring love? Is there evidence in Freud’s writings of his view of the institution of marriage, and of the possibility of lasting erotic attraction? And how confidently can we infer the character of his own marriage from his “scientific” remarks on conjugal life?

    There is not much to go on. Curiously enough, the index of the Standard Edition of the Complete Psychological Works of Sigmund Freud, the Strachey edition, has no entry for “wife” and only a smattering of references under “marriage.” Still, he famously wrote about the disjunction between romantic affection and carnal desire in 1912, in a paper titled “The Most Prevalent Form of Degradation in Erotic Life,” a psychoanalytic exploration of what we call the Madonna-Whore Complex, in which he observed that “where such men love, they do not desire, and where they desire, they cannot love.” 

    Freud attempted to explain this phenomenon by looking to the restrictive cultural mores of his time and the inhibition on both men and women to delay sexual engagement well beyond the age of maturational readiness — “the long period of delay between sexual maturity and sexual activity which is demanded by education for social reasons,” which resulted in a “lack of union between tenderness and sensuality.” He went on: 

    In very few people are the two strains of tenderness and sensuality duly fused into one; the man almost always feels his sexual activity hampered by his respect for the woman and only develops full sexual potency when he finds himself in the presence of a lower type of sexual object; and this again is partly conditioned by the circumstance that his sexual aims include those of perverse sexual components, which he does not like to gratify with a woman he respects. Full sexual satisfaction only comes when he can give himself up wholeheartedly to enjoyment, which with his well-brought-up wife, for instance, he does not venture to do. Hence comes his need for a less exalted sexual object, a woman ethically inferior, to whom he need ascribe no aesthetic misgivings, and who does not know the rest of his life and cannot criticize him.

    Freud’s own marriage was conspicuously de-romanticized and then rather quickly desexualized after its emotionally impassioned beginning; the letters, which include overt references to erotic longings on both sides, seem to confirm the fatalistic analysis of conjugal love and desire in his paper. Although Ernest Jones, Freud’s British colleague and hagiographic biographer, deemed this vast collection of correspondence “a not unworthy contribution to the great love literature of the world,” the analyst Martin Bergmann once quipped: “We have wonderful courting letters before marriage. After marriage we only get laundry letters. It’s all practical. We don’t have a single love letter after marriage.” 

    In an earlier paper, from 1908, called “‘Civilized’ Sexual Morality and Modern Nervous Illness,” Freud provided a larger framework, a civilizational framework, for his dour view of marital life, in which he presented a somewhat disheartening view of the damage to intimate relations that is inflicted by the process of socialization that humans must go through in order to co-exist peacefully with others. “Experience teaches us,” he observed, “that for most people there is a limit beyond which their constitution cannot comply with the demands of civilization. All who wish to be more noble-minded than their constitution allows fall victim to neurosis; they would have been more healthy if it could have been possible for them to be less good.” And so he asks “whether sexual intercourse in legal marriage can offer full compensation for the restrictions imposed before marriage.” And he answers:

    There is such an abundance of material supporting a reply in the negative that we can give only the briefest summary of it. It must above all be borne in mind that our cultural sexual morality restricts sexual intercourse even in marriage itself, since it imposes on married couples the necessity of contenting themselves, as a rule, with a very few procreative acts. As a consequence of this consideration, satisfying sexual intercourse in marriage takes place only for a few years; and we must subtract from this, of course, the intervals of abstention necessitated by regard for the wife’s health. After three, four, or five years the marriage becomes a failure insofar as it has promised the satisfaction of sexual needs….The spiritual disillusionment and bodily deprivation to which most marriages are thus doomed puts both partners back in the state they were in before their marriage, except for being the poorer by the loss of an illusion, and they must once more have recourse to their fortitude in mastering and deflecting their sexual instinct….Women, when they are subjected to the disillusionments of marriage, fall ill of severe neuroses which permanently darken their lives….A girl must be very healthy to tolerate it, and we urgently advise our male patients not to marry any girl who has had nervous trouble before marriage.

    Many critical points could be made about the dark conjectures that Freud offers in this paper, which seem to be based more on personal experience than on scientific findings or cultural observations. Why, for instance, must married couples content themselves “with a very few procreative acts,” unless one believes that the sole purpose of sexual intercourse is procreation? What about sexual pleasure, a subject about which Freud extensively theorized? And his hypotheses about women’s fragility when faced with “the disillusionments of marriage” seem both ill-conceived and misogynistic — a blinkered attempt to understand female sexuality, which he regarded as murky and mysterious, in keeping with his idea of women as the “dark continent.” 

    In any event, Martha and Sigmund were married on September 13, 1886 in Hamburg. Frau Bernays became Frau Freud; she was twenty-five and he was thirty. Since a civil wedding on its own was not officially recognized at that time in Austria, the couple had to marry a second time under a chuppah with full Jewish ritual, despite Freud’s annoyance. The ceremony included the groom giving the bride a ring as well as crushing a glass underfoot in remembrance of the destruction of the Temple in Jerusalem — a note of sadness struck in the midst of happiness, a dissonance of the kind that Freud would often study in other contexts.

    Freud seemed to have felt abandoned within minutes of casting in his lot with Martha. “Once one is married,” he opined, “one no longer — in most cases — lives for each other as one used to. One lives rather with each other for some third thing, and for the husband dangerous rivals soon appear: household and nursery.” He added that, “despite all love and unity, the help each person had found in the other ceases. The husband looks again for friends, frequents an inn, finds general outside interests.” Hardly a chipper forecast for what lay ahead, but then Freud, despite his inner reserves of strength, was often the one who expressed anxiety. The task of reassurance fell to the unflappable Martha, who learned how to soothe him.

    The union produced six children in nine years, by which point Martha was thirty-four. After three children had been born, the family moved to Berggasse 19, near the university quarter in Vienna, where their apartment occupied an entire floor but was rather small and dark. Martha, who was in charge of their finances, set about looking after her new husband with the utmost attention and care for both his appearance and his comfort. She laid out and brushed his clothes for him — which were, as Martin, Freud’s eldest son, reported in his reminiscences, “cut from the best material and tailored to perfection.” It was said that such was the diligent nature of her caretaking that she put the toothpaste on his toothbrush. 

    While her husband worked up to sixteen and even eighteen hours a day, Martha carried around an enormous bunch of keys, the better to oversee a household that included, despite the family’s relative lack of money, a cook, a governess, two nannies, and a chambermaid. She ran the large family’s schedule like a well-oiled machine. Lunch was served promptly at one o’clock every day, a formal meal often featuring Tafelspitz or Rindfleisch, boiled beef and vegetables, with a horseradish sauce, a favorite of Freud’s. Sophie Freud, Martin’s daughter, recalled in a memoir that Martha maintained impeccable standards: “At each meal Mrs. Freud has a pitcher of hot water and a special napkin at her place, so that if anybody made a spot on the tablecloth she could hurry to remove it. Only her husband was permitted to make as many spots as he wished.” Dinner was at seven, after which Freud usually worked until midnight.

    The children, who were Martha’s domain when young although of keen interest to their father as they grew older, seem to have been suitably well-behaved, their parents having instilled in them the importance of their father’s work. As Martin recalled, “There was never any waiting for meals: at the stroke of one everybody in the household was seated at the long dining-room table and the same moment one door opened to let the maid enter with the soup while another door opened to allow my father to walk from his study to take his place at the head of the table at the other end.” 

    Jenny Diski, in her review of Behling’s biography, observed that the exemplary bourgeois surface that Martha helped to provide — “the rigid table manners, ordered nursery, and bustling regularity” — enabled her husband to organize his “deeper, hardly thinkable thoughts” into “something that looked like a scientific theory.” By polishing that surface and keeping the clocks ticking in unison, Diski grandly concluded, “Martha was as essential to the development of Freudian thought as Dora or the Rat Man.” This may have a grain of truth to it, in the logistical sense that an orderly environment allowed Freud to concentrate on his work, but it strikes me all the same as something of an exaggeration, as though Mrs. Einstein were to be credited with facilitating her husband’s ideas about energy and mass.

    The simple truth is that Freud never had any plans for Martha to be an intellectual partner or to participate in his intellectual life in any way. He was happy for her to take care of his every need and to view him as the genius of his age, the equal of Newton or Darwin, just as she was happy to call herself Frau Professor after Freud was given his title in 1902. Although he seems to have initially been drawn to Martha’s cultural sophistication, Freud quickly felt the need to downplay her braininess, a demotion in which she willingly acquiesced. Early in their engagement he referred condescendingly to “the charming confusion in your dear sentences.” In his memoir Martin Freud recalls that when his parents had distinguished visitors over for dinner and a learned guest began to recite from The Iliad, Martha had already departed the premises. “My mother,” Martin writes, “who knew no Greek and, in consequence, was without any admiration for Homer’s immortal epic, had quietly withdrawn earlier.” 

    Then, too, there was the slight puzzlement that she expressed at her husband’s choice of profession, as though his high-flying speculations were beyond her ken. “I must admit,” she said, “that if I did not realize how seriously my husband takes his treatments, I should think that psychoanalysis is a form of pornography.” The Viennese analyst Theodor Reik reported that, based on conversations with Martha during walks that they took together, “I got the decided impression that she not only had no idea of the significance and importance of psychoanalysis, but had intensive emotional resistances against the character of analytic work. On such a walk she once said, ‘Women have always had such troubles, but they needed no psychoanalysis to conquer them. After the menopause they become quieter and resigned.’” That sentence, so dismissive of the real problems faced by herself and other members of her sex, is painful to read. 

     

    IV

     

    After the birth of their sixth child, the Freuds — or more precisely, Freud — decided to practice abstinence as a means of contraception. While he believed that pregnancy “is a normal state in a young woman,” he also held that coitus interruptus led to neurosis, a view that was based on some misbegotten Darwinian notion about the right and wrong “discharge” of semen. In some ways, of course, Freud was very much a man of his time and place, fascinated by the notion of “perverse” desires but also cautious and somewhat sexually inhibited. Freud professed to dislike Vienna, the hothouse capital of the Hapsburg empire, writing to his colleague Wilhelm Fliess that “I hate Vienna with a positively personal hatred.” But Vienna was all the same the center of intellectual life in Europe — a bubbling cauldron of ideas about literature, music, art, architecture, science, and philosophy — and therefore a stimulant to his thinking. It was also a city whose culture was intensely preoccupied with sex. While modernist painters and writers dug deeply into erotic life, the bourgeoisie had a more prurient and censorious attitude toward sexuality, especially as it applied to women and children. Women were expected to be chaste before marriage, and the youthful exploration of sexuality through activities like masturbation was vehemently discouraged. (Freud’s fine sense of humor did not desert him even on salacious subjects such as these. The problem with masturbating, he once observed, is knowing how to do it well.)

    In his research and his theory, Freud indicted these Victorian mores as a source of neurotic conflict, and his views on infantile sexuality (“one of the sources of Freud’s enduring appeal, I believe,” observed Paul Roazen in his book Meeting Freud’s Family, “is that he so often took the side of the suffering child”), were remarkably forward-looking — and yet those same Victorian mores were reflected in some of his own constricted views on the subject of carnal pleasure. “I stand for an infinitely freer sexual life,” he wrote in a letter, “although I myself have made very little use of such freedom. Only so far as I considered myself entitled to.” Having proposed that the sexual drive was necessarily self-divided (as he believed all the drives were), he took the view that sex could never be completely gratifying.

    His ability to sublimate erotic desire in his work was remarkable, and in a small book about Leonardo da Vinci he observed that Leonardo’s apparent asexuality set him “above the common animal need of mankind.” In a letter written to Fliess in 1897, when he was forty-one, Freud made a reference to ceasing connubial relations entirely. “Sexual excitation is of no more use to a person like me,” he wrote, although he attested to some incidents of sexual intercourse with Martha later on, recording in his diary at the age of sixty that he had “successful coitus Wednesday morning.” He also wrote to Fliess that he often suffered from impotence. (Some scholars have argued that Freud’s decision to abstain from sex, although ostensibly to avoid having more children, may have stemmed in part from an unconscious desire to get back at Martha for her sexual reticence during their prolonged engagement.) According to Oliver Freud, who was born fourteen months after Martin, neither parent thought to talk to their sons about the birds and the bees. A family doctor was enlisted to teach the boys about sex.

    Freud’s own sexual behavior reminds us that he was not only the champion of psychological and sexual enlightenment in his work, but also the champion of the rewards — and demands — of sublimation and repression. In 1936 he characterized his married life with a startling degree of restraint in a conversation with Princess Marie Bonaparte, one of a bevy of female friends (among them Minna Bernays, Lou Andreas-Salomé, Hilda Doolittle, and Helene Deutsch) with whom he shared his thoughts. “It was really not a bad solution of the marriage problem,” he said, “and she is still today tender, healthy, and active.” When one compares this wan statement to his impassioned declaration to Hilda Doolittle, the poet H.D., who spent several years in analysis with him, that “I am an old man and you don’t think it worth your while to love me,” they almost seem to come from two different men.

    To his son-in-law Max Halberstadt, he conveyed his relief that his children had turned out well and that Martha “has neither been very abnormal nor often ill.” (Recall the patronizing passage in the paper of 1908 about “regard for the wife’s health.”) This was a far cry from the sentiments that he felt during their engagement, when he clashed with Martha’s mother about who had the greater claim to her daughter: “Marty, you cannot fight against it; no matter how much they love you I will not leave you to anyone, and no one deserves you; no one else’s love compares with mine.” Martha, by contrast, sounded more enthusiastic when describing her marriage to her granddaughter Sophie: “I wish for you to be as fortunate in your marriage as I have been in mine. For during the fifty-three years I was married to your grandfather, there was never an unfriendly look or a hard word between us.” With accommodation and compromise on her part came harmony in their conjugal relationship, whatever it may have lacked in the way of higher communion.

    This brings us to the sensational theory, originated by Jung and fanned over the decades by Peter Swales (also known as “the guerilla historian of psychoanalysis”), that after ceasing to sleep with his wife Freud embarked on an affair with Minna Bernays, Martha’s smart, witty, and acerbic younger sister. The two had corresponded while Freud was pursuing Martha, and clearly they had a companionable relationship. Among other things, they were both avid card-players. They lived together in the same household for forty years — first at Berggasse 19 in Vienna, where Minna moved in in 1896, and then at 20 Maresfield Gardens in London. Indeed, the sleeping arrangements in Vienna were weirdly intimate, as I saw for myself when I visited Berggasse 19. Minna’s small sleeping quarters were right next to Sigmund’s and Martha’s bedroom, and the only way Minna could get to her room was to go through the bedroom that the Freuds shared.

    The two also took trips together, and the rumors of their illicit liaison were fueled in 2006 by a German sociologist who found a yellowing hotel ledger entry written in Freud’s distinctive scrawl at an inn in the Swiss Alps where the psychoanalyst, then forty-two, and Minna, then thirty-three, stayed for two weeks in 1898. The couple had registered as “Dr Sigm Freud u Frau” — as husband and wife. They took the largest room at the inn, which had the equivalent of a double bed. This last detail persuaded some Freud loyalists, such as Peter Gay, of the veracity of the rumors, although I myself remain dubious. For one thing, they might have checked into a single room because of Freud’s frugality; they were anyway used to being in close quarters, and it is unlikely, given the Victorian ethos of the era, that they could have rented the room if their actual unmarried relationship had been made clear. For another, despite his heretical approach to religious strictures and his theoretical advocacy of greater sexual freedom, Freud strikes me as a man fairly haunted by guilt, and he would have been disinclined to cheat on his devoted wife. There was also the fact that Minna was not particularly attractive, and female appearance was important to Freud. In one of his early courtship letters he told Martha that her nose and her mouth were shaped “more characteristically than beautifully, with an almost masculine expression, so unmaidenly in its decisiveness.” Such a microscopic analysis of his future bride’s less than ideal features suggests that he was a critical observer of female appearances. Then too, he himself had once noted to his future wife that “similar people like Minna and myself don’t suit each other specially.”

     

    Who, then, was Martha Freud? Why is she so hard to find amid the obsessive interest and research that swirls around her husband? Was she really just a contented Hausfrau, an efficient manager of a busy household, a firm, undemonstrative, but affectionate mother, and a devoted wife who “tried as much as possible,” as she wrote in response to a condolence letter after her husband’s death, “to remove the misère of everyday life from his path”? Assuming that Freud’s ideas about everything from female psychology to wayward sexuality to neurotic conflict were drawn even slightly from his own experience, what influence did Martha’s personality and her interactions with him have on psychoanalytic theory? It is hard to imagine her living with him for more than half a century and not having had some impact on him beyond making sure that his boiled beef was served on time. Moreover, as Sophie Freud points out in her book, “some of Freud’s most fundamental discoveries were made by observing his own children. Mrs. Freud was his assistant in helping to transform the nursery into a psychological laboratory. But the children were not to know they were being used as guinea pigs. ‘Above all, the family must be normal,’ she said.”

     It is all the more surprising, then, that Martha has been of so little interest or consequence to the many biographers of her husband. In recent years there has been Katja Behling’s biography of her in German, as well as a novel called Mrs. Freud by the French writer Nicolle Rosen. There is also a short memoir by the Freuds’ long-standing housekeeper, Paula Fichtl, but it does not add much to the overall picture except for the author’s own adulation for Herr Doktor and her unstinting admiration for Martha’s capabilities and resilience. As Behling recounts in her biography, Martha was astonishingly courageous. When, shortly after the Anschluss, a group of armed SA men showed up at Berggasse 19, sending Paula into a tizzy, Martha is said to have maintained her composure, suggesting that “the gentlemen” might wish to deposit their rifles in the umbrella stand for the duration of their visit. And when another phalanx of Nazis stormed into their apartment a few days later, Paula’s upset was met with an ironic comment: “Surely, Paula, you did not expect the Nazis to come with flowers.”

    Is Martha’s featureless, sphinx-like presence an odd gap in the story, a glitch in the hermetic, all-consuming narrative of male genius? Or does it point to some deeper absence, some way in which Martha willingly went along with being sidelined from her husband’s larger concerns the better to ensure a peaceful home from which Freud could venture out with his unconventional, often alarming ideas? One might argue that in a certain fashion she was her husband’s muse — not a particularly glamorous or inflaming one, but a steadfast, earth-bound figure who helped him roam freely in his head. It was perhaps Martha’s very ordinariness — her “fully developed and well-integrated” personality, as Ernest Jones put it — that cast into relief the neuroses and the pathologies that Freud found everywhere he looked.

    Freud’s attitude to Martha, which verged on the fondly dismissive, is not irrelevant to the sense that psychoanalysis missed out on some of the big questions, particularly about women, and fell short of its liberating aspirations. Yet it is too easy to dismiss her as a martyr, unless one adds that she was a willing and seemingly contented one. Indeed, who is to say that she wouldn’t have played the role of helpmeet to a lesser figure as well, to a man who was not a genius? Or that, despite her intelligence and sensibility, she, like many people, was simply not driven to live up to what might have been her potential? Not every wife, no matter how intelligent or talented, wishes to compete with her husband. The competitive impulse, which looks to us invariably like a strength, can also derive from weakness and an infirm sense of self. Martha clearly knew who she was. Her dignity is undeniable. Her power derived from being the ultimate caretaker and ur-wife, presiding over the circumstances that facilitated Freud’s work. One might even see Martha’s abnegation of self — if abnegation it was — as an adult example of “altruistic surrender,” which was the term that her daughter Anna Freud coined for the children she worked with at the Tavistock Clinic who sacrificed their own well-being in the service of another child.

    In any case, Martha seems to have gone through something of a sea-change in the wake of her husband’s death at the age of eighty-three in September 1939, after years of excruciating jaw cancer. Aside from returning to lighting the Shabbos candles, she took to reading again, often sitting on the stairs or on a chair on the half-landing between the ground floor and the first floor at Maresfield Gardens, and even developed a curiosity about Anna’s patients, marveling at how expensive child analysis was. Although she remarked that life had “lost its sense and meaning” without her husband, she carried on in exile with her energetic and orderly existence, and appears to have relished being at the center of a crowd of doting and often celebrated visitors who came to see her. She might be said to have embodied the spirit of Goethe’s das Ewig-Weibliche, or Eternal Feminine, a concept that is profoundly alien to us but was a pillar of Martha’s culture. As a frail but vivid old woman who had been the lifelong companion of an undeniable visionary, she must have aroused curiosity of her own accord. Frau Freud died in London on November 2, 1951, at the age of ninety; she was cremated, and her ashes were joined with her husband’s in an ancient Greek vase in what is called the Freud Corner at Golders Green Crematorium. She took her mystery — her hopes, her disappointments, and her regrets — with her.

    The Poet Misak Medzarents, and Two Poems

    He was born in 1886 in Armenia, in a remote mountain village called Pingyan above the Aradzani River. It was not the typical Armenian village of the Ottoman Empire, subjugated by Turkish authorities and terrorized by marauding Kurdish tribes in the guise of tax collectors. Pingyan was an unusual place: it was secure and very nearly free, a place where life could be happy. After the Moslem conquest of Anatolia began in the seventh century, Armenians struggled to preserve their liberty in princely states that juggled alliances with larger powers and tried to hold their heads above the flood of invasion by Turkish and Kurdish nomadic groups. After the fall of the Armenian Bagratid capital Ani in the east, the extinction of the Armenian Cilician kingdom in the south in 1375, and with that, the end of national sovereignty, little strongholds of freedom endured to which men might make their way — the mountain fastness of Sasun above Lake Van, Artsakh (today’s Nagorno-Karabagh) in the east, Zeitun in the southwest, and, in the northwest of historical Armenia, the village of Pingyan. (The name derives from the diminutive, Benik, of its founder, a prince named Benjamin.) 

    The houses, churches, schools, mills, and monasteries of the village clustered on the steep mountainside, below a well-defended pass; the villagers went to their fields on the other side of the river across a bridge with a great iron gate that was locked at night. The name of Misak’s family, the large Medzadourian clan — the young poet was to shorten the name to Medzarents — suggests they were descendants of a noble “great house” (medz dun) who had heard of the fortress village and made their way there across Armenia, centuries earlier, from Ani or even farther east. The villagers spoke Armenian, not the Armeno-Turkish of much of the Armenian community in Anatolia. They used metal tokens inscribed in the Armenian alphabet for trade, and maintained a school in which the Modern and Classical forms of Armenian were taught. They were horsemen and marksmen, and in their homes books shared the walls with guns. Some families owned businesses in the distant Ottoman capital, Constantinople, and were prosperous; workingmen sent remittances home.

    It made for a happy boyhood, for a time. Misak learned Armenian classics and foreign languages at school, read poetry, rode horseback to the fields, heard work songs, dozed and dreamed under trees, listened to his mother’s prayers and to legends about water spirits, and played with his friends. The Armenian massacres that began in 1894 and were to culminate in the Genocide of 1915 affected even Pingyan, and the family moved for safety first to the city of Sepastia (Sivas), then in 1902 to the capital, where Misak’s father had a business. In Sivas, a Moslem butcher’s son stole up from behind and stabbed Misak in the street. He survived the attack, but it traumatized and weakened him. The family chose the comparative safety of Constantinople, with its large Armenian community: Misak went to school, made friends, frequented the offices of literary journals, read widely, and was a prolific writer. When he was twenty-one he published two small volumes. But that was the year before his death of consumption in 1908: his life, like that of his precursor Bedros Tourian, was destined to be short.

    Tourian had invented modern Western Armenian poetry almost singlehandedly, in the short years before his death in early 1872. Armenians closely followed European literary trends, and in the period between the lifetimes of the two poets Symbolism had become the dominant mode in poetry, music, and the arts. Through the use of dream imagery, indistinct allusions, exotic colors, and magical patterns of sound, Symbolists sought to open the doors of perception to an emotional and aesthetic sensibility towards a supernatural reality that, they believed, lay just beyond the everyday. The French poet Stéphane Mallarmé and the composer Claude Debussy most famously exemplify the movement; but it can be argued that its beginnings were much earlier, and that William Blake and Edgar Allan Poe were proto-Symbolists. I will have more to say about Poe presently, in the discussion of what I consider to be Medzarents’ greatest poem, which I will give in translation.

    Medzarents was described by his contemporaries, and sometimes derided, as a Symbolist, and he retorted defensively, in versified satire. The characterization is fair for some of his lyrics, but it is not complete — his work is not confined by narrow categories and definitions. There is evidence, in the form of a few fragments of poems, that Medzarents was developing a new style, sharper, harsher, more vivid, that reflected political events and a revolutionary consciousness. An analogous evolution from early Symbolist verses to a raw and jagged, sharply strident, revolutionary kind of verse typifies the work of the greatest Eastern Armenian poet, Yeghishe Charents, eleven years Misak’s junior, one of the great early non-Russian poets of the Soviet Union. Charents lived longer, but not by much: he was killed in November 1937 in the Stalinist purges. He wrote homoerotic verses that were unpublished in his lifetime and that still arouse controversy among the ultra-nationalist establishment in post-Soviet Armenia. We cannot know with any certainty what Medzarents would have written had he lived on in the turbulent twentieth century: he died on the eve of the Ottoman revolution and just a few years before the Armenian genocide. It is almost certain that he would have been murdered with the other two-hundred-and-fifty-or-so Armenian luminaries of the capital at the start of the Genocide in April 1915. The life was far too short; the future, far too dark.

    Let us consider one poem in detail, with its far-reaching ramifications. It is called Gaydzer, or “Sparks,” and was published on September 10, 1905, in the journal Masis with another verse and the heading Yergu sirerk, “Two Love Songs”; and it was reprinted in the poet’s first volume of verses, Dziadzan, “Rainbow,” two years later. The political activist, publisher, and literary scholar Aram Andonian wrote the preface to the book. The poem consists of four quatrains; each line is seven syllables in length. (Armenian stress is regular: the accent falls on the final syllable of a word except for enclitics — short unstressed words, of one syllable, following a longer one.) The rhyme pattern of the poem in the original is a conventional one: ABCB BCCD BCBA ADDD. Here is my translation.

    The drumbeat of my soul and its tambourine’s

    Trill this night descend in laughter.

    Like cymbals clashing, they delight:

    My memories clap their hands together. 

     

    Accompanying the castanets’ song

    Your falcon’s eyes’ flame,

    Purple-born and fire,

    Burn within my soul again.

     

    Drunken on that intangible ambrosia

    With kisses redolent of flowers

    Sway there in mad dances

    The regal lady’s undulations.

     

    The dark night gently wears away!

    Oh, just once more, just once again!

    My soul intoxication craves

    In the rivulets of fire flowing from your gaze. 

    The poem is a reverie, an induced, dream-like act of imagination by the author at night within the four walls of his room. There are frequent images of fire, but the title, “Sparks,” suggests that the fire could ignite but has not yet; and before dawn the passionate but insubstantial vision fades, even though the poet pleads for it to stay. The first three stanzas progress through the five senses, each more physically immediate than the one before it. The first stanza bursts on the reader with vividly percussive sound: drums, tambourines, cymbals, and hands clapping. The second stanza moves to the sense of sight: fire and flame evoke bright red and rich gold, and the poet also uses the epithet dziranedzín to describe the fire in his imaginary beloved’s eyes. This word is a calque, that is, an exact translation of a foreign word according to its parts, of the Greek and Latin adjective porphyrogenitus, literally “born to the purple,” meaning “noble.” In Armenian it is, serendipitously, richly alliterative. It is a word that describes a quality while also making one think of a color, and part of the way it makes the connection is through the repetition of a sound. That is a game that certain special words can play in our minds, making us see and hear in a new and wider way. 

    The third stanza combines the remaining three senses of smell, taste, and touch: the poem overwhelms with voluptuous imagery of flowers, intoxication, and kisses. The poet is drunk on nectar: the Greek word, whose variant and equivalent is ambrosia, means, literally, “immortal,” and it is echoed by the compound word dzaghg-anúysh. Dzaghíg is “flower”; anúysh is a Classical Armenian loan word from pre-Islamic Persian meaning “immortal nectar” again. In Modern Armenian, with the diphthong reduced, anúsh also means “sweet.” The Armenian for “kiss” is hampúyr, which means literally a sharing of fragrances, such as sweetness. Through Medzarents’ choice of words, in sum, the senses all blend into one another.

    In the final quatrain of the poem, the sensuous vision fades as morning dawns, though the poet begs it to linger and prolong his self-induced intoxication. The final word of the poem, a Classical Armenian compound most familiar from the Hymn of Vesting of the Divine Liturgy, is hrahosán, “flowing with fire”— used here of Misak’s beloved’s gaze. The word here recalls the poet’s palette of crimson, purple, and gold; but in its liturgical context it alludes to the fire of the Holy Spirit that descended upon the Apostles in their upper room and conferred upon them the gift of tongues. Linguistic inspiration is precisely what this poem is about in the first place; but the final stanza also reminds one plaintively that the sumptuous scene conjured so richly by the poet’s imagination is insubstantial as a dream. The reader of English will be reminded here of Prospero’s words in The Tempest:

    Our revels now are ended. These our actors, 

    As I foretold you, were all spirits and 

    Are melted into air, into thin air: 

    And, like the baseless fabric of this vision, 

    The cloud-capp’d towers, the gorgeous palaces, 

    The solemn temples, the great globe itself, 

    Yea, all which it inherit, shall dissolve 

    And, like this insubstantial pageant faded, 

    Leave not a rack behind. We are such stuff 

    As dreams are made on, and our little life 

    Is rounded with a sleep.

    As I was writing the first lines of this essay, the sun was beating down and the branches on the pomegranate tree in our California garden were bent to the ground with fruit. They reminded me of the words in poetry that are heavy, dense with many meanings; we have already seen how Medzarents chooses ripe words bursting with juicy seeds. There are yet others in this poem that harken back to antiquity, to the archaic pleasures of noble hunters, to regal feasts. Shahení, “falcon-like”; pampish(n), “queen”— Armenian is an ancient language with roots in the Thraco-Phrygian akin to proto-Greek, layered with the vocabulary of many centuries, and these words are redolent of the Parthian age, the heroic epoch of the fourth century chronicled in the Epic Histories of P‘awstos Buzand. My teacher and friend Nina Georgievna Garsoïan, who passed away last year at the age of ninety-nine, published the definitive translation and study of that work, in which it is related that the Sasanian Persian Shah of that time, Shapur II, captured his perennial rival and enemy, the Armenian Arsacid Arshak II.

    The story is taught in every Armenian school: Shapur had his servants sprinkle Armenian earth on the ground of his banqueting tent. When Arshak trod upon alien Iranian soil, he meekly professed fealty and submission; but when he stepped on the earth brought from his native land, he angrily promised rebellion. At the royal feast later that fateful day, he derided Shapur as the usurper of the throne of his own clan, the Parthian Arsacids, and audaciously demanded his rightful place at the head of the table. Arshak was clapped in irons and imprisoned in a place called the Fortress of Oblivion, from whose dark confines no inmate ever emerged. The prisoners’ very names and memories were expunged from official records and forbidden by law to be spoken. But the Armenian king’s faithful eunuch Drastamat (a Parthian word meaning “welcome”) secured permission to entertain his liege lord one last time, with royal viands and dancing maidens. At the end of the revel, Arshak seized a fruit knife and plunged it into his own heart, lest he live past the end of the entertainment and return to the dim existence of a captive. Thus did the voluptuous vision end; and from his choice of words and images it is all but certain that Medzarents had the famous episode from P‘awstos’ history in mind. 

    Epameroi — ti de tis, ti de ou tis? Skias onar anthropos, declared the poet Pindar in a celebratory ode to the victory of an ancient Hellenic athlete. “Creatures of a day. What is somebody? What is he not? Man is the dream of a shadow.” Yet when glory rests upon a man, the moment in his life’s span is sweet as honey, he adds. But when the laurels wilt, the dream fades, the revel ends, the vision flies away, we have the poem, the play, the historical saga, the ode. What is it that gives these written and spoken words power over millennia? What is it they immortally capture? What can they do that the reality of an ordinary day cannot?

    Let us take Medzarents’ word, dziranedzín, in the second stanza of the poem discussed above. It means “born to the purple,” and thus combines a color with the idea of nobility. It has a particular sound-signature, a musical quality, in the poem, too, for it resonates very strongly with other words the poet has already used in the first stanza: dzidzágh, “laughter,” dzĕndzghá, “cymbal,” and dzap‘, “clap”. Armenian dziraní means “purple,” but it is also the color of Homer’s wine-dark sea, for thus we find it as dziraní dzóv (the latter word meaning “sea”) in the earliest Armenian poem, the Song of the Birth of Vahagn recorded by the historian Movses Khorenats‘i. The word dzirán means also “apricot,” a fruit originally from China that the ancient Romans called the Armenian plum. Apricots, the fruit a medieval writer in Asia praised as “the golden peaches of Samarkand,” were an expensive commodity, even as cloth dyed in the royal purple was precious. The Russian linguist Pyotr Kocharov has convincingly argued that the Armenian word is a very early loan from Old Iranian zaranya-, meaning “golden.” 

    That origin goes far to explain the semantics of the word in its development through time, its wide array of meanings and associations. As a thing, a dzirán is a choice fruit of fiery, red, and purple hues. As a color, dziraní evokes a range of hues over the spectrum; and as a quality, it confers nobility. (The Biblical and later Hebrew word for purple, argaman, is likewise freighted with the implication of nobility and great value. It is derived from Akkadian argamannu, which has the dual meaning of “purple” and “tribute.” An ancient Anatolian derivation is possible, but I would suggest again a very early Iranian origin, comparing for instance the Iranian-in-Armenian name Argawan, “precious.” In Hebrew magical texts, argaman serves as an acronym for the names of the angels Uriel, Raphael, Gabriel, Michael, and Nuriel, its meaning as a word on its own doubtless conferring additional nobility upon its celestial referents.)

    Now, one can call a word — say, dzirán — a signifier. That is, it signifies, refers to, names an object, a thing in the physical universe. The thing that the signifier refers to — in this case, an apricot — is the signified. In general, the signifier is arbitrary: there are different words in different languages for an apricot, and none of them has a provable relationship to the object it denotes. (I say generally, because most languages also have onomatopoeic words that echo the perceptible sound or quality of the thing or action that they describe.) It is also the case that the signifier is inadequate fully to express the reality of the signified. I can say “apricot,” but the colors of the fruit in the sun, its silky feel, the juice, the pit within — obviously one word cannot carry all these features, and even a page-long description would not be the same as an immediate experience. A long shadow thus falls between signifier and signified, over the centuries of human speculation about language: we feel instinctively that there must be a relationship, but there is not. To compensate, our ancestors crafted the myth of an Adamic language, the primordial, perfect speech in which the first man gave each of the animals its true name. 

    But look what Medzarents has done! As we have seen in the analysis of his chosen term, dziranedzín, his signifier is more, not less: a word sparking various mental and aesthetic associations, a signifier that has more to it than the signified. I think that this can serve as one good definition of a poem (not to the exclusion of other definitions, of which there are many): a literary form in whose lexicon a signifier is to be encountered that is greater than the signified.

    That definition has implications worth further thought, but I don’t want to say farewell just yet to Medzarents’ magical word dziranedzín, which means literally “to the purple born” but also much more. We encounter the English form of its Greek parent, porphyrogenitos, in 1839 in the poem “The Haunted Palace” by Edgar Allan Poe:

    In the greenest of our valleys 

    By good angels tenanted, 

    Once a fair and stately palace — 

    Radiant palace — reared its head. 

    In the monarch Thought’s dominion, 

    It stood there! 

    Never seraph spread a pinion 

    Over fabric half so fair! 

     

    Banners yellow, glorious, golden, 

    On its roof did float and flow 

    (This — all this — was in the olden 

    Time long ago) 

    And every gentle air that dallied, 

    In that sweet day, 

    Along the ramparts plumed and pallid, 

    A wingèd odor went away. 

     

    Wanderers in that happy valley, 

    Through two luminous windows, saw 

    Spirits moving musically 

    To a lute’s well-tunèd law, 

    Round about a throne where, sitting, 

    Porphyrogene! 

    In state his glory well befitting, 

    The ruler of the realm was seen. 

     

    And all with pearl and ruby glowing 

    Was the fair palace door, 

    Through which came flowing, flowing, flowing 

    And sparkling evermore, 

    A troop of Echoes, whose sweet duty 

    Was but to sing, 

    In voices of surpassing beauty, 

    The wit and wisdom of their king. 

     

    But evil things, in robes of sorrow, 

    Assailed the monarch’s high estate; 

    (Ah, let us mourn! — for never morrow 

    Shall dawn upon him, desolate!) 

    And round about his home the glory 

    That blushed and bloomed 

    Is but a dim-remembered story 

    Of the old time entombed. 

     

    And travellers, now, within that valley, 

    Through the red-litten windows see 

    Vast forms that move fantastically 

    To a discordant melody; 

    While, like a ghastly rapid river, 

    Through the pale door 

    A hideous throng rush out forever, 

    And laugh — but smile no more. 

    The poem is an allegory: the palace is the poet’s head; the two luminous windows, his eyes. Long ago the king of Thought reigned there in serenity; but madness invaded the fortress of the mind and now all is chaos within: the windows that were once bright are now “red-litten” (or, as a variant of the text has it, “encrimsoned”). Jerome McGann, in a study of Poe’s poetry, asserts that the key word of “The Haunted Palace” is “porphyrogene,” which he considers both noun and adjective, and “fundamentally, a synaesthetic figure, both chromatic and phonetic,” part of Poe’s “musical architecture.” That is, the word “porphyrogene” has manifold functions in the poem. It is both a word describing the king who sits in the palace, and his title. It evokes a color, royal purple, while also serving as a central chord of the poem’s music — and most of all, it sounds just right. 

    Poe stressed the sound-structure, the musicality of poetry, some have said, even more than its overt verbal meaning, and built his great and final poem, “The Bells,” around the single tantalizing word “tintinnabulation.” It was a poem in a chrysalis, ready to open its wings and fly as music: the Russian translation of the poem by the Symbolist Konstantin Dmitrievich Bal’mont became Rachmaninoff’s choral symphony The Bells; and Phil Ochs, the great American protest singer-songwriter of the 1960s, took its tintinnabulation to the guitar. Medzarents would have loved it. McGann, who studied “The Haunted Palace,” is mistaken, I believe, in thinking that Poe coined the word “porphyrogene”: we have reviewed its long and noble pedigree. But he is right to stress the centrality of the term: the word brings together, like its Armenian cousin but perhaps not as variegatedly, different concepts and different kinds of realities and perceptions. It is one of those poetic signifiers that are more than the signified. The word leads the reader, as Poe wrote in his essay “The Poetic Principle,” “to perceive a harmony where none was apparent before.”

    Valerii Bryusov, a Russian Symbolist poet of the turn of the twentieth century, was enamored of Armenian culture and in 1916 he edited a volume of translations by various hands including his own, called The Poetry of Armenia from the most ancient times down to our own days. The book, whose proceeds went to the relief of Armenian refugees from the Genocide, includes several of Medzarents’ poems, though not “Sparks.” (The poem was translated into Russian, badly and not by Bryusov, in an anthology published in Erevan in 1987.) But in 1924 Bryusov did translate Poe’s “The Haunted Palace” into Russian. The translation is of interest here for two reasons: first, the poem has an affinity to Medzarents’ “Sparks,” and it is worth knowing how a poet so strongly attracted to Armenian verse approached it. The other reason is perhaps less intuitively obvious and requires some explanation.

    Why are Medzarents’ poems so chromatic? If color was so important to him, wouldn’t it have been more sensible for him just to paint a picture? It is not an unreasonable question: in the years before reproducible media such as photography and cinema attained prominence in the arts, painters were a much more visible presence than they are today. This was no less the case for Armenian Constantinople or Tiflis than for Paris or St. Petersburg. Martiros Saryan’s palette blazes with the gorgeous colors of the Armenian landscape; Hagop Kojoyan’s delicate hues evoke Symbolist reveries. I think Medzarents intended for us to read his poems and then make the mental effort to see his colors, which are also sounds and emotions, within our own minds. Every viewer of a painting sees it differently because his mind is taking in the picture and organizing it in a way that is particular to him. In a sense he is participating in the creative process of the artist, supplying colors. Even more so, the reader of a poem with a rich chromatic lexicon is making a multi-faceted creative effort, provided that he is attentive, perceptive, and engaged. The neuroscientist Eric Kandel has argued, following the pioneering art historians of twentieth-century Vienna who employed the findings of psychology in their research, that an important part of what makes a work of art great is that it is ambiguous: it forces the viewer to consider and select various possible meanings. 

    As we have seen, the factor of ambiguity as a component of artistic mastery is eminently applicable to literature, as well as to painting. In the former case, the act of translation reveals the beholder’s share explicitly. It is a window into the laboratory of his mind. A great translator is himself an artist, and the choices he makes when rendering a particular term from one language into another enhance our perception of the poem. Bryusov in his translation of “The Haunted Palace” devotes particular attention to Poe’s palette: he chooses izumrúdnaya, “emerald,” to lend extra scintillation to Poe’s superlative, “greenest,” in line one. Poe’s “yellow, glorious, golden” becomes púrpur, zláto, “purple, gold”: two words have replaced three, with purple in its metaphorical sense of regal rendering “glorious” and adding chromatic variation to the scene. For “gold,” Bryusov has selected the archaic zláto, with its aura of storied antiquity, over modern Russian zóloto. Poe’s “pearl and ruby” are reversed to lal, zhémchug, with a marked Arabo-Persian loan-word for the ruby, lāl, standing in for the more common rubín. The color red is of unique significance in Russian art and symbolism — the common word for it, krásnyi, originally meant “beautiful.” As for Poe’s “Porphyrogene,” all Bryusov has to do is to reach into Russia’s own Byzantine heritage and retrieve the well-known Slavonic calque upon the Greek original: Porfiroródnaya. (It is a queen, not a king, because thought is a word of feminine gender in Russian.) And so the original, with all its fertile ambiguities, is there in its perfection, with only an alteration in dress.

    This discussion of a poem of Misak Medzarents has treated the complexity and depth of his vision and language; and these subjects have prompted further considerations in brief about the definition of poetry itself and even the nature of the perception of literary art. After a brief sketch of the historical setting we turned from the dark prospects that lay beyond the short life of the poet to that which endures unshadowed, the work. That work draws upon the millennial resources of the Armenian language to express intricate visions; and sometimes the verses of Misak Medzarents are suffused with a pantheistic joy. Here, in my translation, is his greatest poem, his ars poetica, an invitation to the reader to travel farther into the realm of delight.

    With what intoxication… 

    To my friend Kegham Parseghian

    With what intoxication! The trees, in the light,

    Trees in the wind and the rain,

    Shaggy-tressed trees, trees that to the heavens strain,

    And saplings green, as sea waves

    Collapsing to the bosom of the corn strewn,

    Dazed, all drink of the swelling sunburst of life.

    With what intoxication! The grass above the soil rising

    Opens to the light, amazed,

    For the moment of its life the dewdrops that are its eyes.

    With what intoxication! Flowers in the dew,

    Flowers in the light, accustomed to the hand,

    Swoon in their expectation.

    With what intoxication!

    Every field and hill upon green brow

    Bind the flowers’ multicolored wedding band. 

    With what intoxication! From the lovely plains

    And his bride, the dale, the red foot stork

    Returning home imbibes his longing’s satiation.

    With what intoxication! Blackbirds

    Drink the light and whisper it, alert,

    In orchards’ leafy fastnesses.

    With what intoxication! Snow-white jays afloat

    On high seem to swim as they perambulate upon the sky,

    Taking wing, gilded on the glowing firmament.

    With what intoxication! The turtledove her nuptial

    Bed arranges in the shady cover of a tree

    And waits for her husband in expectant passion.

    With what intoxication! The butterfly unfolds upon

    The tiny sparkling lakelet of its leaf

    And with its milky wings constructs its canopy.

    With what intoxication! On the purple plain

    To scarlet flowers hies the bee, 

    To suck upon the little female nipples, luxuriating.

    With what intoxication! The seas are blue;

    River waters, abundant; springs, brimming;

    The rill, swiftly purling; lakes, azure —

    The rill, his locks green-fronded, tossing,

    Passes intimate among the willows, blue as the moon.

    With what intoxication! Clouds shake their heads,

    The wondrous liquid massing in their breasts;

    Which like a snaking thread descends

    To slake the hot gold thirst of earth. 

    With that intoxication drink their fill

    Upon the parched soil’s universal burn

    All creatures born, all flowers grown:

    Drunk, the wave laps in embrace the perfumed tree;

    Thyme and mint and basil growing wild

    And storax, frankincense, aromas teeming

    Are embracing, drunken, every thing,

    All shapes and forms, all colors gleaming,

    All essences and elements: He

    Whose rainbow every thing reflects, returning,

    God! Who from God knows where has come to them. 

    What Flaubert Taught Agnon

    Agnon and Flaubert: the conjunction is, at first blush, altogether unlikely. Their backgrounds and the kinds of language in which each wrote could scarcely have been more different.

    Agnon, the commanding figure in Hebrew fiction in the twentieth century and the recipient of the Nobel Prize in Literature in 1966, grew up in an Orthodox Jewish home in Buczacz, Galicia, in the eastern end of the Hapsburg Empire. Yiddish was his first language, and he wrote a few stories and some poems in Yiddish when he was in his teens. He had no formal general education, but his mother read the classics of German literature with him, for German was the language of cultural prestige under the Hapsburgs, even where, as in Galicia, it was not the vernacular. In any case, the focus of his early education was on traditional Hebrew and Aramaic texts — the Bible, the Mishnah, the Talmud, Midrash, and the plethora of commentaries on all four of those hallowed works. He decisively turned from Yiddish to Hebrew because Hebrew was for him, as he wrote in one of his stories, “the language of all the generations that had gone before us and all the generations to come.” In keeping with this idea of the eternality of the language, the Hebrew that he wrote was essentially the Hebrew of the early rabbis in idiom, lexicon, and grammar, though it exhibited some elements of later strata of the language — but very few from modern Hebrew. To invoke a counter-factual analogue, it would strain the imagination to think of Flaubert writing in the French of the medieval fabliaux.

    One must add that Agnon, with scant exceptions, was repeatedly coy and evasive about his relation to European writers. He liked to present himself as a traditional Hebrew teller of tales, which sometimes he was. The often proposed link with Kafka — especially after Agnon began writing dreamlike or surrealist stories in the 1930s — is a case in point. When he was asked, in an interview at the Schocken Library in Jerusalem after he received the Nobel Prize, whether he was influenced by Kafka, he was clearly nettled. “Kafka? Kafka?” he replied. “I have barely read one book by him.” (But how many books, after all, did Kafka write?) Then he added, archly, “Of course, my wife has the complete writings of Kafka on her shelf.”

    Agnon came to Palestine toward the end of his teens, in 1908, and stayed in the port city of Jaffa. During this time, adapting to the secular Zionist milieu there, he abandoned religious observance. In 1913 he went to Germany, evidently with the intention of immersing himself in European culture as an autodidact — reading through all the books in a large library, as he extravagantly claimed to Gershom Scholem, who became his friend during his German sojourn. At some point in this period, he returned to Orthodox practice, which he would maintain until the end of his life. Scholem shrewdly observed in an interview on Israeli television after the writer’s death that art was paramount for Agnon and that he was attached to religion because it served his purposes as an artist, his outward devotion to ritual and Jewish law confirming, among other things, the finely crafted rabbinic prose in which he wrote.

    Early during his decade in Germany that ended with a return to Palestine, Agnon discovered Flaubert. On December 17, 1916, in a letter to Zalman Schocken, the department store magnate who had become his patron, he wrote: “Flaubert and everything about him touch me deeply. He is a poet who mortified himself in the tent of poetry…. It is fitting for every writer to read about him before he writes and after he writes. Then no book would be blighted.” (All the translations from the Hebrew and the French are mine.) Note that he speaks of reading about Flaubert, not of reading him. The phrase, “mortified himself in the tent of poetry,” plays on the rabbinic “mortified himself in the tent of Torah,” suggesting a certain equivalence between the two, that is, between the devotion to the sacred text and the devotion to art. In calling the French writer a “poet,” Agnon is thinking of the German Dichter, which can refer to anyone using language creatively. 

    The allusion to the French writer’s biography reflects one important way in which Flaubert was important for him. The stories that he had written in Jaffa certainly evinced a prodigious talent, but the lyric prose used for them was often excessively florid. Flaubert, with the many hundreds of draft pages that he honed down to the compact masterpiece that was Madame Bovary, showed Agnon what a serious writer needed to do. In fact, during his German years, Agnon took many of the stories that he had written during the first six years of his career and extensively pruned them — with a Flaubertian discipline, one might say. In a few instances he reduced an effusive paragraph to a single telling sentence of six or seven words. The result was beautifully concise fiction of the first order of originality.

    Yet it is inconceivable that Agnon would not have read Madame Bovary and in all likelihood Trois contes, or Three Tales, though perhaps not The Sentimental Education, arguably Flaubert’s most original book. His closest connection with Flaubert is his novel A Simple Story, published in 1935. Four years earlier, the first Hebrew translation of Madame Bovary, by the short-story writer Devorah Baron, had appeared, and after his early acquaintance with the German version Agnon surely would have at least leafed through it and probably read it all the way through. Thus, as he began work on A Simple Story, Flaubert’s novel would have been relatively fresh in his mind. The connection between the two books has been duly noticed in Hebrew criticism, which for the most part emphasizes themes and social setting. What may be more instructive in regard to the cross-fertilization between literatures is to consider what Agnon may have picked up from Flaubert about the novelistic representation of experience. 

    A Simple Story, set in the first decade of the twentieth century in a town much like Buczacz, is pre-eminently a novel of bourgeois life. It is Agnon’s most perfectly wrought novel, though in regard to breadth of concerns and to formal innovation not necessarily his greatest. Although the protagonist, Hirshl Hurvitz, and his parents, Baruch Meir and Tsirl, are Orthodox Jews (we see him, for example, attending daily worship), this is no more than an external expression of their culture, for their lives are through-and-through bourgeois. The Hurvitzes are the owners of a general store, and the accumulation of wealth is their main preoccupation.

    At the beginning of the book, a relative named Bluma Nacht (“Night Flower”) shows up in their home to throw herself on their mercies after losing both her parents. The mercies prove to be far from tender as the domineering Tsirl, a habitual driver of hard bargains, agrees to allow Bluma to stay as a household servant without salary, only room and board provided. Tsirl immediately casts doubt on Bluma’s competence for the job, but it quickly emerges that, having maintained her parents’ home while her mother was failing, she is a skilled cook and baker and adept at keeping a household clean and orderly. At the time Hirshl is just sixteen, and Bluma is presumably about the same age. The two are drawn to each other in an attraction that is powerfully erotic, however discreetly it is intimated by Agnon. When it becomes clear to Tsirl that a serious relationship between them threatens to emerge, she banishes Bluma from the house, for she will not countenance her son, an only child, marrying a penniless relative.

    Instead Tsirl arranges a marriage for him with the daughter of a wealthy Jewish farmer. The passive Hirshl submits, but he continues to be obsessed with Bluma. After the wedding he is desperately unhappy with his wife, recoiling from the smell of her perfume, feeling the words that she speaks to him as sharp nails being driven into his flesh. The birth of a son — as it turns out, the baby is sickly — does not improve matters. Hirshl begins to pay nightly visits to the house in which Bluma is now living. He stands outside in all weathers, dolefully looking up at the window where he imagines the woman he loves is standing. All this compulsive behavior culminates in a severe psychotic breakdown, and his father takes him to the city of Lemberg (Lviv today) for a cure in a residential facility with an eccentric psychiatrist who patiently edges him back to sanity by telling him stories. At the end Hirshl is reconciled with his wife — on the surface, happily, though Agnon hints in the way the conclusion is framed that this is not really a happy ending.

    The plot, of course, is quite different from the plot of Madame Bovary, but it shares with the French novel a sense of the pervasive oppressiveness of bourgeois materialism joined with an unhappy marriage and the allure of a romantic connection — for Hirshl, manifestly impossible to realize — outside of marriage. The great nineteenth-century novels are often about lives that end in trainwrecks, and A Simple Story conforms to this pattern of the genre in its realist phase, using an oblique, understated version of the concluding disaster.

    Agnon obviously did not need Flaubert to fashion a story of this sort. The real-life counterparts to the Hurvitz family and other characters in the book were observed by him when he was growing up in Buczacz, and they would have been sufficient to fuel his imagination. What he could have picked up from Flaubert was certain clues for how to articulate the novelistic representation of such a world.

    To begin with, Flaubert exhibited a sure sense of how to pull together the disparate elements of a long narrative through the deployment of recurring motifs. The color blue, for example, is an important thread running through his novel. When Charles first sees Emma on a sunny summer day, she is holding a blue parasol that casts a blueish (bleuâtre) shade on her face. She has blue dresses; her extravagant romantic fantasies unfold against imagined blue horizons. Agnon is quite likely to have noticed the recurring motifs in Flaubert, though he also could have seen them in Thomas Mann. It is a device he employed in most of his novels and in many of his stories. He makes ample use of it in A Simple Story through recurrences of food, coins, cigarettes, and many other motifs.

    One of the most significant of these is the blind singing beggar. That figure may be directly taken from Madame Bovary: Emma, we recall, repeatedly hears the singing of a blind beggar on her departures for her assignations with Léon, her second lover. The moment before she dies, she again hears his song coming from outside her house. Details in literature sometimes draw on multiple sources, and as the Israeli critic Nitza Ben-Dov has persuasively proposed, there is a blind singing beggar in one of the tales of the early nineteenth-century Hasidic master Rabbi Nahman of Bratslav that Agnon surely would have remembered. In A Simple Story the blind beggar is associated with the world of song and romantic love that is the antithesis of all the mercantile values that Tsirl promotes, and when Hirshl encounters the singer in rags at the end of the novel, he gets rid of him by tossing to him an unusually large coin — clearly, the coin of his mother’s realm. Emma’s tragedy is underscored when she perishes from the arsenic that she has swallowed listening in anguish to the beggar’s song that has marked her departures for the love affair ending in disaster; Hirshl’s more hidden tragedy is to banish with a coin the singer and the dream of love evoked by the song, subsiding into a materialist bourgeois life devoid of melody.

    Flaubert is often identified as the writer who perfected the technique of le style indirect libre, uninformatively referred to in English as free indirect style. This simply means the narrator’s conveying to us the unspoken speech of the character in the third person, with tenses switched from present (which the character would use to herself) to past. Dorrit Cohn, in her luminous book Transparent Minds, has a better term for it: she calls it narrated monologue, but since academics prefer rebarbative language, her apt designation never caught on. Agnon made abundant use of free indirect style in The Hill of Sand, a brilliant novella that he created through extensive revisions of an earlier story after he came to Germany. The young protagonist is an aspiring poet constantly pulling back from engagement in eros, and Agnon gives a Freudian spin to free indirect style by exposing through the character’s unspoken words sexual desires that his conscious mind represses. Madame Bovary in fact employs le style indirect libre only from time to time, but always with striking effectiveness, and the same is true of A Simple Story. Instead Flaubert utilizes a range of different procedures for representing consciousness, and in this, too, Agnon follows suit.

    Let us look at a few brief examples of free indirect style from both novels. After her first experience of adulterous love with the libertine aristocrat Rodolphe, Emma returns to her home, and “seeing herself in the mirror, she was amazed by her face. Never had she had eyes so big, so black, nor of such depth. Something subtle spread through her whole being and transfigured it.” Everything here about the bigness and blackness and depth of the eyes, her transfiguration, is of course Emma talking to herself, and these sentences are a study in self-deception. What Emma sees in the mirror is not her actual image, but her image as she imagines it transfigured by her experience of romantic love. Flaubert then moves from narrated monologue to monologue proper as Emma calls out, “I have a lover! I have a lover!” To this the narrator adds his own analytic comment about Emma’s train of thought, which is devastating through the force of the simile it uses (Agnon, as we shall see, does this, too): “relishing the idea as if at another puberty that had befallen her.”

    Now consider two instances of this technique in A Simple Story, one involving Tsirl and the other her son. Early in the novel, Tsirl explains to herself why it is legitimate for her to employ Bluma without salary.

    Seemingly, a denial of payment was entailed in this, but whoever looks into the heart of the matter sees that Tsirl was right, for when Bluma comes to marry, will Tsirl run around to charities and say, “Provide a dowry for my relative”? No, she herself will give according to the years she has served. Besides, what payment could Bluma expect? Why, she had never been a servant to others, and so she has been learning about housework from Tsirl.

    From beginning to end, the passage is an exercise in self-justification. The expression “whoever looks into the heart of the matter” translates as whoever would follow Tsirl’s self-serving train of thought. Tsirl casts herself here as a heroine of generosity in the prospect of eventually paying Bluma the amount that she would have earned through years of labor. The remark, moreover, about her learning housework from Tsirl is a blatant lie to herself. She is in no way involved in housework and knows precious little about how to do it, whereas Bluma arrives at her home thoroughly experienced and skilled in managing a home.

    There is a linguistic complication in this and related passages. Tsirl, of course, is thinking in Yiddish, which Agnon conveys in Hebrew, at most occasionally giving a Yiddish turn to the phrasing, and Tsirl would scarcely have known any Hebrew. The phrase “according to the years she has served” approximately recalls the formulation of the law in the Torah pertaining to the so-called Hebrew slave, actually an indentured servant for a period of seven years. Tsirl is unlikely to have been familiar with the Hebrew of this verse, and so we have a kind of surreptitious intervention of the narrator who is mediating Tsirl’s unspoken speech in Hebrew, an intervention that suggests Tsirl is disposed to regard Bluma as a virtual slave. This biblical echo is a small illustration of how Agnon Hebraizes the Flaubertian technique as he adopts it. 

    And here is a fairly straightforward deployment of free indirect discourse for Hirshl. The baby boy he has begotten, as I have noted, is sickly. Hirshl, contemplating the ailing newborn, expresses his concern in the following manner: “If Bluma were caring for him, he would regain his strength. And so Hirshl would imagine himself standing on one side of his son and Bluma on the other and his son recovering. God in heaven knows that Hirshl’s sole intention was for the sake of his son.”

    The first sentence is free indirect discourse. The second sentence is the narrator’s report of Hirshl’s imagining. Such switches back and forth in modes of representing thought are a common feature of the technique, as we saw in Flaubert’s move from narrated monologue to actual monologue (“I have a lover!”) to the narrator’s summarizing judgment of what the character feels. Both these sentences here reflect the lovestruck protagonist’s perception, or fantasy, of Bluma as a healer, a comforter, an all-around bestower of blessings. The third sentence slides back into free indirect discourse. “God in heaven knows” is a virtual refrain in this novel, a formula invoked by characters who are actually not paying much attention to God in heaven — who are, one might say, taking His name in vain. Hirshl has to insist that God would confirm the purity of his motives because he guiltily senses that they are really not about concern for the child.

    Free indirect discourse is a technique that is particularly effective for exposing self-deception, as it follows the characters talking to themselves while we as readers can see that what they are saying to themselves patently lacks credibility. In the passages I have reviewed, this is clear in Emma’s telling herself she has been visibly transfigured through adulterous sex, in Tsirl’s congratulating herself on her magnanimity toward her poor relative, and in Hirshl’s avowing to himself that his sole concern was to imagine his son getting better while we understand that his imagining is all about Bluma, not about his son. In all these instances, it is evident that free indirect discourse is a beautifully efficient and effective instrument of characterization. We all sometimes tell lies to ourselves for the sake of our own self-regard or to avoid discomfiting thoughts. The novel, with its commitment as a genre to exploring the complexities and contradictions of character, often takes advantage of this technique that deftly achieves precisely the end of representing the ambiguities and the little hypocrisies with which people live. In this regard, free indirect discourse works especially well for characters lacking self-knowledge, which is obviously the case for Emma, for Tsirl, and for Hirshl.

    Another way of representing consciousness, in which Flaubert may have been a pioneer, is the visual rather than the verbal evocation of what is going on in the character’s head. Here is Emma, running through a stand of trees, utterly distraught because she has just been abruptly cast aside by Rodolphe: “It seemed to her suddenly that fire-colored globes were bursting in the air like incandescent balls, flattening out and turning, turning, to melt into the snow, among the branches of the trees. In the center of each of them, Rodolphe’s face appeared.” By now, I suppose that passages like this have come to seem familiar in serious novels, but in the middle of the nineteenth century, when this was written, it was innovative. There is something going on here inside the character’s head that is not words, neither the narrator’s summary of what someone is thinking nor words that the character is saying to herself, but rather a visual sequence of images spinning through that highly distraught head, the evocation of a hallucination.

    There is little in the evolution of literature that is entirely new. The technique, for example, of unifying extended stretches of narrative through recurring motifs that we observed in both writers was vividly evident in the Hebrew Bible long before Flaubert invented Emma’s blue horizons and Agnon the blind beggar and his song. Think of the significant recurrence of garments, especially ones used for deception, in the Joseph story, or of the reiterated presence of cloaks in the Samuel narrative, from the “little cloak” that his mother would make him every year when he was a priest’s acolyte as a young boy to his cloak torn by Saul and turned into a dire symbol by Samuel in his wrathful maturity. Even le style indirect libre, so often associated with Flaubert, can be found as far back as in Madame de Lafayette’s La Princesse de Clèves, written in the seventeenth century.

    What we see, then, in Flaubert’s visual rendering of hallucinatory experience is not an absolute first but a technique with precursors — there are also occasional anticipations of it in Dickens, for example, when a character is drunk or otherwise mentally confused. Flaubert works up this narrative procedure to a fine burnish — those fire-colored globes that seem like incandescent balls, each showing Rodolphe’s face to Emma, which is thus the perfect visual representation of a woman’s mind maddened by disappointment in love. A recurring motif, moreover, sneaks its way in here: the “turning, turning” is a precise echo of the “turning, turning” of the ball at the La Vaubyessard château that intoxicates and dizzies Emma as we move here from a turning that is a high to a turning that is a low. It also points to the constant turning of the carpenter’s lathe in the village where Emma feels trapped. In regard to the visual representation of consciousness, then, it is not that the thing has never been done, but rather that it has never been done so well.

    As a consummate literary craftsman in his own right, Agnon would very likely have noticed such moments as he read Madame Bovary, first in German and probably, much later, in Hebrew. His early fiction does not show instances of this sort of visual representation of the movement of the mind. Let me offer two examples of it in A Simple Story. After Hirshl’s engagement to Minna, the rich farmer’s daughter, has been announced, he attends a Hanukkah party, secular in character, where people are playing cards. As he sits at a table holding his hand of cards while looking at the players around him, this is what runs through his head: “The cards leaped from the hands of the card-players with a strange rapidity, until the hands could not be seen among them. Finally, the cards, too, disappeared, and black and red faces could be seen in the room, dancing before him and mocking.”

    This is not an extreme experience of derangement, like that of Emma running through the woods, distraught, desperate, panicked, after her love affair has suddenly collapsed. In fact, it is the sort of thing many of us are likely to have experienced at one time or another. Say you are persuaded by a friend to attend a political debate on a topic that is not of compelling interest to you, or perhaps you don’t like such debates in general because you feel they generate more heat than illumination. As the voices of the debaters drone on, your mind begins to drift away from deciphering the meaning of the words, and everything around you dissolves into a cacophony of sounds that might suggest to you a flock of crows emitting their raucous cries. Something of this sort happens to Hirshl in this scene. He does not really want to be at this party. He has not been comfortable with the announcement of his engagement. In fact, he does not really want to marry Minna, but he sees it as a fate imposed on him by his parents that he cannot escape. And so at the card table his mind drifts off from the game, begins to see the cards dealt assuming a phantasmagoric velocity and then detaching themselves from the hands of the players.

    Finally the red and black faces on the cards take on a hallucinatory autonomy, staring at him in mockery — in a way, every card is now a joker. Perhaps their mocking gaze reflects Hirshl’s sense that he has done something embarrassing, even shameful, in agreeing to marry Minna. At this point in the novel Hirshl still retains his sanity, but the transformation of the cards into ominously derisive faces is a striking adumbration of the moment in which he will flip into insanity, running through the forest making the croaking sounds of a frog.

    The way-station to that moment is his nightly vigils outside the house where Bluma is staying. Here, too, a visual evocation of consciousness is brought into play: “A veil had fallen over all the world and you can’t even see yourself. But the image of Bluma broke through and rose before you as on the day she caressed your head when you came into her room and she fled and came back.” The veil that has fallen over the world would seem to be both a result of night and fog and the anguished Hirshl’s mental confusion. Through this murky haze, Bluma’s image bursts forth and rises like a sun. That image of rising triggers a memory of the moment that he most cherishes in their unconsummated love, when she caressed his head as he sat on her bed, the only time she actually caressed him. Agnon uses the second person singular in order to bring us more directly into Hirshl’s mind because this is him addressing himself, a variant form of narrated monologue.

    The final sentence of this brief passage, “Hirshl rested his head on the handles of the lock,” is a striking instance of how Agnon introduces a Hebraizing element in whatever he may have learned from Flaubert. This is the narrator’s report, so we are no longer inside Hirshl’s head. The lock is the door-lock, indicating that Hirshl is right up against the entry to the house, not looking up at it from a certain distance. The phrase “on the handles of the lock” is a direct quotation of the Song of Songs 5:5. It is one of the most poignant moments in that Biblical book and is beautifully apt for Hirshl’s plight. The beloved in that chapter has teasingly put off her lover’s plea to unlock the door. When she then goes to open it, her hands dripping liquid myrrh onto the lock, for she has perfumed herself for love, she finds that he has gone, and in desperation she runs out into the dark night streets to try to find him. The resonance between this lover’s despair and the anguish of Agnon’s protagonist is clear — the night, the tears, the seemingly lost lover (though for Hirshl the loss will be unending). As has often been observed, literary Hebrew is a constant echo-chamber, and at strategic junctures Agnon deftly calls up a resonant echo. However Flaubertian he may sometimes seem, this is not a resource that Flaubert could have deployed in his French.

    Neither novel restricts its treatment of character and theme to the representation of consciousness. Flaubert is particularly good at introducing concise and arresting judgments of his characters, usually through the vehicle of a striking simile. Early in the marriage of Emma and Charles, he is blissfully happy, sexually happy (though she probably is not), and happy in his illusions about the splendid woman he now has as his wife. This is how Flaubert summarizes the tenor of his contentment: “he went out, chewing on his happiness, like those who still masticate, after dinner, the truffles they are digesting.” And here is Emma’s experience during this same period: “As for her, her life was as cold as an attic with a skylight facing north, and boredom, silent spider, wove its web in the shadows into the corners of her heart.”

    A more complicated simile is occasioned by Rodolphe contemplating his mistress’s protestations of love: “human speech is like a cracked cauldron on which we bang melodies to make bears dance when one would want to reach the stars.” In this instance, Flaubert segues from Rodolphe’s cynical disbelief in any woman’s vows of love to a general reflection, of which Rodolphe would scarcely have been capable, of the painful inadequacy of all human speech. This simile of banging on a cracked cauldron while wanting to reach the stars has been justly celebrated.

    Two instances of the narrator’s incisive judgment of character through simile, both from Hirshl’s desperate nights outside the house where Bluma is staying, show the connection with the Flaubertian procedure. “As soon as he reached that place, he hid so that no one would see him, like a drunk who pours himself a drink and fears they will come and take it away from him.” The aptness of the simile is evident: it suggests that Hirshl’s attachment to Bluma is a hopeless and damaging addiction. My second example is another case of Hebraizing a technique, for the simile is drawn from traditional Jewish practice: “Like a man standing on the night of Tisha B’Av when the heavens open and he raises his eyes upward to plead for mercy, so Hirshl stood looking up at Bluma’s window. What did Hirshl want? Hirshl wanted Bluma to open her window and see him.” Tisha B’Av, the day of fasting in commemoration of the destruction of the two ancient temples and other calamities of Jewish history, occurs in midsummer. The folk belief is that in the middle of the night on Tisha B’Av the heavens open, and at that moment pleas for mercy may rise unimpeded to God. The point of the simile is precisely its inappropriateness. Bluma up above in her servant’s bedroom has displaced “God in his heaven.” That displacement can cut two ways: either Hirshl’s desperate longing for even a glance from Bluma is a kind of idolatry, or the overwhelming intensity of his longing for the woman he loves can be conveyed only by comparing it to a wretched person’s plea for mercy from God.

    What does this comparative scrutiny of two writers from utterly different literary traditions tell us about the broader issue of literary influence? In recent decades, scholars have justly tended to avoid the term “influence” as too crude and misleading a representation of what happens when one writer interacts with another. The term is not inappropriate, I think, when a workmanlike writer reads an original one: the stylistic effect of Hemingway’s innovative prose on American hardboiled writers can properly be characterized as an influence. But something different happens when a great writer encounters on the page a great writer who came before him: it is a kind of alchemy. The earlier writer strikes a spark in the later one, gives him certain ideas about how things might be done, which he will then proceed to do in his own way. Agnon read Flaubert at a relatively early stage in his career — “before he writes and after he writes,” as he worded it in his letter to Zalman Schocken. Then, I imagine, he would have said to himself something like this: “That is really good. I could in some way use that. But how would I handle it? How could I make it work in the style and the invented world in which I have been forging my own literary way?”

    When a writer of the first order of originality discovers another writer of manifest mastery, something more deeply interesting than influence occurs. Agnon would have been Agnon in most respects if he had never read a word of Flaubert, but it seems safe to say that as he was gaining artistic maturity, Flaubert helped him, if only modestly, to become the writer whom he aspired to be.

    In A Simple Story, Agnon was writing, a little anachronistically, a nineteenth-century realist novel, for which Madame Bovary could serve as a useful model. Yet he was too restless an artist to remain content with this mode of fiction. During this same period in the 1930s, as I have noted, he had begun to produce radically experimental short stories. In the 1940s and 1950s, he would explore new and anti-realist directions. In Only Yesterday, in 1945, he devoted numerous chapters to the interior monologue of a dog named Balak, who proves to be the book’s most philosophically reflective character, and its most engaging one.

    Toward the end of the 1940s and into the 1950s, Agnon would fashion two remarkable dreamlike novellas replete with symbolism that were unlike anything he had written before. His last, uncompleted novel, Shira, which he was still working on at the time of his death in 1970, is a fascinating fusion of the symbolism of the novellas — leprosy is the most prominent symbol — with tormented sexuality and resonant reflections on art and truth. So Flaubert cannot be regarded as the dominant or decisive force in Agnon’s career. But still the French master made a difference. He had spoken profoundly to the Hebrew writer, and played a role in his evolution as an original artist at a moment when he was just coming into his own.

    What’s So Funny?

    If you read this essay you will not become a better person. I will not delineate the most progressive stance that you could take on a recent development in politics or culture, taking into account the various relevant social justice considerations and concluding with a rallying cry. And neither will you be presented with a set of arguments advancing the liberal positions that you already support, but which a less well-informed, or perhaps simply more selfish, person theoretically would not. I will not invite you to feel a sense of personal satisfaction, maybe laced with anguish, about your own right-mindedness compared to that theoretical other person.

    At the same time I will not hash out a supposedly controversial, but in fact well-trodden, stance on an element of progressive culture. Likewise I will not lament the overreaches of political correctness through a series of exaggerated or otherwise dubious examples. You will not be made to feel risqué and rebellious for holding a garden-variety regressive view which, particularly considering the influence of both demographic factors and self-interest, it is perfectly predictable that you would hold. 

    This will not be an essay in which the likes of Foucault and Kant are quoted liberally, to remind you of my steely academic credentials, because in such writing that is where my authority, and hence your interest in continuing to read, derives from. The fact is I don’t have steely academic credentials. No PhD or Ivy League anything or Oxford or Cambridge. Actually I don’t even have an undergraduate degree in this essay’s subject. This is not a piece of writing in which I will use credentialism to flatter your sense of yourself as a highbrow intellectual type. If that is the kind of thing you’re into. 

    There will be no disclosure of personal trauma, or of my dramatic emotional response to an occurrence which, to a less emotionally wrought observer, may not seem really so bad. I will not describe myself as being wracked with sobs or petrified or the like in order to make you feel either titillated, or gratified as the kind of person who tends to present themselves similarly, or mildly heroic, or as the kind of stoical person who does not, but tends to sympathize greatly with people who do. There will not be the sense that by reading this and buying into my narrative you are a Good Person, siding with a Good Person who had something bad done to them by a Bad Person. I am not a good person. Well, I am sometimes. But I am more than happy to admit that sometimes (often) I am not.

    I do have one thing to offer you if you keep reading, though: I am pretty confident I will make you laugh. This is not a big promise, maybe. But look around, there’s not much of it on offer elsewhere lately. 

    The offbeat, weird humor in the novels of Percival Everett, Gwendoline Riley, Nicole Flattery, Monica Heisey, Sally Rooney (particularly in her dialogue) and Joshua Cohen is an antidote to what Parul Sehgal identified last year as the dominance of the trauma plot in fiction, and its accompanying dour tone. But elsewhere, solemn accounts of distressing events have come to be seen as the quick ticket to producing novels which possess gravity and emotional heft, even now that this mode feels rote. As Sehgal put it: “The invocation of trauma promises access to some well-guarded bloody chamber; increasingly, though, we feel as if we have entered a rather generic motel room, with all the signs of heavy turnover.”

    In the comment pages the same endlessly reiterated positions on culture war topics dominate. Dark clouds creep closer on every front (the pace of technological development, gender relations, race relations, the books taught in schools, mental health, attention spans, standardized testing and so on) as the storm of an impending apocalypse rages. AI is one topic getting the “four horsemen of the apocalypse” treatment a lot lately. Although actually it also had a moment in the spotlight not too long ago, in 2019, when columnists were saying that if GPT-2 (a text generator prone to writing chaotic, nonsense sentences with a bent towards the discussion topics popular on Reddit in 2017) were released, we would be “hurtling towards the cheering apocalypse,” but it didn’t happen that year either. The world of “personal essay” writing may no longer be orientated solely around courting mindless clicks with lurid and hyper-salacious tales of incest or embarrassing trips to the ER. But now it’s all dramatic retellings of lackluster boyfriends, and models complaining about the rampant excesses of capitalism they witnessed among rich people on all-expenses-paid holidays. Self-aware self-mockery, humor as a means to deflate tricky subject matter or give a sense of perspective, and dry asides, are all in short supply.

    Twitter and TikTok are forums which ostensibly traffic in the exchange of jokes. But I would venture that the nature of content on both platforms, and the enjoyment of it, is more related to the comfort found in the repetition and recognition of certain tropes than in genuinely engendering laughter. Besides, part of what makes Twitter funny is how po-faced and resolutely determined not to get the joke many who use it are — the fact that you can earnestly be called a fascist for saying you prefer not to use a dishwasher (that happened to me) and so on. Actually one of the funniest things about social media in general is the sense that you are frequently receiving solemn moral instruction from some of the absolute worst, most craven and cynical people in the world.

    Technically there has been a glut of a certain kind of ostensibly funny big budget production on cinema and television screens lately. Triangle of Sadness, The Menu, The White Lotus, Succession, and Glass Onion all use the grammar of satire (if not really the language, thanks to an often somewhat chaotic sense of message or target) in skits that purport to send up the habits and excesses of the obscenely wealthy. But even here, where humor is supposedly a selling point, it is devalued by being teamed with an easy, popular, “fist pump”-style political message, as if to make an argument for the worthiness of the project. Because simply making something funny, with an interesting comment on, say, different types of character — that is apparently no longer enough by itself. The idea that mass market culture could be, on its surface at least, apolitical is deeply unfashionable at present — even if this could yield work with something more interesting to say about people or ideas, or a more sophisticated political message.

    And it’s no wonder these films argue for their existence by telegraphing an easily digestible (frankly often muddled, once you scratch the surface) political message. This is a response to how many people engage with cultural products now. Garth Greenwell wrote recently in The Yale Review about his current cohort of fiction students: “When I work with students now, graduate or undergraduate, their primary mode of engagement with a text often seems to be a particular kind of moral judgment, as though before they can see anything else in stories or poems they have to sort them into piles of the righteous and the problematic.” I would take this further and say that the frantic pace at which the adjudication must be made of whether something is righteous or problematic makes confusion over this judgment not only likely, but endemic. In this way a glib, digestible, superficial message which masks a darker underlying sensibility will see a work lauded for its progressive credentials, in a manner I find quite troubling.

    In Triangle of Sadness, for example, a film about a luxury cruise that ends in shipwreck, slapstick displays telegraphed a cozy, by this stage very familiar “Eat the Rich” message, which the film was celebrated for. Rich people were thrown wildly around the ship, vomiting and having diarrhea everywhere, when the boat hit rough seas; a rich elderly pair of arms manufacturers were blown up by one of their own bombs; rich people demanded that the beleaguered staff all queue up to go down a wet slide; and so on. (Women Talking is another example of a film in which a heavily signposted progressivism masks a borderline reactionary sensibility, but it is not trying to be funny.) But about halfway through, when the ship sank and an assortment of passengers were marooned on an island, the message gave way to a nihilistic parable about the inherent awfulness of human nature. One of the ship’s maids, the only person with any practical skills left standing, was the most powerful among them and, of course, set about merrily abusing her new position. Anyone with power would behave badly, the message went. It was celebrated for its anti-capitalist sentiment, but it didn’t exactly make an argument for redistribution.

    This point about the obviousness versus the underlying political sentiment in popular art may seem unrelated to a wider thesis about the humorless state of our culture at present, but it relates to a general cultural retreat from subtlety, unexpectedness, and risk, which I think underpins this tendency. Humor, you realize when you start thinking about it in any depth, is a very mysterious thing. “No one knows why we laugh,” John Carey wrote in an introduction to Freud’s The Joke and Its Relation to the Unconscious. It also isn’t clear why some people are funny and others are not. “The joke work is not at everyone’s command,” as Freud put it, in wording which demonstrates the immense comedic value to be found in the inherent fustiness of a book which tries to explain what jokes mean to us.

    There is no formula which explains why a certain thing should be funny, or guarantees that it will be funny. Actually the lack of formula seems a definitional component of humor, since so much of what is funny depends on an element of surprise or unexpectedness. As it does on mysteriousness: the more you spell out a joke, or explain why something is funny, the less so it becomes. And this sits directly at odds with our risk-averse culture, where everything must be obvious, hyper-caveated, and over-explained. And the formula of any successful cultural product must be repeated until it feels tired and rote. If the first of the “funny rich people” films was funny, is the seventh? Or the eighth? There is a roteness to so many things now. A rote way to write about trauma, or romantic relationships, to deal with this or that uncomfortable interaction, to talk about progressive politics, to be unprogressive. But predictability and explanation are the mortal enemies of humor. I recently read something that seems to sum up so much of what feels missing from our culture: Jeff Bewkes, a former HBO boss, recalled telling the creator of Sex and the City: “I don’t want ratings. I want a better show…stop explaining jokes.” I sent it to a few friends who work as novelists and TV writers. The replies were all a version of: “Better times!”

    Risk aversion and obviousness now dominate perceptions of what constitutes substance in art. We have become unsubtle in the way we understand emotional currency. Seriousness is considered the terrain of mawkish sentimentality, while humor is hokey and cozy. The two don’t mix; one doesn’t mask the other, as is often the case in life. This seems like a depressing thing to have to argue against. So I will share a poem by René Ricard instead, which, in a few unlyrical lines, renders an almost disquieting quantity and depth of emotional states, and an overall truthfulness that no amount of melodramatic adjectives about bad boyfriends could ever evoke:

    I am young 

    And I am beautiful 

    And I will fuck you 

    Over just like everybody else 

    I have noticed this tendency towards risk aversion in interpersonal interactions, too. This is anecdotal and, depending on your circles, you may not have noticed the same. But I have become interested in the way in which people in their twenties and early thirties use the word “canceled” colloquially, among other related observations. I am aware that it has become hard to talk seriously about anything related to the idea of canceling because so much of the discourse around it is so shrill and, besides that, because the discourse is the equivalent of walking across a room with a floor coated entirely in mousetraps. But I think there is something important in what I have noticed so I want to try anyway. 

    The instances I am thinking of relate mostly to young men. Not because I think this is a trend among young men only, but because I am a young woman and recently I have been trying to speak to young men about their views on masculinity, which can be a tricky topic for men to discuss at present (and at other times too, for different reasons). And maybe especially tricky to discuss with a woman. I began instigating these conversations for a mixture of professional and personal reasons. I am often drawn to writing about contemporary male-female relations, the muddled state of communication between men and women at present, and certain flattening narratives concerning stereotypes of female personality traits, and thus the form that heterosexual relationships tend to take. The more I have written about these things, the more I have found myself seeking male perspectives to better inform my understanding of all this. In a personal capacity, too, I have detected a certain mood among many of the young men with whom I interact — a mood that I would define as a combination of listlessness, confusion, and despondency. I find this sad and a little unsettling so I am trying to understand it better. (I wouldn’t say that life is so easy for young women either, but I don’t really need to ask around to find out what that’s like.)

    Anyway, this has resulted in me instigating a lot of conversations that can be pretty uncomfortable. They tend to start out as fairly, even very, awkward. Also a lot of the men think that I am hitting on them (we tell ourselves stories in order to live), which then begets a whole other minefield of potential awkwardness and misunderstandings. The best response I have received so far was from a man who told me that, because he has a girlfriend, we would have to meet during the workday, by his office. I gathered that our meeting was to be clandestine. And my assumption was verified when we did meet. We were walking around and someone he knew happened to pass by and say hello. He sort of shrieked and elbowed me into a nearby alley. 

    When we get past all that, though, and we can talk, I learn a lot. I have noticed that the idea of being “canceled” comes up often, as a casual reference rather than an in-depth discussion of the phenomenon. A representative quotation: “It’s nice to talk about this stuff. You can’t really a lot of the time. People will think you’re into Andrew Tate. You’d be canceled.” (Andrew Tate is a misogynistic internet celebrity from Britain who has amassed a large international audience preaching hyper-masculinity, and has also been arrested for human trafficking.) I think you could use the fact that people are talking in this way to argue for or against the existence, the effectiveness, or the righteousness of cancellation as a “thing.” Or actually, for or against any of the endless strands of argument that surround this topic. But I’m more interested in thinking about what exactly these men are using this word to mean. There is not a widely agreed definition of what being “canceled” consists of. But let’s say that a functional definition is this: a social media (or otherwise public-facing) campaign to accuse a person of holding unacceptable views (rightly or wrongly) which has successfully stripped them of certain (but maybe not all) professional opportunities and seen them ostracized from certain (but, again, maybe not all) social circles. 

    Right now you are thinking about everything that is wrong and terrible about my definition and how you could write a better one instead. Never mind; my real point is that I don’t think that this is what these men are using this word to mean. And I don’t think they are using it to mean a less or more severe version of this either. For a start, because I’m not really sure how they could be canceled. These are broadly not people who must maintain a social media presence for work (with the exception of a few influencers among my sample, who tend to have a far more sophisticated understanding of how to manipulate these dynamics anyway). They mostly don’t use social media much at all, other than as passive consumers. They don’t inhabit particularly progressive social circles or professional worlds where their views are really anyone’s concern. Besides, nothing we end up discussing is objectionable by a reasonable person’s definition. I can’t see that there would be even minor professional consequences for a carpenter who was known to have gentle disagreements with the tenets of liberal feminism, for example. 

    So what do they mean when they say “canceled”? Well, to go back to the earlier quote: “It’s nice to talk about this stuff. You can’t really a lot of the time. People will think you’re into Andrew Tate. You’d be canceled.” To me, the interesting thing is that the fear here is not that he would be discovered to secretly support Andrew Tate, but that he might accidentally say something which implied that he did. If I think about the possible bad consequences that could follow from that, they run something like this: whoever he is talking to might make an accusation; he would probably try to explain what he meant and perhaps struggle to do so (some people are better at semantic precision than others). By this stage he would have lost control of the conversation. Then other people might be told and a misrepresentation of himself that he would find difficult to correct may spread. I’m not sure this would have professional consequences or the like. But maybe the idea of being misrepresented or misunderstood when trying to talk about something you feel deeply about is unpleasant enough by itself.

    I may be wrong. But I can’t see what else they would mean. And, if it is true that these men (and other people, I think) are scared to talk about how they really feel in case they are misunderstood or misrepresented, it may not matter. But I think it does. There are bigger problems in the world, of course. But I am a writer, not a social worker, a politician, or a doctor; and I insist that there is enormous value in trying to understand how other people really feel. As a person, too, I am starting to wonder if the mood of listlessness, confusion, and despondency that I have noticed is connected to this fear — to this jittery nervousness about the consequences of speaking honestly with other people. 

    This may seem like a digression from my point about humor, but I believe that it is a fundamental component of how we communicate with each other at present. Hence the blandness of the cultural products which are made and put in front of us. There is a base level of anxiety about misrepresentations and misunderstandings which might, at any point, send everything sliding down into a ravine. In such a climate nobody wants to take a risk.

    No one knows why we laugh, but when I was thinking about how humor works in a social setting I kept coming back to the idea of trust. When you make a joke, you take the risk that the other people you are talking to may not get it. You would look stupid if they didn’t get it, and everyone really hates to look stupid. So you need to feel there is at least a base level of good faith in the group. And when they laugh, their laughter demonstrates that you share an affinity of sorts, and there is a slightly strengthened bond between you, since you have all experienced something enjoyable together. The best outcome is that you trusted people and you were rewarded for doing so. 

    Something else I have noticed about humor, throughout my life, is that it can make hard things easier to talk about. I am from Belfast, born just before the Troubles ended, so I suppose I have close relationships with more people who experience untreated PTSD than maybe the average Western millennial does. Contrary to received wisdom about how trauma manifests, my observations have been that humor is a large part of the response. I notice it all the time. Just the other week, my mother told me that she had seen an interview in a magazine with a celebrity who talked about her memories of growing up, shopping for second-hand furniture in boutiques. “When I was growing up we’d have to do that every week because of the British army, but it was hardly boutiques,” she said. “Actually no, that’s not true,” she went on, laughing. “It was closer to once a month.” And actually it was very funny, although it loses something in the translation.

    There is something about the nature of remarks such as these, which invite someone else to laugh along with you at what is essentially your hardship, that has long fascinated me. When someone frames their suffering through laughter, it seems to communicate a mysterious, hard to describe, quality of human nature. A resilience, a sense of lightness to be found in almost any circumstance, and a generosity too. A comment such as this one seems to say: I have seen it, the thing which you fear, and here I still am. It wasn’t really all that bad. 

    The grittiness, the hardiness, the sort of blemished quality that I perceive, from my experiences, as fundamental to people can seem at odds with a view of human nature which is softer and more straightforwardly sentimental. But I sometimes wonder if my impression, in which we are all sort of down in the mud but together, wading haphazardly towards each other, is really the mushier of the two. I wonder, too, if there is a deeper kind of sentimentality in the generosity inherent to communicating traumatic experiences this way, a way that makes them easier to connect with, if not to fully understand.

    The funniest person I know is my grandmother, who has lived a strange and hard life, by the standards of today. She tells the most entertaining, the most hilarious, stories about bleak situations and events that I find extremely hard to imagine. I don’t know how we would have those conversations if they had to be told in grave terms. I don’t know if she would want to have them, either. And she sort of sees everything in life this way too. Discussing a friend of hers who has had a lackluster thirty-year marriage, she said: “If she’d have killed him she’d have been out by now.” On another man she does not consider to be handsome: “He could be coming up the road with the lottery under his arm and it would still be a no from me.” 

    It is sad, and bad for our culture, that the weirdness and the energy and the vitality of remarks such as those feel so uncontemporary, so beyond us. We need to get better at taking risks again, and not only for the sake of humor. Did I make you laugh?