Large Empty Bowl

    sitting in the bower
    after lunch with
    my sadness 

    like unto Magdalene
    our defectiveness known all around
    the town 

    (a passion for extravagant apology)
    (flimsy promise to do better from now on) 

    I knew the crowd had stones
    heating the hollows of their hands

    (the teacher has always shown me
    the underlying structure of

    a situation, though often
    years after I no longer cared, was
    no longer there, at all)

    stone and bread are alike
    in the hands of the just

    bread and stone the same

    Two Concepts of God

    For Moshe Idel

    Since the very inception of their discipline, scholars of Kabbalah, or Jewish mysticism, have tried to define the object of their study based on its supposed relationship to myth. Gershom Scholem viewed the rise of Kabbalah in the Middle Ages as the return — with a vengeance — of myth. After having been repressed by Biblical and rabbinic traditions, it reemerged, cloaked in mystery and veiled in esotericism, to insinuate itself into the heart of Judaism, from which it would dictate the future of Jewish thought and history. Another school of thought, led by Moshe Idel and Yehuda Liebes, has denied that myth had ever been absent from Judaism, and has traced lines of continuity from the kabbalistic mythos to elements already present in the canonical texts of ancient Judaism. Whatever the precise relationship between myth and Kabbalah, medieval Jewish philosophy has been presented in no uncertain terms as Kabbalah’s nemesis. Under the influence of Maimonides, who defined much of its agenda, Jewish philosophy sought to purify God of any trace of anthropomorphism and to systematically eradicate any mythic elements from Jewish tradition. According to this account, then, the twelfth and thirteenth centuries saw the rise of two movements with diametrically opposed relationships to myth: Kabbalah and Jewish philosophy. The kabbalists are said to have had the upper hand because they tapped into the primordial wellsprings of myth and irrigated every corner of Jewish existence with them, while the philosophers uprooted Judaism from its source of vitality and turned it into an alienated and abstract religious culture.

    Scholem conceived of this contest as a medieval precursor of the great cultural clash between Romanticism and the Enlightenment in the nineteenth and twentieth centuries. The conceptual framework that he used to analyze Kabbalah borrowed considerably from the way Romanticism imagined itself in relation to the Enlightenment — that is, as a more authentic and exciting alternative to the aridities of reason. In this way, a broad cultural conflict from modernity was projected backward onto medieval Judaism and considered to be the key to elucidating its driving forces.

    This historical account has become the dominant way of thinking about Kabbalah, even as the debate over the connection between kabbalistic myth and earlier strands of Jewish thought continues. There is no question that its broad outlines are well grounded in the sources and that it has contributed greatly to our understanding of Jewish intellectual and cultural history. But it is not the whole story. It seems to me worthwhile to present another way of thinking about these issues, an alternative conceptual framework that will help us get a fuller view of the kabbalistic phenomenon itself, and will situate Kabbalah’s relationship to medieval Jewish philosophy in a different and broader context. Indeed, the implications of this different view extend beyond the confines of Jewish religion to the more universal and fundamental question of how we may think about the nature of divinity itself.

    I would like to propose a contrast between two conceptions of the deity: God as a personality and God as a being. Each one provides utterly different ways of accounting for existence and for the religious posture. When categorizing the two medieval Jewish movements according to this criterion, a surprising result emerges: philosophy and Kabbalah, hitherto regarded as spiritual and intellectual opposites, turn out to be part of the same massive conceptual and religious shift. Together they reject the notion of God as a personality for that of God as being. To put it another way, they replace God with the Godhead. One may have personal relations with God, but not with the Godhead, which in Kabbalah is a multifaceted and dynamic structure, a system of divine reality.

    In the biblical tradition, God is a relational subject who enters into a covenant with the Israelites. The history of this people, with all its ups and downs, is interpreted in terms of the straining and strengthening of a complicated relationship between the divine personality and Israel, which is marked by love and tested by betrayal. (The Biblical conception of God as a personality was most richly explored by the late Yohanan Muffs.) Similarly, for all its vastness and its multiplicity of genres, rabbinic literature everywhere exhibits the same basic perception of God as a personality entangled in a web of relationships with humans and the Jewish people. Not only does the biblical anthropomorphic conception of God not trouble the Sages, but they broaden and deepen it. 

    A midrash in Mekhilta de-Rabbi Yishmael, an ancient collection of rabbinical commentaries on the Book of Exodus that was redacted in the third century CE, helps to highlight the monumental disparity between the concerns of the ancient rabbis and those of their medieval successors. It concerns verse twenty-one in chapter thirteen, “The Lord would go in front of them by day [in a pillar of cloud, and by night in a pillar of fire],” about which it remarks:

    Is it possible to say this? But it already says, “Do I not fill heaven and earth — declares the Lord” (Jer. 23:24); and it is written, “And one called to the other and said, ‘Holy, Holy, Holy [is the Lord of Hosts], His glory fills the entire earth’” (Isa. 6:3); and it says, “Behold, the glory of the God of Israel came from the way of the east; and His voice was like the sound of many waters; and the earth was illuminated by His glory” (Ezek. 43:2). What is the meaning of “The Lord would go in front of them by day”? 

    Rabbi [Yehuda Ha-Nasi, or Judah the Prince] said: Sometimes the Emperor Antoninus would hold court on the dais until it grew dark, and his sons would remain with him. After leaving the dais, he would hold the lantern and light the way for his sons. Imperial dignitaries would approach him and say, “Let us hold the lantern and light the way for your sons.” But he would say to them, “No. It’s not that I have no one to hold the lantern and light the way for my sons, but that I am showing you how dear my children are so that you accord them honor.” Thus did the Holy One show the nations of the world how dear Israel is, so that they would accord them honor.

    At first glance, this midrash seems troubled by the kind of metaphysical problem that exercised medieval Bible commentators: How can a deity who fills the entire world be constricted to a cloud or a fire? In the twelfth century, Abraham Ibn Ezra sought to defuse the problem by claiming that God was not actually within the cloud or the fire, and that Scripture only makes it seem so because those phenomena are manifestations of His power. 

    A closer look, however, reveals that Rabbi Yehuda’s parable about the Roman emperor — the ancient Jewish tradition frequently compares divinity to kingship — is addressing another question altogether. Instead of wondering how an infinite deity can be confined to a finite physical space, the midrash wants to know why almighty God decided to lead the Israelites in a pillar of cloud and fire, when He could have sent any of his hosts of messengers to do His bidding or gone forth Himself in all His resplendent glory. Rabbi Yehuda answers that God, like the Emperor Antoninus, chose to show His children the way Himself, not because He had no capable agents but to demonstrate His burning love for His children. And if he had brought His royal retinue, the message would have been lost on the overawed observers. 

    To elucidate the parable, let us consider a more contemporary story. A country is hosting a foreign president, and instead of booking him a presidential suite at its most luxurious hotel, it puts him up in a hostel. The resulting scandal about the country’s treatment of the president does not broach any questions of metaphysics; it’s not as if they tried to defy the laws of nature and squeeze the president into a matchbox. It is, rather, the unseemliness of it that makes headlines. As far as the above midrash is concerned, God can obviously manifest Himself in any way that He deems fit, even in a pillar of cloud or fire. It is asking instead about the appropriateness of the chosen form. One would think that a god whose glory permeates the cosmos would make more of an impression. This is the difficulty that Rabbi Yehuda’s parable addresses.

    Mekhilta de-Rabbi Yishmael raises a similar question about God’s revelation to Moses through the burning bush. Again, the Sages desire to know why God would reveal Himself to Moses through a homely, prickly bush in the wilderness instead of in a more grandiose display befitting His majesty. One of the answers given is that God wanted to convey that He, too, suffered the pain of slavery experienced by the Israelites, like a bird caught in a thorny thicket:

    “Moses was tending the flock… And the angel of the Lord appeared to him in a blazing fire out of the midst of a bush” (Ex. 3:1—2). Rabbi Shimon b. Yohai says: Why did the Holy One reveal Himself from on high and speak to Moses out of the midst of a bush? Just as this bush is more brutal than every other tree, so that a bird which enters it cannot escape intact, having been cut to pieces, so too the slavery of Egypt is more brutal for God than every other slavery.

    A different interpretation: “Out of the midst of a bush” (Ex. 3:2). 

    Rabbi Yehoshua says: Why did the Holy One reveal Himself from on high and speak to Moses out of the midst of a bush? For whenever Israel is in distress, it is as if He is distressed, as it says, “In all their troubles He was troubled” (Isa. 63:9). […] And thus would R. Yehoshua say: Come see how great are the mercies and beneficences of the Holy One upon Israel! They went down to Egypt, and the Shekhina (divine presence) was with them, as it says, “I will go down with you to Egypt” (Gen. 46:4); they went up [from Egypt], and the Shekhina was with them, as it says, “and I will bring you back up” (ibid.); they went down to the sea, and the Shekhina was with them, as it says, “And the angel of God traveled” (Ex. 14:19); they came to the wilderness, and the Shekhina was with them, as it says, “and in the wilderness, where you saw how the Lord your God carried you” (Deut. 1:31); until they came to the Temple. They experience suffering, and He, as it were, experiences the suffering with them, “out of the midst of a bush”; they experience relief, and He experiences relief with them, as it says, “To behold the prosperity of Your chosen ones, to rejoice in the gladness of Your nation, to glory in Your inheritance” (Ps. 106:5).

    This midrash contains one of the important innovations in the anthropomorphic religious thinking of the Sages: God suffers when the Jewish people suffer. Attuned to the nuances of divine self-revelation across Scripture, the midrash discerns in the unusual appearance of God in a scrubland bush an expression of divine empathy for the Israelite plight.

    The Bible maps the relationship between God and humans onto a few hierarchical human relationships: father and son, husband and wife, judge and accused, king and subject. The Sages then contribute new relational categories to this anthropomorphic foundation, including teacher and disciple, homeowner and laborer, patron and client. These capture and accentuate subtleties in the religious relationship. Even more so: while in Scripture God is the more powerful figure in the hierarchy and Israel the weaker, at times the Midrash brazenly reverses the balance of power in its human analogies, so that Israel or a human is the master, husband, or judge and God the servant, wife, or accused. In addition, the midrashic imagination does not limit itself to the Bible’s verticality. It envisions horizontal relationships that are non-hierarchical and intensely intimate, such as exist between twins or friends. The sheer creativity of the ancient sages is not evidenced, then, by exegetical acrobatics to minimize God’s anthropomorphism, but by the endeavor to look exhaustively at all relations between human beings to better grasp and give meaning to the religious relationship with God.

    This core religious sensibility of the Bible and the Midrash, which perceives God as a relational subject, met fierce opposition in the Middle Ages. The Maimonidean philosophical concept of God denounced as idolatrous any attribution to God of emotional states and personal characteristics. In this account, God’s transcendence defies any analogy to the human realm, a transcendence that reaches its height in the denial of the capacity of language altogether to serve as a proper medium for the attainment of any knowledge concerning God. This philosophical conception of God, which was tremendously influential among Jewish thinkers (and non-Jewish thinkers too, such as Aquinas), had the effect of supplanting a robustly personal God with an austerely metaphysical God. Dramatic narrative was replaced by abiding ontology. What has not been sufficiently documented, and what I will try to show below, is that the dramatic shift from God-as-personality to God-as-being was also the hallmark of medieval Kabbalah. This is true, to borrow Moshe Idel’s broad characterizations, both of ecstatic-prophetic Kabbalah and theosophical-theurgical Kabbalah.

    Ecstatic-prophetic Kabbalah focuses on the development and use of mystical techniques to attain unification with God. As Idel has shown, the pinnacle of this kabbalistic striving is the experience of unio mystica, the absolute assimilation of the individual into the Godhead, to the point of ontological identity. The aspiring mystic achieves this by shedding his individuality and personality, by turning the ego (ani) into naught (ayin). (The Hebrew words are anagrams of each other.) In the Kabbalah of Nahmanides in the thirteenth century in Catalonia, union with God depends on a complete negation of the human will, because the will is what individualizes a human as a being separate from God. In other traditions, the negation of the ego is achieved by completely emptying the consciousness of certain particularizing elements. And to match the mystic who strips himself of all personal qualities, the deity with whom he merges is not personality but an all-encompassing being. The crowning attainment of the ecstatic kabbalist is the dissolution of the ontological distance between humans and God, the very space necessary for interpersonal relations to exist. For this reason, in his moment of spiritual ecstasy, as he is enveloped by God, the mystic does not — cannot — interact with God in any human sense. 

    These mystical traditions that reject the conception of God as a personality clothe their longing for union with God in the traditional notion of devekut, or “cleaving” to God — but this never before meant a nullification of the self or a complete melding with the divine being. In Scripture and in classical rabbinical literature, cleaving to God entails limitless devotion to the word of God and following in His footsteps. Drawing close to God, for the Sages, means being on the most intimate terms with God, but it certainly does not mean overcoming the ontological gulf that separates humanity and divinity. Note how the Midrash interprets the verse in Numbers about the human who drew as close as humanly possible to God:

    “Not so with My servant Moses; he is trusted throughout My household. With him I speak mouth to mouth, in a vision and not in riddles, and he beholds the likeness of the Lord.” (Num. 12:7—8) Rabbi Pinhas said in the name of Rabbi Hoshaya: “And he beholds the likeness of the Lord” — it is comparable to a king who reveals himself to a member of his household in his linen garment. 

    Moses’s extraordinary closeness to God is expressed in the language of familiarity and intimacy. He is like a member of a royal household who is privy to see the king in his undershirt, divested of the trappings of royalty and with his guard down. But the kabbalistic notion of cleaving, in which one cannot tell where the mystic ends and God begins, does not allow for intimacy, because intimacy is something that exists between distinct persons. (Many centuries later this preference for being over personality was reversed by Martin Buber in his dialogical account of human interaction with the divine.) The fact that the ultimate religious attainment for these kabbalists has changed from intimacy with God to losing oneself in Him perfectly encapsulates the reconceptualization of the divine personality as an apersonal being. 

    A similar and no less significant transition occurs in theosophical-theurgic Kabbalah. This school of mystical thought introduces two new fundamental ideas about God, both of which end up effacing the divine personality. The first is a dualistic conception of God: the hidden and unknowable deus absconditus, and the manifest divine potencies emanated from the former’s innermost depths. The concealed dimension of the Godhead, which is termed “Absolute Privation” (ha-afisa ha-muḥletet) in Nahmanidean Kabbalah and “the Infinite” (Ein-Sof) in other kabbalistic thought, is utterly apersonal and has no characteristics. It is a being so pristine and pure that the particularizations of language and the categorizations of cognition slip right off it. It is important to stress that this aspect of the Godhead, which scholars generally identify with Maimonides’ philosophically purified God, is nowhere to be found in rabbinic literature. The ancient sages consider God a personality through and through; they posit no apersonal dimension in His being.

    A fine illustration of this significant gap between the rabbinic Midrash and the understandings of the Middle Ages can be seen in their respective interpretations of the very first words of the Bible: “In the beginning (be-reshit), God created” (Gen. 1:1). One would expect God’s book to begin with a reference to Himself: God created, in the beginning…. So why, the question is asked, doesn’t God appear first? According to the kabbalists, this semantic ordering signifies that the true Creator God, the hidden dimension of the divine Naught, cannot be expressed in language. Out of this infinite Nothingness the sefirot, the elements of the Godhead, emerge, and the first word of the verse, be-reshit, is actually an allusion to the first knowable sefirah called Chokhma, or Wisdom. The rest of the words in the verse allude to the other sefirot according to their order of descending emanation and increasing particularization. A critical step in transforming God from a personality to a being is the positing of a hidden, unfathomably pure existent of sui generis oneness, which serves as the primordial substrate of all existence. The Midrash, by contrast, knowing nothing of this divine schematic, takes this as yet another opportunity to glean insight into the divine personality:

    “In the beginning, God created” (Gen. 1:1). Rabbi Yudan in the name of Aquilas: It is befitting to call such a one “God.” The common practice is for a king of flesh-and-blood to have his praise proclaimed throughout the city without having built it public works or baths.

    Shimon ben Azzai says: “You have increased Your humility for me” (2 Sam. 22:36). Flesh and blood mention their name and then their praise: so-and-so followed by their title. But the Holy One does not. Only after creating the needs of His world does He mention His name: “In the beginning [created]” and after that “God.” 

    The Sages do not read the absence of God’s name from the opening of the verse as an ontological statement about any Absolute Privation that grounds the rest of the Godhead and the universe. They take it instead as a reflection of God’s humility — of an aspect of the divine personality. In stark contrast to the self-important emperors, God declares His primacy only after doing something for humankind. 

    The effacement of the divine personality can also be seen in the manifest aspects of the Godhead, which brings us to the second major innovative doctrine of theosophical Kabbalah: the emergence from the divine Naught of a complex of divine potencies variously called attributes, existents, utterances, or, most familiarly, sefirot. These potencies are the manifest aspect of the Godhead and mediate between the hidden dimension of the Godhead, on the one hand, and the universe and humans, on the other. In this picture, the manifold qualities and attributes of God found in Scripture and rabbinic literature are reconceived as distinct existents or hypostases. God is remolded into the Godhead, a being that elaborates itself into particularity via a process of emanation, and the potencies that emerge therefrom form a complex, quasi-organic association with one another.

    The ancient rabbis coined many epithets for God that signify qualities of the divine personality, but these have been transformed in kabbalistic literature into the discrete entities of the Godhead. The names that the rabbis created — for example, Gevura, signifying God’s omnipotence, and the feminine name Shekhina, signifying God’s immanence — did not refer to separate dimensions that exist side by side within the Godhead. The Shekhina in rabbinic literature is not an ontological dimension forming part of the totality of the divine structure; it is God’s name. And therefore, when the rabbis describe the Shekhina going into exile with the Jews, they mean that God accompanied them. For the kabbalists, however, the Shekhina going into exile meant that the “lowest” sefira was pulled away from the rest of the divine structure. The Shekhina in rabbinic literature is not God’s female companion; it is God’s totality perceived as feminine. By externalizing God’s qualities and names into discrete dimensions emanating from the divine Naught, the kabbalists deconstructed the divine personality into a schematized system of ten nodes of divinity that can be visually represented, often in the form of a kabbalistic tree.

    It bears noting that despite this fixed order and arrangement, the sefirot remain dynamic and in flux, because the manifest part of the Godhead yearns to return to its concealed source. The configuration of the sefirot exists in a fragile balance between unity and separation. This dynamism of the kabbalistic Godhead is incompatible with Maimonides’s Unmoved Mover, a perfect and disengaged deity of the Aristotelian kind, and its internal multiplicity runs counter to the strict and simple unity of God insisted upon by Maimonideans. And yet, in spite of these tremendous theological differences, both Kabbalah and medieval Jewish philosophy fundamentally consider God as being. By depersonalizing God, the two medieval movements catalyzed a transformation in the religious consciousness that would remake the face of Judaism.

    Theosophical Kabbalah gained such traction because it endowed Jewish rituals with theurgic meaning. The precarious unity of the divine configuration must constantly be maintained, in order for the sefirot to be properly nourished by the divine efflux that emerges from the depths of Naught, and to transmit it thence to the entire universe. Safeguarding the balance and unity of this multifaceted system is entrusted to humans. Performing positive commandments unifies the divine structure, and transgressing negative commandments disunifies it. This notion, which scholars term theurgy, views the observance of the commandments as fulfilling a divine need. While the religious act can still fulfill the goals delineated in rabbinic literature, such as improving human nature or appeasing God, the theurgic explanation attributes to the doer a causal effect on the foundations of all existence. The rabbinic explanation of commandments takes for granted that there are two persons, the one giving orders and the one carrying them out, whereas the theurgic reason is distinctly impersonal: the commandment is not a relational activity but rather a causal act upon a complex being, a dynamic and manifold entity.

    And so the doctrine of theurgy further drove a wedge between the old and new conceptions of God. Consider the rabbinic and kabbalistic interpretations of Moses’s role in the Israelites’ battle against Amalek. The Biblical verses report:

    Whenever Moses held up his hand, Israel prevailed; but whenever he let down his hand, Amalek prevailed. But Moses’s hands grew heavy, so they took a stone and put it under him and he sat on it, while Aaron and Hur, one on each side, supported his hands; thus his hands remained steady until the sun set. (Ex. 17:11—12)

    The Mishna — the earliest codified work of rabbinical teachings, which was redacted at the end of the second century CE — rejects this straightforwardly magical account of the event and raises an incredulous objection:

    “Whenever Moses held up his hand etc.” Do the hands of Moses make or break a battle? Rather, it tells you that whenever Israel would look upward and make their hearts subservient to their Father in Heaven, they would prevail, and if not, they would fall. 

    According to this mishna, when Moses lifted his arms heavenward, it signaled that the people were rededicating themselves to God and looking to Him for salvation. This reading explicitly rejects any causal explanation — magical or otherwise. Turning to and pleading with one’s Father in Heaven is, of course, a deeply interpersonal act. 

    In kabbalistic thought, though, matters are reversed. In his Biblical commentary, Nahmanides doubles down on the causal explanation: 

    By way of truth, he raised ten fingers to the Height of Heaven to allude to the ten sefirot, in order to make it cleave to the Faith that fights for Israel. This explains the matter of the uplifted palms during the priestly blessing and its secret. 

    By raising his hands, Moses was not praying. (The phrase “by way of truth” in Nahmanides denotes an esoteric kabbalistic reading.) He was performing an ontological action and influencing the configuration of the Godhead: he induced the cleaving of the sefirot to “Faith,” another name for the Shekhina, the lowest sefira and the one most prone to detachment from the rest of the configuration. When Moses lifted his hands, he caused a unification of the upper sefirot with the Shekhina, which was doing battle for Israel, so as to energize it with the divine efflux. The impact of his action was felt cosmically, not humanly. According to Nahmanides, the same theurgic technique lies behind the commandment for the Kohen, or Jewish priest, to raise his hands during the priestly blessing, as is still commonly done in traditional synagogues. Thus, an interaction between the sons and their Father turns into the metaphysical operation of a technician cranking up the power of the divine machine.

    There is perhaps no better example of this change in the meaning of the religious act than prayer. In the history of Judaism specifically, and of world religions more generally, prayer is the ultimate relational encounter between humans and God, an intimate moment pregnant with vast interpersonal potential. The supplicant’s attitude towards God can vary widely based on their perceived relationship with God and their life circumstances. Prayer can be a humble entreaty, a cry of despair, a brazen demand, a fierce protest, and much more. 

    When the kabbalists turn prayer into a theurgic act, demanding heightened focus on each word so as to direct it toward a specific sefira, they strip this encounter of its dialogical and relational quality. When one examines the massive collection of kabbalistic prayers, with their manifold techniques and “intentions,” one is left to wonder: where is the son pleading with his Father? There is indeed a good reason why one of the earliest critiques of Kabbalah, penned in fourteenth-century Spain by the renowned jurist Rabbi Isaac b. Sheshet, touches on prayer:

    I have also informed you that my teacher, Rabbi Peretz Ha-Kohen, would neither speak about nor give prominence to the sefirot. I also heard him say that Rabbi Shimshon of Chinon, who was the greatest rabbi of his generation, … would say, “I pray with a mindset of a child.” That is, to the exclusion of the kabbalists, who pray one time to this sefira and another time to that one.

    Now, Shimshon of Chinon was not a Maimonidean philosopher who considered kabbalistic prayer an unconscionable contradiction to God’s absolute unity. He was an exceptional jurist and legal authority from Talmudist circles (known as Tosafists) in France, for whom Maimonidean philosophy was as foreign as kabbalistic doctrine. His objection to such complex and technical prayer “intentions” is that they remove the personal quality from the prayer setting; the abstract hyper-awareness that they require necessarily stanches any natural outpouring of emotion. While the kabbalist intently sends off every word to its appropriate supernal destination, Shimshon prefers to pray with the blissful innocence of a child. 

    This analysis of the shift from God as personality to God as being within the two central streams of Kabbalah offers a new perspective on the meaning of Jewish mysticism. Like any general observation, however, it overlooks the sheer diversity of kabbalistic thought. The concise, systematic character of Nahmanidean Kabbalah is worlds apart from the sprawling and dynamic Kabbalah of the Zoharic literature, and the ascription of a personality to God was part of the fabric of the canonical Jewish tradition that the kabbalists inherited. And yet, even in the sections of the Zohar (the canonical medieval work of Jewish mysticism, composed in Spain in the thirteenth century but attributed to Rabbi Shimon bar Yohai of the second century CE) known as the Idrot, which are considered the most anthropomorphic texts in Zoharic literature, the effacement of the divine personality is apparent.

    The Idrot provide a detailed depiction of God’s facial physiognomy. They zoom in on the divine skull, forehead, eyes, nose, mouth, cheeks, facial hair, and beard. It is important to note that this depiction of God in a human form betrays no aversion to an embodied vision of the divine; such a vision is common in Kabbalah. Yet what marks God as a personality is not the possession of a body, but being a relational subject with a complex emotional life. A body does not necessarily imply a personality. An early medieval text called the Shiur Komah, or “Dimensions of [His] Stature,” records the fantastically large measurements of God’s body. (“The soles of his feet cover the whole universe….”) The author is clearly in awe of the vast dimensions of the deity, but one sees little evidence of any personal connection to this entity. What relationship can there be with a body that spans light years in the vast emptiness of space?

    In the Idra Rabba, or the “Great Assembly,” a mystical text incorporated into the Zohar, the gigantic face of Arikh Anpin, or the Long Countenance, pours down mercy unceasingly, and yet it is a completely static face. A face with no facial muscles; white eyes with no pupils, no eyebrows, no eyelids. Can one humanize a shining face that never changes its expression? Can one engage it? The divine face has an iconic, inanimate quality to it. Compare this to the simple meaning of the second verse of the priestly blessing, that God should illuminate His countenance for His people (Num. 6:25). The hope is that God’s countenance will glow radiantly for His people, in the way that someone’s face lights up when they meet or are reminded of someone they love. But the divine face of Arikh Anpin in the Idra is always shining — without intention or interruption, without prior prompting or pleading. This “face” has far more in common with the Sun than with anything living.

    In the Idrot, God’s interiority is translated into external existents. The Thirteen Attributes of Mercy said to be aspects of Arikh Anpin are attributed to various regions of the divine beard. Whereas human compassion and empathy are psychological and emotional states that move a person to help someone else, divine compassion and kindness in the Idra are a liquid essence that flows from the divine skull down through the strands of the wondrous beard to sustain all of existence. The elite kabbalists who have convened in this “great assembly” do not relate to this immutable and unmoved divine countenance as the face of a person. For them, it comprises yet another collection of divine facets in need of mystical rectification. They are a maintenance crew whose job it is to ensure that the conduits of the divine efflux are in good working order, that the strands of hair have no split ends, so to speak. 

    Unlike Arikh Anpin, Ze‘er Anpin (or the “Lesser Countenance”) can change, but its range is severely restricted to two states: fury and grace. The default setting is the red face of fury, but when it is mixed with the whiteness of grace that flows from Arikh Anpin, the face changes. Be that as it may, the members of the mystical assembly still do not address Ze‘er Anpin directly. They work theurgically to try to align Ze‘er Anpin with Arikh Anpin, so that it can receive the divine efflux that will dilute the anger. 

    Even this anthropomorphic God is not a personal God. Without a doubt, the Idrot represent one of the most imaginative products of medieval Kabbalah, and there is no denying that they humanize aspects of the Godhead. But it is hard to shake the feeling that the Idrot grafted an apersonal Neoplatonic worldview, in which a monad emanates existence to a multiplicity, onto the received biblical-rabbinic tradition of God as a personality. They turned the subjectivity of an engaged God into an objective metaphysical structure and process.

    Later stages of Jewish mysticism also exhibit this turn towards God as a being in their thinking about God’s presence — what scholars call immanence. A person’s presence is acknowledged by his attentive care or sensed by his physical proximity. In rabbinical literature, the Shekhina ascends or descends, moves away or draws close. But when God is depersonalized, His presence means something completely different: the embeddedness of the divine entity, or aspects thereof, in our world. This can be expressed in varying degrees of ontological blending. Some doctrines talk about divine “sparks” captive in or sustaining our reality, while others, especially those espoused in the modern era by Habad Hasidism and Rabbi Abraham Isaac Hacohen Kook, posit a greater and somewhat pantheistic identity between the Godhead and reality. For our purposes here, there is little difference between pantheism (the doctrine that God and the universe are the same) and panentheism (the doctrine that the universe exists in God but that God is still greater than it), for in both the Godhead is an entity, a “thing,” thoroughly entwined with the universe, such that there is no place devoid of His existence. Such a presence does not entail attentiveness or proximity. It is for this reason that in these systems of thought the theurgic religious act is aimed at being in touch with the segments of the Godhead enmeshed in this world for one of two purposes: to collect and gather them, so as to make the fragmented divine entity whole again, or to cleave to the divine realm through contact with its spark within the mundane world. This is a conceptual revolution in and of itself, and it generates new types of religious experience and action that are inconceivable under a personal God. This is not a God one addresses or quarrels with.

    The transition from God to Godhead, from a personality to an array of potencies, became more and more elaborate with time, to the point that one could only attempt to navigate what had become a divine labyrinth with immense erudition in kabbalistic lore. The God to whom one could turn as a son or servant had been lost in the mammoth engines of Lurianic Kabbalah in the sixteenth century and later; its God was a kind of secret to be rediscovered. Yehuda Liebes has argued that the esoteric religious doctrine of Shabbetai Zevi, the false messiah of the seventeenth century, was that there is in fact a personal God who can be communicated with directly, and with whom he claimed to be in a unique and intimate relationship. His secret theology sought to rehabilitate the personal God known to any elementary school pupil who is versed in the Jewish tradition. The disappearance of this personal God into an esoteric teaching was due to the rise of Lurianic Kabbalah, and in response to it Shabbetai Zevi tried to resurrect the personal God. As Liebes says: 

    Understanding the meaning of Shabbetai Zevi’s behavior requires us to delve into the essence of his Mystery of the Godhead. Rather than limited ability, his conscious opposition to the technical and impersonal style adopted by Lurianic Kabbala — in which an advanced, multifaceted machine had replaced the personal God — guided Shabbetai Zevi. Replacing the Lurianic machine with a personal God led Shabbetai Zevi to abandon Lurianic devotional prayer and pray “as someone who prays to His King,” as attested by his fellow student R. Moses Pinheiro. 

    The romance of Shabbetai Zevi, the manic mystical messiah who traumatized his people when he converted to Islam, deserves to include the poignant fact that he sought to replace the vast systems of doctrine about the Godhead with the experience of a personal God like the God of Abraham, Isaac, and Jacob. 

    It would appear, then, that the conventional view of Kabbalah as the reinvigoration of myth, which has significantly contributed to our understanding of Kabbalah, somewhat obscures a monumental shift in Jewish religious consciousness. The shift is away from the Biblical and Rabbinic religious consciousness, and it took place in both principal types of Kabbalah. The distinction between philosophy and Kabbalah is not to be located in their respective relationship to God as a personality. Both movements perceived God first and foremost as a being rather than as a relational subject; the great difference between them focused on what kind of a being God is. Of course, each one’s concept of God was informed by the epistemological methods that they considered valid: philosophy relied on strict logical reasoning, while Kabbalah was nourished by visions and mystical speculation. 

    In the twelfth and thirteenth centuries, the great Jewish esoteric movements emerged, and they all shared discomfort with the personalist perspective of the Bible and rabbinical literature. The source of this discomfort lies in the rejection of the understanding of the fate of humans as a reflection of their lived relations with God. Personality was replaced by being as part of the dominance of the idea of nature, an idea that perceives causal structures as the ground of being and as the guide to human action. The search for the specific causal model was shared by all the major Jewish schools of thought that burst onto the scene in the Middle Ages: Kabbalah, philosophy, and astrology (of the variety propounded by Abraham Ibn Ezra). They favored different causal models, including the Neoplatonic, Aristotelian, and Hermetic-astral, but they all shared the same basic account of nature, causality, and a divine entity. Each in their own way effaced God’s personhood. 

    The deeper one probes into the most esoteric strata of these doctrines, the more one finds that God’s personal features are shed and God turns into the Godhead. If the contest between Jerusalem and Athens is between a personal God and a natural causality, then Athens won the hearts and minds of the great medieval Jewish esotericists. The only figures to remain faithful to Biblical and Midrashic traditions were those who did not propound esoteric lore, who in their interpretation of the canonical Jewish texts did not identify a realm of hidden secrets which they had the spiritual privilege to uncover. Rashi is a perfect example. Throughout his renowned commentary on the Bible, God’s personhood is plainly evident. No wonder that it was one of his disciples’ disciples, Shimshon of Chinon, who would pray like a child — with wonder, with petulant demands, with ordinary intention behind the words — to show that the master-bearers of the secrets had it wrong.

    On Seeing Old Skis in the Garage

    So many slopes they touched, and once
    leaned outside while I tromped into the parlor
    of an alpine monastery, clattering boots, my bluster
    welcomed to dine silently with the brothers
    who had also vowed to get to the powder
    of what is daily fused with life: to glide, to carve,
    to schuss and float with what the spirit clamors for —
    even though my body’s sluggish, slow, it remembers
    mountains, glory in the snowfell
    hill, its bluebell kindred skills — a rough jouissance
    is what I brought, in all my choices good
    and not so good, the might-have-beens
    and new offerings from the range
    I’m entering, something milder — I’d still strive
    for the milk of kindness, hold out my simmering
    so the fat might rise like broken proteins
    to the top, to be skimmed off.

    Meditation with a Gash in the Natural Order

    I like parking at the big box store, watching people come out and go in.
    Swaying winter grasses in the median, sky that brigand Saturday blue.
    I’m waiting to pick up my son from his guitar lesson. Already masterful,
    he doesn’t quit. Even Jimi Hendrix continued with a vocal coach,
    up to the very day he died. I have so much useless knowledge.
    Like what the monk said about meditation: if while sitting on your cushion
    you have the best idea you’ve ever had — stunning, complete —
    you mustn’t get up to capture it. Also, a lot of what we call miraculous
    is just the way things work: a monarch always emerges from its chrysalis
    as a radically different worm. A miracle is living flesh restored to reeking
    corpse. It’s the man sitting calmly, shaved and dressed, after he’d raved
    at city gates for more years than anyone who stepped around him
    could remember. The monk said: when wondering what to do in life,
    do what will cost you the most. Commit to watching the gorgeous
    bubble evanesce. That’s the only way this works.

    An Occasion

    Our bones will touch in the water
    one day after the supernova,
    or maybe it’ll be an Electromagnetic Pulse
    we bought the old Volvo to outsmart— 

    we escaped the need for computers
    to govern coffee makers,
    and made our own kombucha—
    but one by one the streaked coyotes,

    wimpled foxes picked off
    the rooster and our stupid hens. A cascade
    of tiny choices to occasion
    an implosion.                         

    The mushroom log left wildly fruiting
    in the hand-hewn springhouse afternoon.

    The American Strategic Imagination: An Agenda

    Depending on how history is written, Russia’s invasion of Ukraine may be looked back on as the beginning of a third world war. President Zelensky’s government, along with its advocates in allied governments, has been making this argument since the war’s inception. They frame Ukraine as one battlefield in a larger global struggle, one that pits a growing axis of authoritarian nations against the Western-oriented liberal democracies that have dominated the post-Cold War world order. In this version of history, the war in Ukraine is not the Ukrainians’ war alone but the West’s war, too, an existential struggle for all freedom-loving peoples. There is plenty of evidence that lends credence to this argument. Had Putin’s initial invasion gone according to plan, a year later we would be talking about a similar invasion of Taiwan — as we are already envisioning — and then the question of whether we were in the midst of a third world war would hardly merit debate. 

    Conditions remain ripe for an upheaval of the global order of the type induced by a world war. These upheavals, in the modern era, have occurred approximately every century. The First and Second World Wars should more properly be categorized as a single conflict, with Versailles more of a ceasefire than a peace, and the twenty years of the Napoleonic Wars that birthed a century of continental stability certainly qualify as a world war. One indicator that we might already be in a world war — or that one is imminent — is that the generation that can remember the last one has died. Without memories to restrain us, we become reliant on our imaginations, not only to prevent war but, if one begins, to help us navigate its exigencies, and to win.

    Whether the war in Ukraine is part of a third world war, in which liberal democracies must beat back a rising tide of authoritarianism, or whether it is an isolated territorial-philosophical conflict is not a question of semantics. Defining a war’s scope is essential for any war planning, and for any victory. The role of imagination in the making of strategy has too often been under-appreciated. The conclusions that planners and officials will draw from analysis and data will always be circumscribed by the limits of what they can imagine about the future, by their sense of historical possibilities. If it is true, as the old adage has it, that generals always fight the last war, that is in part because they have not trained their imaginations to picture the next one. Driven by its disgust with the Iraq war, the Obama administration drew the conclusion, and enshrined it in Pentagon doctrine, that land wars are a thing of the past. Tell that to the tank officers in Ukraine.

    Innovation — of concepts and weapons, of everything — always involves imagination. And the imagination of future warfare is essential also for another reason: it forces us to conceive of the war from our adversary’s point of view as well as from our own. The strategic imagination is a significant deterrent to the other side’s greatest advantage, which is strategic surprise. While the strategic imagination can certainly run wild — remember General Buck Turgidson — the greater danger is that it not run at all.

    In the past fifty years, America’s two great military defeats — Vietnam and Afghanistan — were the result of misunderstanding the scope of the wars that we were fighting. In the former, American policymakers believed we were engaged in, as President Kennedy put it in his inaugural address, “a long twilight struggle” against transnational communism, when in fact the Vietnamese were fighting a war of national liberation. In Afghanistan, we believed we were fighting “a different kind of war,” as President Bush said to Congress ten days after September 11, a war against transnational terrorism. Yet like the Vietnamese, the Taliban were also fighting a war of national liberation with no objective greater than expelling foreigners from their homeland so that they could impose their theocracy upon the population. Their sympathies with al-Qaeda were nauseating, but not their reason for being. In both cases, a failure to imagine our adversaries’ psyches and to define the true nature of their objectives and of the very war we were fighting led us to disaster.

    Although it would be easy to discount the Kremlin’s absurd narrative of the war in Ukraine — one in which Zelensky is a Nazi, the West is the aggressor, and there is a genocide against ethnic Russians — it would be a mistake to ignore this narrative entirely, no matter how ridiculous, both when formulating a strategy to defeat Russia and when creating an agenda for our own strategic imagination. The first item on this agenda must be a robust understanding of the conflict from our adversary’s point of view. Data alone may not be able to depict our enemy. The specifics of such an understanding will be fluid, it will involve imaginative interpretation, and it will consistently clash with our own narrative. 

    A war is like a coin. It has two sides, and what we call the casus belli is really a debate as to what side of the coin we are on: whether a revolution is in fact a civil war; whether an invasion is in fact a liberation. Irreconcilable political narratives — or imaginaries, to use the academically popular term — are not a contradiction. The war itself becomes the very process through which these narratives will be resolved. But any strategy that does not consider an adversary’s counter-narrative — no matter how odious that narrative might be — is destined to fail.

    Although war is waged in the consciousness of peoples and nations, it is also a craft that requires a tradesman’s skill. Both the Napoleonic Wars and the world wars of the twentieth century resulted in societal and technological advances that few could have predicted. Perhaps the greatest innovation of the Napoleonic Age was the levée en masse, in which Napoleon crafted a new citizen army from a body politic mobilized for military service. His new army, which few could have imagined before the popular revolutions of the eighteenth century, supplanted the long-serving professional and mercenary armies that had dominated the continent for centuries. The primacy on the battlefield of the citizen-soldier, which cemented national identities, would have implications well into the twentieth century and even into our own, as both Ukraine and Russia adopt strategies to keep their societies mobilized for war. 

    The issue of how to keep a nation — and allied nations — mobilized has direct bearing on what we now call influence operations, a discipline which must sit atop any strategic agenda. Influence operations — sometimes known as psychological warfare or disinformation — are as old as war itself. Clausewitz’s renowned dictum that “war is a continuation of politics by other means” is an articulation of the first principles of influence operations. Politics do not stop when the bullets start. Nations at war must target popular opinion in their adversary’s country as well as in their own to achieve victory. This does not mean that a government should lie to its own people. 

    Authoritarian nations possess an intrinsic advantage over open societies in the control of information, but this does not mean that they will ultimately succeed in swaying popular opinion, particularly over the course of a long war. Eventually the truth has an almost miraculous way of getting under the door. Speaking about information warfare during the Cold War, Adlai Stevenson once described our preferred policy toward the communist powers this way: “When you stop lying about me, I’ll stop telling the truth about you.” But the truth must be aggressively deployed — for our adversaries, inconveniently and damagingly deployed — in influence operations. Truth may be the first casualty of war, but it is also one of its most potent weapons. Unfortunately, current American attitudes toward influence operations often seem analogous to Secretary of State Henry L. Stimson’s attitude a century ago to the then-evolving discipline of espionage: “Gentlemen don’t read each other’s mail.” 

    Today we can ill afford misplaced decorum when crafting policies around influence operations. One of the greatest restrictions placed on American influence operations is a fear of blowback, in which propaganda or other forms of influence or disinformation disseminated beyond our borders filter back into the United States. Although different U.S. government agencies, such as the Department of Defense, the Department of State, and our intelligence services, have varied risk tolerances for blowback, when operations targeting foreign audiences also reach American audiences, they violate current programmatic statutes that govern the scope of information operations. 

    Government protections of the American population against its own propaganda extend back to the aftermath of the Second World War. The Smith-Mundt Act of 1948, which regulated State Department broadcasts such as Voice of America, was one of the first pieces of legislation to restrict American propaganda efforts. This was due to concerns over empowering government agencies to disseminate ideological materials to the American people. In 1975, after the Church Committee Report revealed Operation Mockingbird, a large-scale multi-decade CIA program to manipulate domestic American media, further restrictions soon followed. These included Executive Order 11905, signed by President Ford, which brought the intelligence agencies to heel on a variety of covert programs, including assassination and covert influence; and Executive Order 12333, signed by President Reagan in 1981, which outlines the duties of America’s intelligence agencies, including those responsible for American propaganda efforts where the hand of the United States government must remain hidden. Such covert efforts are where the threat of blowback is greatest. It is also where our risk tolerances are lowest, and where we are being overtaken by our adversaries, who harbor little to no concern for blowback against their own populations. The question still needs to be asked whether the abolition of covertness for various purposes is strategically wise. It is undeniable that our national obsession with the evils of covertness has made us look away from a more important aspect of our influence operations, which is their robustness.

    In the past decade, both Russia and China have used influence operations and propaganda against the United States to great effect. In 2014, the Russians deployed soldiers in unmarked uniforms during the invasion of Crimea and the Donbas. Disinformation, like cancer, requires only the presence of a single malignant cell to metastasize, and Russia’s “little green men” in their unmarked uniforms allowed the Kremlin to propagate a narrative that their soldiers were homegrown Ukrainian separatists. In the United States we have seen the potency of Russian disinformation firsthand, in our own elections. (“Cut it out,” Obama once scolded Putin about Russian hacking. It didn’t do the trick.) Russian interference in the campaign of 2016 proved catalytic, yielding an exponential result that played out over years, creating a firestorm in American politics with few parallels. As is often the case with well-placed disinformation, the targeted society will do your work for you if you let it. 

    The Chinese Communist Party understands this. For the past two decades, the CCP has effectively used the American profit motive as a tool of American self-censorship. During the Cold War, when the Soviet Union posed the greatest threat to global freedom, American culture articulated that threat, particularly in Hollywood. In the past twenty years, during China’s rise, American cultural institutions have remained largely silent. Unlike with the Soviets, American and Chinese financial interests are intertwined. The Chinese have used this codependence as a tool to silence American critiques. In 2022, as American producers agonized over whether to include a Taiwanese flag on Tom Cruise’s leather jacket in Top Gun: Maverick for fear of offending CCP censors and eroding the film’s Chinese box-office, Chinese producers had their biggest box-office hit of all time, The Battle at Lake Changjin, which glorifies the Chinese slaughter of Americans during the Battle of the Chosin Reservoir. The previous box-office champion had been the second installment of The Wolf Warrior franchise, in 2017, in which a Chinese former soldier battles against the arch-villain, a bloodthirsty former U.S. Navy SEAL named “Big Daddy.” 

    Chinese influence operations extend far beyond Hollywood. Their sway over global governance bodies like the World Health Organization has quashed any consensus as to how the Covid-19 pandemic began. A parade of international administrators with their timid “investigations” and nonsensical public statements have proven quite willing to carry China’s water on this issue. All that was needed was to sow some doubt. If Russia’s brand of disinformation — little green men in Ukraine, election tampering, claims of American biological labs in Ukraine — seems more absurdist than the means employed by the CCP, both are plenty effective in obfuscating the truth. 

    The sheer volume of propaganda, manipulation, and disinformation dispensed by authoritarians would seem impossible to counter. An American strategy that would seek to reform government agencies so that they could dispense the same type of propaganda as their authoritarian counterparts is certain to fail. An open society — even if flawed — cannot compete on the field of lies with the authoritarians. The only propaganda strategy that we can consider is one that aggressively propagates the truth. There is nothing that more weakens the hold of dictators and autocrats on their populations than the truth and its ruthless strategic proliferation.

    If one holds the cynical view that the truth is subjective, a matter of competing narratives, then this and all truth-based strategies are doomed to failure as authoritarians will always outmatch us. (But if the relativists and perspectivists are right, on the other hand, then we should be less inhibited in our propaganda!) The Biden administration’s handling of Russia’s troop buildup in the days leading up to the invasion of Ukraine presents a refreshing and encouraging example of how information operations based on the truth can outmaneuver those based on lies.

    As the Kremlin massed its divisions, it continued to insist that these troop movements were part of a military exercise and that war remained avoidable. At the same time, the Biden administration had intelligence that Russia was in the process of coordinating a false-flag operation — a type of psychological ploy in which a military attacks itself or others under a flag that is not its own — to instigate a war. At a press briefing three weeks before the invasion, John Kirby, the Pentagon spokesman, preempted the Kremlin’s plan: “We believe that Russia would produce a very graphic propaganda video, which would include corpses and actors that would be depicting mourners and images of destroyed locations, as well as military equipment at the hands of Ukraine or the West, even to the point where some of this equipment would be made to look like it was Western-supplied.”

    The Biden administration adopted a strategy of flooding news outlets with sensitive intelligence, on everything from the false-flag operations to the movements of Russia’s frontline trauma hospitals and command centers, all of which proved that despite Russia’s claims to the contrary their intention to invade was clear. The Biden administration adopted this strategy over the objections of Zelensky who, at the time, remained concerned about inciting panic inside his own country. Although this strategy did not prevent a Russian invasion, it did limit the Kremlin’s ability to further claims that Ukraine was the aggressor. And just as important, it prepared us mentally, and our allies too, for what was coming. In this instance, the Biden administration skillfully shaped the American strategic imagination. The international condemnation and economic isolation that followed Russia’s invasion were due in no small measure to the Biden administration’s strategy of preempting Russian disinformation.

    Although a strategy of preemption proved effective in Ukraine, no similar strategy was deployed against the Chinese government as they restricted and manipulated studies around the pandemic’s origins. Three years later little consensus exists as to the virus’ origins, though theories abound. The issue itself has become politicized, with views so entrenched it seems no amount of evidence can now sway beliefs; the creation of irreconcilable narratives is, of course, the purpose of a disinformation campaign. In the pandemic we may have lived through a dress rehearsal of the future of biological warfare, a discipline which must sit firmly atop any agenda for our strategic imagination.

    In recent years, chemical and biological warfare has taken the form of gas attacks visited by Bashar al-Assad on his population in Syria and assassinations ordered by Putin against his political enemies. These tactics — in which individuals are poisoned, and armies and civilian populations are shelled or rocketed with gruesome agents — have evolved little since first appearing a century ago. They are designed to induce fear and are typically limited in scope to the area in which they are deployed. A pandemic, if ever weaponized, would usher in a different type of warfare, and we would be naïve — we would be catastrophically unimaginative — to believe our adversaries are not imagining ways to do exactly this. 

    In Ukraine, we have seen the critical importance of economic sanctions in modern war. In the pandemic, we saw how a virus brought the global economy to the brink. The grisly nature of traditional biological weapons will likely limit their use in the future; politically, they cost more than they deliver. But a biological catastrophe — of the kind that we have already lived through — would surely be a feature of any future world war, not simply due to the human toll but also the economic toll. 

    Imagine that the United States, along with its allies, were fighting a peer competitor, an authoritarian nation such as China with the capacity to exercise significant control over its population. We would, of course, do our best to exercise economic pressure on them, and they would do the same to us. As in wars past, our national means of production would prove decisive to the war effort. Now imagine that this authoritarian nation possessed a virus like the coronavirus and that it had already developed its own vaccines. As people grew sick that country would be able to implement a vaccination campaign, isolating its citizens from the deadly effects of the virus. Without the vaccine in our possession, America’s war effort would be crippled. Our adversary would, of course, claim no knowledge of how this new virus spread. They would deny its origins and would feel no moral obligation to share vaccine technology with a nation they are at war with.

    The costs would be profound. Our most recent pandemic saw the largest drop in American manufacturing in seventy-four years. Aircraft carriers such as the USS Theodore Roosevelt were forced into port due to outbreaks of the virus among the crew. A healthy army facing a sick one possesses an obvious advantage, and the same advantage extends to the economies supporting those armies. The United States would eventually develop its own vaccine, but the disruption would prove significant and could provide a peer-level adversary with a decisive edge. 

    Having lived through the Covid-19 pandemic, we must change the way we think about biological warfare. It should still include the acute nerve agents and chemical weapons that we have seen in the past, but it also must incorporate a view of biological warfare that includes man-made pandemics and accounts for how such events could be deployed as tools of economic warfare when laundered through the very same types of disinformation campaigns that have, thus far, obscured any global consensus and accountability regarding the origins of Covid-19. The moral case to abolish chemical and biological weapons is obvious. But, as with nuclear weapons, there is the unpleasant but effective matter of deterrence. Even if weapons of mass destruction do not deter conventional or cyber weapons, they have so far deterred other weapons of mass destruction. A balance of terror, that Cold War doctrine and Cold War reality, is existentially hideous but strategically wise. 

    If discussions of a third world war echo another era, it is because this war long existed as part of the Cold War’s vernacular, a time when students cowered under desks for bomb drills, when families constructed fallout shelters in the backyard, and when nuclear winter was the best-known definition of climate change. To imagine a third world war means to update our conception of it, but also to redefine its terms. And one of these terms, deterrence, has unfortunately fallen out of use. During the Cold War, deterrence (particularly of the nuclear kind) evolved into an entire discipline, with strategists on both sides of the Iron Curtain relying on game theory to ensure that humanity did not annihilate itself with its newly discovered nuclear weapons. The result of this deliberate approach to deterrence was decades of relative peace, and certainly nuclear peace, between the two superpowers. Also, we were lucky.

    We still possess that destructive capability though deterrence, as a tool, is discussed far less. In the decades immediately after the Cold War, we lived in a unipolar world, in which deterrent strategy lacked its previous relevance because the United States enjoyed a significant power imbalance over any would-be adversary. In recent years, the unipolar post-Cold War world has yielded to an increasingly Hobbesian multipolar world. And in a multipolar world strategies of deterrence become increasingly complex, with so many competing actors involved that it is virtually impossible to arrive at elegant deterrent solutions such as “mutually assured destruction,” which prevented a nuclear war between the Soviet Union and the United States for decades. 

    Yet it is not simply the proliferation of actors that makes deterrence strategies complex, but the proliferation of threats. Unlike in past decades, nuclear weapons are not the only means by which societies can annihilate one another. There are the biologicals and the chemicals. And there is another new dimension of warfare. The end of the Cold War coincided with the creation of the Internet. 

    As deterrent strategies became a thing of the past, every modern nation — indeed, every nuclear-armed nation — was undergoing a decades-long project of taking its infrastructure online. Thirty years later, we have awoken to a multipolar world in which the United States faces peer-level competitors that not only possess society-ravaging nuclear arsenals, but also cyber capabilities that could wipe out our infrastructure. The chaos that follows a significant infrastructure strike would lead to civilian deaths as planes crash, hospitals lose power, and cities descend into darkness. The economic and social havoc would be immeasurable. And even though cyber-attacks shut down critical infrastructure with the flip of a switch, that infrastructure cannot be brought back online with a second flip. The damage is often permanent.

    For this reason, our strategic imagination requires a thorough education in the new vulnerabilities, the new possibilities of destruction. One of the challenges of creating deterrent strategies around cyber-warfare is that it is difficult to envisage such destructive capability. Most of this work gets done in films and science fiction. At the end of the Second World War, at Hiroshima and Nagasaki, the world witnessed the destructive capabilities of nuclear weapons. We never needed to imagine a mushroom cloud over a city; we had witnessed it, and it required scant imagination to know that, no matter one’s nationality, there would be few winners if the great powers of the world chose to unleash their nuclear arsenals.

    Cyber is different. The world has yet to witness the full destructive scope of a strategic cyber-attack, and because the threat is largely intellectualized, and not yet experienced, the likelihood of a misstep is greater. One of those potential missteps, for example, involves the permeability between cyber war and nuclear war. Whereas a cyber-attack may justify a cyber-counterattack, are there circumstances of crisis in which it would justify an escalation — a breaking of what nuclear strategists used to call the firewall? A nation crippled by a cyber-attack could very well respond with a nuclear attack, particularly if an adversary has compromised its ability to respond in kind. Even if a cyber-attack is designed with a limited scope, it is often difficult to control the spread of the attack, resulting in collateral damage that could lead to an unanticipated escalation. This was the case with Stuxnet, the American- and Israeli-designed malware that targeted Iran’s Natanz uranium enrichment facility in 2010. Although Stuxnet proved successful in crippling Iranian nuclear infrastructure, the Americans and the Israelis failed to contain the spread of the malware. It has since attacked industrial capability across Iran and in other Middle Eastern countries.

    Over the past thirty years, American and Russian conceptions of the use of strategic weapons have evolved in opposite directions. While the United States created security strategies that minimized the role of strategic weapons in future conflicts, Russia pursued new concepts and capabilities to expand their roles. While it is unlikely that Putin would resort to a tactical nuclear weapon, his nuclear saber-rattling is not merely rhetoric. It is based upon the current Russian doctrine of “escalate to de-escalate.” The latest version of this doctrine, titled Basic Principles of State Policy of the Russian Federation on Nuclear Deterrence, was released in June 2020. It declares that Russia “reserves the right to use nuclear weapons to respond to all weapons of mass destruction attacks.” A strategic cyber-attack would certainly qualify as a “mass destruction attack,” but the doctrine remains vague as to what else might fall into this category. It also classifies “aggression against the Russian Federation with the use of conventional weapons when the very existence of the state is in jeopardy” as warranting a nuclear response, but this seems a subjective standard, particularly with an authoritarian like Putin who abides by the dictum l’état, c’est moi. When it comes to effective strategies of deterrence, ambiguity is sometimes an advantage but sometimes not. A psychological truism teaches that all ambiguous behavior is interpreted negatively, and Russia’s current strategic posture places a premium on unpredictability, which makes deterrence all the more challenging.

    Since invading Ukraine, Russia has been tempting a mass destruction event. The shelling of Europe’s largest nuclear power plant at Zaporizhzhia was particularly reckless, though it would be a mistake to believe that Russia and its authoritarian allies, such as China, are behaving irrationally. Russia has its reasons and its worldview; some of its thinking is characteristic of great power raison d’état, while some of it is peculiar to Russia and its view of its history, and is less rational. If today we are at the outset of a third world war, this is because our adversaries are fighting to upset and then redefine the global order. There is no clearer way to upset that order than by mimicking the act of creative destruction that created it: the use of a weapon of mass destruction.

    This could be a nuclear attack, a cyber-attack, or even a biological attack akin to another pandemic. If such an attack occurs, it will be accompanied by a narrative propagated by the authoritarians who launched it. The attack itself will matter, but what will also matter are the myriad taboos that it will break, jolting us out of one strategic imaginary and into another. The immediate destruction wrought by a low-grade tactical nuclear weapon would be of less relevance than the raw fact that it would be the first nuclear weapon used since the Second World War. This would upset the global order and so would be a logical step for those authoritarian nations whose goal is to destroy the long-enjoyed global dominance of liberal democracies.

    Ample opportunities exist to avoid these grim scenarios. By understanding the intentions of our adversaries, we prepare ourselves to counter their strategic agenda with our own. Our agenda must account for challenges in information operations, biological warfare, and cyber warfare, but it must never lose sight of the truism that wars are won by armies, not by weapons alone.

    In Ukraine, an authoritarian Russia along with its allies hopes to prove that the sun has set on liberal democracy. Thus far, what has stopped them is a fully mobilized society and a highly motivated army. The fighting has been a hybrid of low-tech (infantry, artillery, armor) and high-tech (drones, precision missiles, artificial intelligence). The heart of any battle, according to Clausewitz, is “slaughter.” A people’s will to endure that slaughter has always proved, and always will prove, a determinative factor in war.

    In the world wars fought since the Enlightenment, authoritarian armies have performed poorly. I do not mean to downplay the military achievements of the authoritarians: Napoleon certainly knew how to fight a battle, and the Germans invented the decentralized, mission-style tactics that the Ukrainians have used to outmaneuver the Russians. But war is a human endeavor, a contest of wills. A society’s will to remain free will always prove stronger than the will of those who would compel it to obey. As we imagine the future, we must not lose sight of this.

    Come Dressed as the Sick Soul of Late Capitalism

    [Innocent wayfarers, beware. This essay contains what are vulgarly known in the trade as “spoilers,” so if for some unfathomable reason you’ve yet to view Succession, Glass Onion, and The White Lotus, tread gingerly and try not to gasp.] 

    It may be the most famous and chewed-over exchange in American literature that never actually took place, at least not in real time. In 1936, when the country was still in the grip of the Great Depression and in no mood for mooniness, Esquire magazine published Ernest Hemingway’s cinematic story “The Snows of Kilimanjaro,” a meditation on mortality and the beautiful consoling desolation of a cathedral mountain, all that. Amid the flashbacks and the regrets, the narrator couldn’t resist sneaking in a catty sideswipe: “He remembered poor Scott Fitzgerald and his romantic awe of them and how he had started a story once that began, ‘The very rich are different from you and me.’ And how someone had said to Scott, ‘Yes, they have more money.’”

    That “someone” was of course Hemingway himself, unable to resist puffing his chest at “poor Scott’s” expense. Earlier the same year Esquire had published Fitzgerald’s revelatory confessional “The Crack-Up,” so it was understood that he was in a precarious state. Fitzgerald’s understandable ire at being mocked and misrepresented — he complained to their mutual editor, the Solomonic Maxwell Perkins — forced Hemingway to soften the passage later for hardcover publication and substitute the weak-water name “Julian” for “poor Scott.” Didn’t matter. Sophisticated readers knew the real score. For decades, the original back and forth in print was patted down and packed into a tidy conversational anecdote, with Hemingway’s snappy comeback considered by many (most?) the definitive retort — a bull’s-eye reality check — to Fitzgerald’s dreamy, minty-green Jazz Age romanticism.

    The verdict has been reversed over time. Is there any doubt today that Fitzgerald, swimming in the aqua sparkle of his own perceptions, had it right and Hemingway was talking out of his pith helmet? It was Lionel Trilling who defended the Fitzgerald case most elegantly. “The truth is that after a certain point quantity of money does indeed change into quality of personality: in an important sense the very rich are different from us…” It was true then and it is even truer in this millennium. The evidence pimp-slapped in our faces is that the rich are more different from the rest of us than ever before — they are evolving into a mutant species. 

    As the middle class is increasingly whittled thin — witness union jobs being replaced by a gig economy, the coronation of corporate executives, the premature knighting of Palo Alto wunderkinds, the emergence of Davos Man, and the saturation bombing of the airwaves with ad blitzes for online sports gambling and mega-millions lottery draws — the chasm is widening yearly between the have-somethings and the have-it-alls. It has only gotten worse since Covid, only widened. Tech billionaires, hedge funders, private equity predators, Saudi princes, Russian oligarchs, the former president who besmirched the office, and similar excrescences of turbo-charged late-stage lift-off capitalism have top-loaded this century into a second Gilded Age, one that even the ongoing global recession hasn’t been able to dent.

    A second Gilded Age might seem to be a bonanza opportunity for novelists, for some young, hip, penetrative Gen X/Gen M/Gen Z/Gen-whatever Edith Wharton to train her spy glasses or AR goggles on. But perhaps the spectacle of the mega-rich is simply more than contemporary novelists (a more inwardly investigating crew) can consolidate. The traditional big social novel of manners and disturbing flutters in the drawing room may be too antiquated an undertaking. The pursuit of great wealth and the cruel delight of writing the little ingrates out of your will largely disappeared from serious fiction, as serious fiction itself has been eased into the infirmary. The strenuous toils of Theodore Dreiser (The Titan, The Financier) belong to an iron age. The stately mansions of later John O’Hara lie empty and neglected. Inherited money inhabited the background of Louis Auchincloss’s novels, but it was a listless resource, carpet-worn. 

    The contemporary remakes and invocations of The Great Gatsby — will we ever be rid of them? — offer retro cosplay that’s unable to capture the lyrical lift of the prose, the goosy thrill of Puritan restraint being kicked to the curb and the human body flowing free as if for the first time. The field was left to commercial pop fiction to project fantasies of the rich, virile, fertile, and resplendent in potboilers whipped to a mad fandango by Jacqueline Susann, Judith Krantz, Harold Robbins, Shirley Conran, and other pagan immortals of the airport paperback rack. Some of the anecdotes in these concoctions may have been pinched from honest gossip but the overall effect was of escapist make-believe.

    For a brief fun time, before everything got engorged, the true signifiers of American wealth required a keen acquisitive eye to spot. They didn’t call undue attention to themselves, but were hostessy and understated, niblet-sized and exquisitely prepared. Truman Capote, who prided himself on being the keenest double agent inside the velvet folds since Marcel Proust, informed an interviewer that what separated the rich from the rest of us primates was their serving of tiny vegetables: “Delicious little tiny vegetables. Little fresh-born things scarcely out of the earth. Little baby corn, little baby peas, little lambs that have been ripped out of their mothers’ wombs.” Tom Wolfe’s New York magazine account of Leonard Bernstein’s fundraiser for the Black Panthers — “Radical Chic” — introduced us to the party with a rapture over the hors d’oeuvres. “Mmmmmmmmmmmmmmmmm. These are nice. Little Roquefort cheese morsels rolled in crushed nuts. Very tasty. Very subtle. It’s the way the dry sackiness of the nuts tiptoes up against the dour savor of the cheese that is so nice, so subtle.” Not that there wasn’t the occasional representative of wealth with more democratic taste buds. William F. Buckley, Jr., whose sailboats were stocked with wine and champagne before they embarked into distant latitudes, was addicted to a grocery store brand of peanut butter called Red Wing. But this seems to have been more of a quirky personal indulgence, not something he’d slather on Triscuits when Mrs. Kempner came calling.

    Prime time television, bless its bionic heart, filled the void left by serious fiction and classic Hollywood films and then, as television inevitably does, overfilled it. On network TV, extreme wealth was often troweled out as a bestowal of the golden promise of Southern California on the fortunate few, whether they be oil-rich lucky-strike yokels (The Beverly Hillbillies) or crime-solving playboys (Burke’s Law). The Reagan era became the heyday of the rich clan soap opera that had the swoosh and swoop of a pink poodle Ross Hunter production: Dallas, Dynasty, Flamingo Road, and Falcon Crest, where the matriarch was played by Ronald Reagan’s first wife, Jane Wyman. The storyline from Falcon Crest’s debut episode: “Wealthy vintner Angela Channing feels threatened when her nephew Chase Gioberti returns to Falcon Crest for his father’s funeral.” Angela Channing — a name that could ring church bells — was well advised to be on guard. On all of these soaps, barely a season went by when there wasn’t a misplaced nephew or niece or illegitimate son or daughter popping out of the topiary to demand his or her rightful due. It was also the era of the lavish mini-series adaptation, such as Lace, based on Shirley Conran’s bestseller, and remembered today for Phoebe Cates’s sneering icebreaker, “Which one of you bitches is my mother?”

    To complement the fictional exploits of the coiffed, avaricious, and scheming no-gooders, there gurgled up a new genre of reality television, pioneered by the guided tour through Lifestyles of the Rich and Famous, which premiered in 1984 and ran for over a decade. The popularity of the series ratified that in the 1980s it was no longer enough to be rich or famous; you had to be rich and famous, for that was the new American Dream. Each syndicated episode of Lifestyles was like a fawning vacation brochure or magazine spread with a bumptious, trumpety voiceover supplied by its irrepressible and characteristically effusive British host Robin Leach, whose catchphrase motto “Champagne wishes and caviar dreams!” was like a wedding toast. Lifestyles of the Rich and Famous might have been relegated to the slag heap of a period novelty if it hadn’t inspired copycats such as MTV’s Cribs (2000 to present), documentaries about the palatial lives of fashion designers (Valentino: The Last Emperor, in 2008), and, most infectiously, the “staged reality” extravagant lunches and battle royals of Bravo cable’s Real Housewives franchise produced by Andy Cohen, the David O. Selznick of Ryan Seacrests. It is hard to keep track of how many cities have rich Real Housewives emerging from limos and going Godzilla. Every urban squad of prancing, feuding divas takes valuable time out from babying their tiny pedigree dogs to attend restaurant openings, disparage their frenemies, fling drinks in each other’s unreal faces, and point menacingly long fingernails as they trade heavily bleeped-out trash talk. The campy, pool-splashing, hair-pulling catfight between Krystle and Alexis in season three of Dynasty was the precursor for every “Real Housewife” dominatrix match.

    Cementing fan loyalty and tabloid fever between seasons are the real-life headline scandals that leave cracks in the fake-real facade, as with the arrest and imprisonment of Jen Shah (Real Housewives of Salt Lake City) and Teresa Giudice (Real Housewives of New Jersey), and the commotion on Real Housewives of Beverly Hills over “powerhouse attorney” Tom Girardi, the then-husband of ice-blonde aspiring disco goddess chanteuse and BH housewife Erika Jayne. Girardi was accused of stealing money from clients, some of them desperate and destitute, and a roiling undercurrent of suspicion held that Jayne, hardly a spotless lamb, had to be aware of what hubby was up to — the old fool had been bankrolling her vanity career. It made for many squinty, tense pauses and spitfire moments on Real Housewives of Beverly Hills. In relating this, I acknowledge that to the uninitiated it may sound as if I’m speaking Romulan — as when I tried to explain Buffy the Vampire Slayer for the dubious enlightenment of the late John Simon — but this is the streaming canal in which some of us oar.

    Although in toto these reality TV fishbowls reveal sociological glints of how we live now — or rather how they live now — their blatancy appeals to low-information, viral-clip viewers in need of incessant cheap kicks. For an immersive experience of how the richy rich think, act, behave, misbehave, maneuver, socialize, enjoy their toys, ignore their children, speak in code (“like real Americans, they always talked in code,” to adapt an insight from Norman Mailer), maintain the pecking order, monitor the perimeter, and forge a phalanx whenever they move in concert, only high-budget, hierarchy-obsessed, mission-driven dramas and satires will do. Only they can muster the necessary resources of screenwriters, directors, actors, costume designers, location scouts, etc., to evoke and enter the distortion field of spoiled monsters and damaged psyches.

    For a curated verisimilitude, Tom Wolfe-worthy signifiers are strategically implanted in the most fanatically detailed film and television chronicles of the super-rich, whether it’s the strictly-business “stealth wealth” (as if there is such a thing) black ball caps that the male Roys wear screwed tight on their heads in HBO’s Succession, which has concluded its triumphant four-year run, or the Audemars Piguet Royal Oak Offshore Camouflage timepiece and Randolph Engineering Aviator sunglasses that Bobby Axelrod (Damian Lewis) sports in Showtime’s Billions. These accouterments of killer cool seem to have been issued by the murders and acquisitions division to princelings and upstarts who compare themselves to modern-day pirates, gangsters, apex predators, and fighter pilots, and pride themselves on their agile wits, their mastery of Machiavelli, Sun Tzu, and Jedi moves (yet have a spaz if their favorite bottled water arrives a tad lukewarm). Underachievers are kept under constant notice. The trading floor at Bobby Axelrod’s Axe Capital is a glassed pavilion dojo where only the top survive and flushed-out schmuckos take the walk of shame carrying their belongings in a box. Unreluctant to deploy blackmail and hardball tactics, Bobby Axelrod is Michael Corleone with a bouncier step. Neutralize your foes with extreme prejudice and the gormless boards of directors will fall like dominoes. The scope of the wealth and global designs of Axelrod and his rival plunderers make “the masters of the universe” in Tom Wolfe’s The Bonfire of the Vanities look like tiddly-winkers.

    Tom (Matthew Macfadyen): “Umm, do you want…a deal …with…the devil?”

    Cousin Greg (Nicholas Braun), after a pause: “What am I going to do with a soul anyway? Souls are boring.” 

    Succession, Season 3, Episode 9. 

    Succession is less of a bro fantasy than Billions, more of an acid bath where illusions and ideals are dissolved and sentimentality separates from the bones, which is why many deem the series cold, heartless, and intractably cynical. As if Jonathan Swift were some sweetheart. Created by Jesse Armstrong, Succession’s line of attack fuses the scabrous, scorpion invective of The Thick of It, In the Loop, and Veep with the infighting, stylized lingo, and devious subterfuge of peak David Mamet. No series has consistently shown greater gunslinger skill with caustic sound bites while keeping tabs on the chief imperative. In Mel Brooks’s Silent Movie, in 1976, the Hollywood studio modeled on Gulf & Western was named Engulf and Devour, which could double as the corporate handle for Succession’s Waystar Royco conglomerate, with its portfolio of theme parks, cruise ships, and troubled film division. (All film divisions are troubled.) Its crown jewel is the innocuous-sounding American Television Network, or ATN, a red-meat, right-slanted “bigot spigot” cable news operation capable of driving the national dialogue, dictating the next president, and dragging even the loftiest reputations through the muck. Any resemblance to Fox News and the Murdoch family is strictly intentional and the ATN lineup struts its own Tucker Carlson/Sean Hannity anchor stud, a Nazi-flirting smug vacuity who bears the perfect evil name of Mark Ravenhead.

    Clinging to the throne and constantly chafing in irritation and fury at the fools around him, many of them family members, is Royco patriarch Logan Roy (Brian Cox, an aging lion who can inject menace into a simple “Uh-huh”), an old-school, analog-bred media magnate who minimizes mind games to go for the throat or the groin or, preferably, both. Like HBO’s other iconic anti-hero, Tony Soprano, Roy has an animal cunning — a psychological sniffer — for reading situations and subtle momentum shifts, for who’s with him, who’s against him, who’s wavering, and who needs to be gang-planked. Animal is the word. “Boar on the floor!” is Roy’s sadistic idea of a parlor game, and he vows to go “full fucking beast” on his foes. When he prowls the floor of the ATN newsroom in ominous sunglasses, one character says, “It’s like Jaws if everyone in Jaws worked for Jaws.”

    Seeking to extricate itself from its attachment to old media (newspapers, local television stations, basic cable, movie production), Waystar Royco, mighty as it is, fears being gobbled up in a single gulp by some digital baron “Zucker-fuck.” The younger tribe of Roys — primarily sons Kendall (Jeremy Strong) and Roman (Kieran Culkin), and daughter Siobhan, better known as Shiv (Sarah Snook) — angles to divorce itself from the doom and chaos they’ve done so much to sow and join the newer species of super-rich that originated in Silicon Valley or some other incubator of myth-hype, red-pill hubris, and algorithmic domination. This gnawing awareness of the vexing gap between the traditional 1% and the top 1% of the 1% is laid out raw in Succession when Roman, in one of his conversations with his sexty executive mommy-figure Gerri Kellman (J. Smith-Cameron), promises that if their scheme pays off, “You will get properly ‘fuck you, fuck you I-don’t-even-care-about-climate-change I’m-in-New-Zealand-with-my-own-private-army’ rich. Not like some pathetic asshole beach house on the Vineyard rich.” A beach house on Martha’s Vineyard, so lame.

    The New Zealand reference in Roman’s spiel alludes to Peter Thiel, whose plans to build an extensive luxury lodge alongside a mountain-surrounded lake as part of his apocalypse insurance policy have been thwarted by local authorities. Thiel is not alone in preparing blueprints for when everything goes kerflooey. Douglas Rushkoff’s recent Survival of the Richest: Escape Fantasies of the Tech Billionaires is an account of an elite group of doomsday preppers who intend to live out the coming social disorder and pestilence in scenic, remote compounds that will be self-sustaining, impregnable to zombie invasion, and outfitted with the utmost in civilized comfort as the earth bakes. In times of trouble you can always count on the rich. The undying wet dream of wealthy anarcho-libertarians has been for their own Galt’s Gulch, the mountain hideaway in Ayn Rand’s Atlas Shrugged where society’s elite doers, makers, and dissenters isolated themselves from the shabby ranks of takers, losers, and liberal simps with their meeching platitudes. Some of the Gulf States are planning glass-domed mega-cities in the desert, powered by wind and solar, that will contain their own amphitheaters and five-star hotels. “The plan, distilled, is to become the global headquarters for the mega-wealthy,” Scott Galloway writes in his newsletter No Mercy/No Malice. That will do for most, but for a few visionary billionaire survivalists the world is not enough, to borrow the title of a James Bond film. Elon Musk, as everyone knows, has made it his mission to colonize Mars, and Amazon’s Jeff Bezos and Virgin Galactic’s Richard Branson are competing astro cowboys with their satellite launches and dreams of orbital tourism.

     Since space colonization is going to take a while and underground compounds lack eye-candy and are indistinguishable from the underlit subterranean nerve centers in Marvel movies and dystopian sci-fi, the preferred getaway in movies and TV for rest, relaxation, and inviolable refuge is a private secluded island that combines the fortress capabilities of Dr. No’s Crab Key with the lush splendor of a tropical paradise. Guests arrive by invitation only, their presence a privilege extended by the host, who cloaks an ulterior motive or two beneath the too-hearty bonhomie. Everything is arranged to perfection, the hospitality staff seamlessly appearing and reappearing as if on winged feet. Or smooth rollers. Robot sherpas serve as the luggage conveyors in Glass Onion, Rian Johnson’s successor to the improbably successful Knives Out, once again centering a tangerine-skinned Daniel Craig with a preposterous Southern gumbo accent trying to solve an Agatha Christie-ish whodunit where clues appear and vanish like magic coins. As in Christie’s And Then There Were None, the great-grand-mommy of this elimination game, a group of unsuspecting strangers have been summoned for a remote outing only to find themselves at the mercy of machinations that produce many a scream and squeal.

    Here the host is billionaire Miles Bron, played with vapid gusto by Edward Norton and blatantly modeled on Elon Musk, a purported galaxy brain tech visionary who has hoodwinked the press and the public into believing that his company’s innovations all sprang from his fecund brow. Bron’s guests are fellow futurists and disruptors — “Disruptors have assembled!” he cries, as if hailing the Avengers — but there is no question that he is the alpha dude disruptor supremo, its Tony Stark. (One of the many Twitter nicknames hung on Musk is Phony Stark.) A culture-philistine (to use Nietzsche’s term) whose Rothko painting is hung upside down, Bron has come into possession of the actual Mona Lisa, which he hopes to deploy as an ace card to impress world leaders and broker global peace or something equally grandiose. The Mona Lisa in its translucent case is just a MacGuffin to keep everyone’s eyes off the misdirection.

    As in the later, glossier Hollywood adaptations of Agatha Christie (the original Murder on the Orient Express and Death on the Nile, not the clomping Kenneth Branagh remakes), Glass Onion unfurls a bright tapestry of brittle pleasantries, shadowy motives, fidgety gestures, ominous foreshadowings, and devious mind games, heightened by the showy entrance of an unexpected intruder — Janelle Monáe, who is tasked with projecting old Hollywood Rita Hayworth/Ava Gardner-ish glamor and delivering a flat tire. An old-Hollywood homage as well is Johnson’s penchant for peppering the film with amusing cameo pop-ins: Hugh Grant, Jared Leto, and, making his final screen appearance, Stephen Sondheim, whose co-written script for The Last of Sheila was an inspiration for the Knives Out enterprise. The dolled-up cast seems to be having a grand time, which is part of the Easter egg attraction of these films for entertainment buffs and makes them a harmless exercise in slumming for those of us observing from the sidelines.

    Meanwhile HBO’s The White Lotus miniseries — two seasons thus far, with a third season under construction — also stacks its cast with familiar movie-TV faces but it mixes in provocative newbies to freshen up the entourage and add cosmopolitan flavor. Created, written, and directed by Mike White, whose métier is the comedy of creeping unease, The White Lotus pulled off the feat of formulating its own aesthetic from the outset, a cushy ambiance that provides its own commentary on the action. The affluence on display is a confluence of air and attitude, a state of grace soon to come unpeeled. The new wealth — generated by crypto, software, apps, TikTok and Instagram product endorsements, sponsorships, and celebrity appearances, OnlyFans stardom, and new sluices for money laundering — seems to have been conjured from nothing, with no apparent effort from its beneficiaries. It seems to flow wherever they go, and for these surfers of invisible, undulating currents, expensive possessions (art, rare wines, diamond tiaras, Architectural Digest interiors) are less existentially desirable than the supreme ease of movement. They’ve come to expect everyday life to be a series of seamless transitions, like a single gliding Steadicam shot from beautiful dawn to beautiful dusk, not that they pay much attention to either. They are too busy contemplating the wonder of their being and why it doesn’t make them happier, more receptive.

    One of the ingenious aspects of The White Lotus is how it reveals the way narcissists become not just spoiled but infantilized by their idyllic lifestyle fantasies. The slightest hiccup in service or hitch in itinerary and they turn into crybaby complainers, offended by every unscheduled raindrop. It is also the series that has most shrewdly incorporated the incel as a human pathogen and negative-energy capsule. Integral to the series’ sensibility is a pervasive, nullifying affectlessness — a cool, glib, eyes-shaded anomie that comes across as so Californian. Caring about something or someone risks being considered uncool; just breathe in and out the moment, dude. Practicing mindfulness only staves off the demiurges for so long, however. Those looks your spouse is giving you when she lifts her Ray-Bans — they spell trouble.

    The plot intrigues of The White Lotus stress-test the vacationers until they shed their protective coating and expose how their wiring really works under crisis or when temptation beckons. Jungian shadow elements can be teased out most tantalizingly for prestige-TV viewers in a balmy picture-postcard getaway where the characters’ inhibitions and self-defenses melt away. This is the overriding advantage that serial television has over feature film: revelations can be hinted at and winkled out over the course of several episodes rather than blurted out in a single bolt. Pressure becomes more systematically applied. The White Lotus also devised its own clever variation on the Agatha Christie formula. Its first season opens with a coffin being loaded on a plane and the question becomes, Who, among the vacationers we’re about to meet, leaves in a box? Who’s the mystery corpse?

    Jarmo (Henrik Dorsin): I’m very rich. Yes, let’s not beat around the bush. I’m very rich.

    Ludmilla (Carolina Gynning): How rich are you?

    Jarmo: Oh, I’m so fucking rich!

    —shipboard conversation, Triangle of Sadness

    Ruben Östlund’s Triangle of Sadness — the title, not as portentous as it sounds, refers to a geometrical frown patch that can appear between the eyebrows — takes place on a luxury yacht where a soiree of privileged wankers is being feted by an infinitely patient crew. (Luxury yachts are for the depraved-rich genre what opera houses were for Balzac.) Their party is joined by a pair of bodies beautiful, Carl (Harris Dickinson) and Yaya (Charlbi Dean Kriek), he a male model, she an Instagram influencer — a matched set of dollhouse cliches to go with the other loaded stereotypes lounging on the deck. For all of its auteurist mojo handjive (individual chapter titles, a long introductory scene that exhales documentary dead air, a distended running time, a subverting Ironic Twist), the film is a self-pleased demonstration of shlock instincts and facile follow-through. Its cavalier knowingness thinly camouflages a rather sophomoric class struggle on a Ship of Fools (or, if you prefer, a Love Boat of the Damned) which coarsely degenerates into a duel of capitalism vs. Marxism quotation-mongering during a raging storm between the socialist Captain (who doesn’t need a proper name in the credits, given that he’s portrayed by an instantly identifiable Woody Harrelson) and a Reagan-idolizing Russian oligarch (Dimitry, bellowed by Zlatko Buric).

    As the yacht is buffeted to and fro, their audio chatter counterpoints the passengers’ volleys of projectile vomiting — pea-soup geysers that outdo The Exorcist’s Linda Blair in spray radius and velocity. After the boat goes down, sparing us any further shots of Harrelson slovenly smacking his lips, a few ragged survivors wash ashore on what appears to be a deserted island. A power reversal ensues as Abigail (Dolly De Leon), the yacht’s lowly cleaner and “toilet manager,” takes charge, and the catered-to have to fend for themselves and barter for favors. Roughing it doesn’t come easy to these softies. This dictatorship of the proletariat seems destined to meet a premature end when it is disclosed that a spa resort lies nestled on the other side of the mountain — who knows, possibly another White Lotus. No matter how far away and Robinson Crusoe-ish an island may seem, cabanas spring up like toadstools and beach umbrellas are planted like victory flags. Global capitalism will not be denied.

    Where Succession, Glass Onion, Triangle of Sadness, and The White Lotus have their slapstick, farcical sides, their jarring pratfalls, The Menu is staged with the solemnity of a Passion Play, which gives its flashes of dark humor far more incision. Directed by Mark Mylod, who has honed his needlepoint precision directing numerous episodes of Succession, The Menu presents a sacrificial rite disguised as a unique dining experience — a masque of the red death with impeccable table arrangements and flawless plating. As with The White Lotus, Triangle of Sadness, and Glass Onion, the experiencers in The Menu are a group of achievers and hangers-on who fancy themselves inside dopesters of discernment. They’re ferried to a remote island — but of course, where else? — for a special multi-course meal prepared by chef assoluto Julian Slowik (Ralph Fiennes) and his ninja staff. Before each course the chef offers a brief introduction to the dish and a homily intended to deepen their appreciation of the thought, the finesse, and the distinctive and locally sourced ingredients. “A course of a single raw scallop perched on a craggy rock and surrounded by carefully tweezed seaweed and algaes is virtually indistinguishable from an actual dish at Atelier Crenn, a San Francisco restaurant with three Michelin stars,” The New York Times helpfully reported. Authenticity in details adds to the absurdism of what transpires, the flattering foreplay for la grande bouffe.

    Although the cult of the (male) chef has taken a deserved blow in recent years with the sexual harassment allegations against numerous former television cooking-show celebrities, the mystique remains, which Fiennes wears like an untarnished crown. His Chef Slowik is impresario, emcee, choreographer, wizard, and samurai of the cutting board. His authority radiates from a tight core of uber-willpower, a testimony to Fiennes’s gift for containment and slow release. It’s frightening how his tight smile sometimes lingers a beat too long, surveying the room. Slowik’s control only begins to betray hairline cracks when some of the feeders, instead of accepting their roles as congregants, begin to behave like customers. They ask for additional seasoning, request substitutes, or get overly chatty and show-offy. He declines to alter the menu with a Caesar-like smile that tenses as the evening proceeds and his patience snaps. In the furious pride and concentration that go into his dishes, Slowik resembles a five-star extension of Seinfeld’s Soup Nazi. But where the nicknamed Soup Nazi was satisfied to boot and permanently banish annoying customers from his establishment, Fiennes’s Slowik metes out the punishments of a Lord High Executioner. One of the victims, a past-his-prime action star named Georgie Diaz, played by John Leguizamo (and based, says Leguizamo, on Steven Seagal), is condemned for appearing in a film that Slowik saw on his day off, hoping to be entertained. He was not entertained. He was most disappointed. For this, Diaz must pay. It is a tribute to Fiennes that he makes this explanation sound eminently reasonable. Anyone who has sat stonily through an Adam Sandler comedy can sympathize.

    Like the passengers in Triangle of Sadness and the vacationers in The White Lotus, the A-listers in The Menu have become so accustomed to telling others what to do that they don’t know what to do when they have no one to boss around — when they’re the ones being bossed. Even the wait staff doesn’t indulge their piques. The guests’ inner resources have so shriveled from neglect that most of them are unprepared for the impact of what the critic Marvin Mudrick once called “life direct,” and by the time their instincts kick in it’s too late — they’re fodder. The interesting twist in The Menu is not how feebly the diners resist and how quickly they capitulate, but that they and the staff start to accept that perhaps they have earned their place on the pyre. They whimper, they plead, they submit. This is the price for not having lived the right life. This makes for neat allegory, and the tidy violent end — a choreographed die-in — panders to an audience’s yearning for retribution that real-life villains and greedheads seldom face. 

    The sole escapee from the bloodbath and conflagration is Anya Taylor-Joy’s Margot, who munches a simple cheeseburger that Slowik has made for her with care and devotion. By her unpretentious all-American taste and cheeky insubordination, Margot is spared the bonfire of the vanities. She is also granted absolution because no one wants to see Anya Taylor-Joy killed off, just as no one watching season two of The White Lotus would have wanted to have Aubrey Plaza get the tarp pulled over her. In a difficult, perilous time, Hollywood needs a few unexpendables to keep audience identification from being irreparably severed. Leave all-out nihilism to the mad-hatter satirists.

    The problem with most extreme movie satire is that it has nowhere to go but into overkill. From Dr. Strangelove to Don’t Look Up, the nerviest expeditions rely on a cataclysmic finale to spike their message in the end zone. The depiction of class warfare in High-Rise, from 2016, adapted from J. G. Ballard’s novel, descends into chaos, anarchy, orgiastic stabbing, and the roasting of a dead dog’s leg. It’s that kind of film. Glass Onion, after preening its insouciance through reams of repartee and exposition, climaxes in a giddy orgy of glass sculpture smashing and the fiery destruction of the actual Mona Lisa, a priceless touchstone of Western art torched because of a billionaire’s vainglorious ego. 

    The largest discharge is not blood or fire or showering debris but vast deposits of merde. When push comes to shove for the ultra-rich, all crap literally breaks loose. It is almost a psychoanalytical banality, these filmmakers’ preoccupation with this conjunction of wealth, shit, and mortifying incontinence. “We [know] about the superstition that connects the finding of treasure with defecation,” Freud wrote in “Character and Anal Erotism,” to reinforce the point that, on the subconscious level, “feces have always been understood as a form of currency.” (I owe this to Simon van Zuylen-Wood’s jaunty, punny essay on “Feces and the Gold Standard: A Psychological Explanation of Goldbuggery” in The New Republic in 2012.) The Magic Christian, the novel written in 1959 by Terry Southern and adapted into a now-forgotten film a decade later with a mind-boggling cast (Peter Sellers, Ringo Starr, Laurence Harvey, Roman Polanski, and Raquel Welch are among the duped), ends with desperate, greedy saps fishing for pound notes scattered in a large tub filled with urine, fecal waste, and other unpleasantries. The burly Russian oligarch in Triangle of Sadness, who has made his fortune in fertilizer, proclaims himself “The King of Shit!” He keeps crowing the word as if to rub his fellow travelers’ noses in it. The film soon rubs our face in it, too. Its key punchline edit comes when Harrelson’s Captain extols the $250 million craft being pummeled by the elements and the next cut is to a miserable passenger crouched on a soiled toilet. Freud also hypothesized that misers were those who held in their stool as children, hoarding money in adulthood, which may support the revelation in Succession that Logan Roy died while trying to fish his iPhone out of a clogged airplane toilet. The symbolism is almost too much.

    This scatological imperium may have been foretold in a Hollywood film that many of us scoffed at in its day, and rightly so. In 1974, at the weary end of the disaster epic The Towering Inferno, the disillusioned architect played by Paul Newman proposed that the burnt-out, windows-shattered one-hundred-and-thirty-eight-story hulk be left standing as “a kind of a shrine to all the bullshit in the world.” Even in our Watergate-era cynicism, what naifs we were then. We little suspected how much worse would be in store. The world’s bullshit supply was still in its developmental stage, amassing its resources to achieve full sentience, establish free-market capitalism as the undisputed queen of the ball, and extend privatization into every sphere and cranny of endeavor until much of mankind would be superfluous and disposable, supplanted by intelligent machines.

    It is in the nature of satire to go too far, but now “too far” scarcely feels far enough, given the enormity of the wealth accruing to those at the top of the diamond pyramid and the social fissuring below. If there is something punitive and body-horroring about so many of the films and series about the super-rich, it may reflect the frustration that no matter what the crimes and excesses of the Moneygods, karma isn’t coming for them — the fix is in. Their escape pods are loaded up and at the ready. So karma has to be dealt out on screen, with as wicked a hand as necessary and a wham-bang finish. Rough justice may not be real justice, but you take your reckonings where you can get them.

    After Neurocentrism

    Some thirty years ago, with the launch in 1990 by the Bush administration of the “Decade of the Brain,” neurocentrism took hold in the Western world — America, Japan, and Europe. It held on well into the aughts. Neurocentrism is the belief that the brain is the seat of the mind, that they are in some sense the same entity, and that therefore one can understand mental and psychic life by understanding the brain, which is often dubbed the most complex object in the universe, with its estimated eighty-six billion neurons and hundred trillion or so synaptic connections. As a consequence of discussions about the brain already underway in the 1980s between upper-level American science agencies, councils, and associations, the government awarded generous funding for research in neuroscience, psychology, and neurology. It aimed in large part to address the staggering cost of neurodegenerative diseases, which was (correctly, as it turned out) predicted to increase massively over the next decades, as well as to study the etiology and the effects of neurological disorders and accidents.

    An underlying assumption of the program was that it would constitute one of the ultimate achievements of humankind to unravel the brain’s functioning. A similar hope was pinned on genetics, with the Human Genome Project launched in 1990, with a similar equivalence posited between genomes and selves. If one came to grips with the biology, in short, one would finally understand the nature of life, identity, and consciousness. There was an essence that one could seize. Science would yield ultimate truths. In those years a reductionism of mind and life to their constituent parts prevailed; it was galvanized and encouraged by the optimistic ethos of the time. Popular books about the brain, and also about genetics, flourished. To be sure, there existed corners of resistance to reductionism, in the name of phenomenological complexity, with philosophers of mind exploring the nature of consciousness — for instance, a Journal of Consciousness Studies was founded in 1994, which provided a forum for collaborations between philosophical speculation and empirical data, and interdisciplinary conferences on the major topic started taking off then. But this resistance took place within rarefied academic spheres. Neurocentrism was easier for non-specialists to comprehend. 

    Meanwhile the fields of mind sciences grew, and multiplied, separately from biological neurosciences, insofar as the term “mind” designates not a physical entity but the abilities that allow organisms to function in and interact with the world. A “Decade of the Mind” was announced in 2007. The cognitive sciences yielded modular models of the mind, represented for a while as subdivided into mechanisms supposedly developed during the Pleistocene. Importantly, these cognitive sciences were a formidable and fertile response to the behaviorism that had preceded them, insofar as they supposed, in contrast to behaviorist assumptions, that there was indeed such a thing as a mind that could be studied. The association of the cognitive sciences with neuroscience then gave birth to cognitive neuroscience, which made use of imaging technologies to explore mental functions.

    The appearance in 1991 of functional magnetic resonance imaging (fMRI) — a technology that allows one to observe the brain in action — was a historical revolution whose impact on the imagination was not unlike that of the moon landing. It seemed to announce a bold, bright future when one could finally peer into places that had never before been visible. The first steps into this future were taken, however, with a measure of presentist and materialist hubris, and often at the cost of philosophically informed subtlety. Now, three decades later, research and funding continue, and rightly so — but the mood, the priorities, and the assumptions have radically changed. And so it is time to take stock of where the mind sciences are today, and what place these sciences now hold in the collective imagination, especially in light of the bewilderingly rapid evolution of computer science and, most recently, of artificial intelligence — an expression whose assumptions also need parsing.

    At its height, neurocentrism in its excitement generated countless claims about the cerebral location of mental functions, delivered in the media as so many revelations about the “place” of the deepest aspects of human experience — cognition, language, volition, emotion, and even artistic and religious feeling. Experience was “in the brain” and therefore, somehow, better understood, or so went the claims. In this respect, the historical moment was at least structurally reminiscent of phrenology, popular in the early decades of the nineteenth century until it was dismissed as pseudoscience some years later, according to which the brain’s divisions determined personality, qualities, and faults, and mental functions had specific, visible locations and could be measured by the bumps of the skull. The neurocentrist excitement was also reminiscent of the older urge to posit a “homunculus” inside the organism to account for its operations — no matter that this created an eternal circularity, since assigning functions to some agent or structure in the brain begs the question of how that agent or structure works in the first place. It begs the question, too, of how one can determine a causal link between the structure and the function. Extracting indubitable causal connections out of a myriad observed correlations remains, as it happens, a central puzzle and problem for brain science, as indeed it is for all sciences.

    The excitement that was provoked by fMRIs is understandable. Thanks in part to these and other novel imaging technologies, highly important work has emerged since the 1990s, in all fields of neuroscience as well as neurology, in brain anatomy and physiology. A focus on neural networks replaced the erstwhile localizationism, and the identification of the functions of decidedly interconnected brain areas became, and continues to become, more and more fine-grained. Advances in neurophysiology fed into the development of neuropharmacology and of second-generation targeted medications for psychiatric disorders — though the functioning of these pharmaceuticals remains inadequately understood and their use can be controversial. New tools such as optogenetics were born to study the behavior of individual neurons. There developed, and there continues to develop, a better understanding of the etiology of various dementias, despite their continued intractability. Since 1990, there has also been the invention of deep brain stimulation (DBS) to counter some of the symptoms of Parkinson’s, and of transcranial magnetic stimulation (TMS), a non-invasive technique that acts upon and helps to decode neural activity during specific tasks, such as recall or attention. The genetic basis of some devastating neurological diseases, such as ALS or Huntington’s chorea, emerged out of these significant studies. The use and applications of computational neuroscience grew tremendously, allowing for the refinement of our understanding of attention, orientation, and vision, and for the creation of brain-computer interfaces that allow paraplegic patients to move again — and the tools and concepts that it deploys grow in sophistication every day.

    The various imaging technologies, moreover, are growing increasingly refined, in part because awareness has also grown of how complex it is to interpret the images that these technologies yield. Indeed, it is much more of a cultural given today than it was in 1990 that there are no readable maps of any kind without signposts, nor without readers who know the signposts — and that neither signposts nor readers are devoid of bias. A map is never a one-to-one rendering of what it represents. Studies of the brain are not limited to the interpretation of images, in any case. They can provide support for studies that take place at multiple levels — genes, molecules, neurotransmitters, single neurons, and neural networks — and within numerous interconnected subdisciplines, such as neurophysiology, developmental neuroscience and psychology, social psychology, cognitive and affective neuroscience, computational neuroscience and robotics, all of which converge by now with the many fields pertaining to the cognitive sciences and psychology. With the development of epigenetics, the impact of the environment on infants’ and children’s psychic development, and thence on lifelong mental health, has become much better understood, too. The gene-centric biological determinism that had initially characterized the decade, and that could be used to justify conservative social and educational policies opposing investment in public education, not to mention outright bigotry, thus ended up being undermined by some of the very projects that the Bush administration had underwritten.

    In other words, neurocentrism is no longer the calling card of the mind sciences. It had come along with public enthusiasm for all things “neuro,” which for a few years became a ubiquitous, homunculus-like predicate in the media and publishing worlds. It then inevitably waned, along with public enthusiasm, as neuro-fatigue set in. Public interest in the neurosciences and psychology does continue to simmer today, especially when they address general issues of psychology and well-being, psychiatric disorders and neurodegenerative diseases, but the reductionist enthusiasm has ebbed — much for the better.

    Now we have a different problem. The recoil went too far. Related to the change of mood is the less welcome growth of skepticism with regard to scientific research generally. Broadly unaware that scientific results and interpretations are provisional, the public tends at once to overvalue and to undervalue the scientific enterprise — attributing to it a capacity to deliver certainty, as it seemed to do at the height of the neurocentric decade, and dismissing it when this desired certainty is not at hand. This misconception of science is precisely what feeds into pseudosciences like the phrenology of yore and the dangerous myths of today, from anti-vax theories to the denial of climate change. An awareness of what scientific research is and what scientists do is necessary for the proper calibration of trust in the value of their expertise. This holds true for the mind sciences as well. These need to be, and in fact increasingly are, informed by philosophical argument and humanist concerns, since by its very nature, the study of the human mind at work upon itself remains a minefield of confusions. 

    These confusions regarding the scientific study of mind are far from new. The study of the brain as it has been practiced since the late nineteenth century, when the neuroscientist Santiago Ramón y Cajal ushered in modern neuroscience with his discovery of the neuron, is not the study of the mind and the psyche. The study of the brain zooms in on the organ and its microscopic components. The study of the mind, by contrast, starts from observations of human behavior. Studying the brain does indeed provide clues about the mind, but the obverse does not necessarily hold, especially given how complex the brain is. The territory has been mapped, the places named, the sulci and gyri identified, some functions recognized and some mechanisms understood, to some extent — yet so much of it is still unknown. In 2004, the neuroscientist Gerald Edelman entitled a book on consciousness Wider than the Sky: The Phenomenal Gift of Consciousness, a title drawn from the poem by Emily Dickinson — “The Brain — is Wider than the Sky” — that is unavoidable for those who want to conjoin humanist musings about meaning with the hard-edged world of scientific experiments. Edelman offered an attempt, one of many at the time, to show how the brain gives rise to the mind.

    But this does not mean that the brain and the mind are identical. And along with the ebb, since the Decade of the Brain, of reductionist enthusiasm and phrenological equivalence, there have emerged over the past two or three decades increasingly rich theoretical constructs and empirical data bolstering arguments against the identity of mind with brain, arguments that until recently were purely the remit of philosophy. The philosophers may enjoy contemplating the mind at work upon its own processes, as they always have. But today the empiricists — neuroscientists and psychologists — are in a better position to provide answers to some of these philosophical questions. They look at ourselves from without, while trying to build an image of the thinking, subjective entity that we each are. 

    It does seem obvious that without the brain there would be no mind — and no advanced animal life at all. The Hippocratic doctors of ancient Greece were craniocentric, as was Plato — though not Aristotle, who believed that the heart was the seat of it all. But brain and mind are, by definition, different entities. We know what the brain looks like. In contrast, no one has ever seen a mind. Nothing visually contains the mind — not even the brain. The assumption that brain produces mind, at least to some degree, arose at some point in the history of self-conscious humans out of correlations between accidents and behavioral changes, and of course well before the advent of brain imaging. Human psychology, however, concerns persons, not the gooey organ within their skulls. Brains, in sum, are necessary for minds, but minds are not reducible to brains.

    In fact, out of the ancient recognition of how strange it really is that intangible mind should arise out of tangible matter, there was born the renowned “mind-body problem,” which posits the irreducibly mysterious nature of higher mental life and consciousness. The apogee of this problem in the West was the dualism perfected in the seventeenth century by Descartes — famously mocked by Gilbert Ryle three centuries later as “the ghost in the machine” — which split apart immaterial mental experience from the material body in which mental life took place. Dualism is a powerful and seductive theory, because thoughts and feelings do not have the concreteness of matter, at least in everyday experience. It also informs religious beliefs about life after death in many societies besides Western ones: the awareness of the self-aware mind goes hand in hand, metaphysically and anthropologically, with the awareness of death. The West was marked by the Cartesian version of dualism, not least because it was compatible, including for Descartes himself, with Christian dogma. Thought and feelings pertained to the immaterial (and therefore immortal) dimension of humans, which was called soul, and non-human animals were considered soulless, mortal mechanisms.

    Within the framework of secular modernity, there was no longer any political need to please the Church with a metaphysical doctrine of human exceptionalism. With Darwinian evolution, and the accompanying reconception of humans as evolved animals, human cognition and emotion could be studied in scientific rather than metaphysical terms. From the late nineteenth century, scientific psychology, as established most notably by William James in America, Wilhelm Wundt in Germany, and Théodule Ribot in France, began to parse our corporeal beings, and did away with any immaterial soul. (In this regard James’ view of spiritual life was an exception within scientific materialism.) The psychology that emerged then took as a given that subjectivity could be studied through a combination of observation, introspection, measurement, clinical evidence, and philosophical acumen. 

    But scientific psychology did not put the mind-body problem to rest. The anxiety bred by the conception of ourselves as mortal animals has never gone away. Religious feeling persists in all corners of the world. And the metaphysics of mind are not reducible to the science of mind: the mind-body problem has remained in the philosophical conversation of the last and current centuries, in particular with the so-called “hard problem” of consciousness, in the formula of the philosopher David Chalmers. As he contended in 1996, however successful we may become at parsing the mechanisms involved in “the cognitive and behavioral functions in the vicinity of experience,” missing from the picture are the qualia of experience — that is, what it is like to have any experience at all, in the oft-quoted words of Thomas Nagel, whose essay “What Is It Like to Be a Bat?,” published in 1974, remains a reference point for the anti-materialist argument. The problem as Chalmers states it delimits conceptually the bounds within which an empirical account can have explanatory power, and beyond which it acts somewhat like snow that will not stick to a persistently slippery terrain. On this view, experience and biology partake of two different orders, and so consciousness necessarily escapes the physiological mechanisms that make it up.

    Not everyone agrees that there is a “hard problem,” however: the snow, so to say, could eventually stick. The neuroscientists who study the nature of felt experience take for granted that it is a biological phenomenon through and through, and on a continuum between lower-order and higher-order mechanisms. Their concerns are not philosophical: whether or not the problem of consciousness exists is irrelevant to their empirical research. And it is noteworthy that the notion of a “hard problem” arose as such — as a problem — precisely at the height of the neurocentrist Decade of the Brain, when, with materialist reductionism at its apogee, the old mind-body dualism was replaced with a brain-body dualism that split the brain apart from the rest of the body. This split ensured that cognition was studied apart from the embodied brain, irrespective of the biology involved in cerebral activity, of cell physiology, of genetics, and also of the environment in which the living organism develops and lives. It fed into the development of computational neuroscience, and of its conception of cognition as disembodied and affectless, out of the cybernetics of earlier decades. It also sundered the continuum between lower-level mechanisms that Chalmers, like his early modern predecessors, had deemed available for empirical study, and the higher-order ones that he deemed impregnable to empirical accounts. Now the human animal was as if split into three parts: machine-like brains, machine-like bodies, and disembodied minds.

    This brain-body dualism has begun to diminish only recently, over the past two or three decades. But this is the case mostly within some circles of psychologists, for everyday language tends to remain dualistic — “it’s all in the mind” means that it is not real; emotions are conceived to float in an abstract realm; your body “belongs” to you; and so on. (Early in his memoirs Bertrand Russell recalled a philosophically amusing old adage: “What is mind? No matter. What is matter? Never mind.”) Some areas of philosophical speculation are still conducted as if biology were entirely incidental, in an enactment of the dualist stance, in part because the conceptual structure is missing for a proper integration of scientific theories into philosophy. Yet from its earliest beginnings philosophical contemplation overlapped with empirical observation. When Thales asserted that all is water, he offered an empirical description as well as a metaphysics. Etymologically, metaphysics is what comes “after the study of nature,” sequentially after Aristotle’s Physics, later denoting what transcends empirical study. And until the modern era philosophers were also “natural philosophers” — scientists, in other words — who conducted empirical enquiry. 

    Today we do have the tools to construct empirical answers to philosophical questions about the nature of self and mind: what we need to develop now are the tools to understand philosophically these empirical answers, to develop a proper “science humanism.” Yet owing to the divisions between disciplines, few humanists pay attention to science — just as few scientists are in a position to “humanize” their research. In recent decades, there have even been attempts in the humanities and social sciences to dissolve matter and deny the empirical character of science entirely, by making it into just another human expression — social constructionist models that envision biology entirely as a phenomenon that cannot be known apart from the context in which it is theorized, of import only as a cultural occurrence. Computational neuroscience, meanwhile, yields increasingly complex models of a disincarnate “mind,” while what the world knows as “artificial intelligence” is surpassing some aspects of human cognitive competence at increasing speed. 

    Despite these admittedly powerful and often concealed redoubts of dualism, a growing number of researchers in neuroscience and psychology are now taking on board how the brain is in fact interconnected with the body — and that, as the neuroscientist Antonio Damasio has put it, the brain serves the body, rather than the other way round. This is so because, like all else on earth, the brain has evolved into its present shape out of primitive life. As Damasio described it in 2018 in his The Strange Order of Things: Life, Feeling, and the Making of Cultures, from brainless single-celled organisms, such as bacteria, endowed with the capacity to perceive and act upon their perceptions, grew increasingly complex multi-cellular organisms that eventually developed nervous systems to coordinate their multiple parts. Bodies evolutionarily precede the brains that serve them, and consciousness, as indeed all higher mental function, is an upshot of processes internal to the evolution of life. Mental experience encapsulates felt experience, and without the body there would be no feeling — nor any brain, either. 

    This is also why it is no longer possible to hold on to the old story according to which the faculty to reason and to contemplate cosmos, life, and self is a function of a disembodied thinking thing — Descartes’s res cogitans, posited in opposition to the extended bodily thing, res extensa. With his epochal Descartes’s Error, which appeared in 1994, at the height of the neurocentric moment — and based on imaging research conducted with his wife Hanna Damasio — Damasio first explained how, without emotional activity and input, deliberations that seemed to partake of rational evaluation were disabled. From then on, and via their subsequent research and his writings, Damasio was at the forefront of neuroscientists who showed how central emotion is to our highest faculties, and how the notion of a disembodied brain, let alone a disembodied mind, is a fantasy that has nothing to do with our biological reality. The so-called “affective turn” in neurosciences and psychology was launched. Research on emotions accelerated, not only in the sciences but also in philosophy and, more recently, the social sciences: the rationalist and cognitivist canon remains, but emotions are finally being considered centrally. Unlike a thought, a feeling is much less easily mistaken for an abstraction untethered from experience. Whether or not a thought is incorporeally determined, as Descartes believed, having a feeling entails experiencing a physical sensation. And so the “affective turn” was conducive to the notion of the person as a psycho-somatic unity. 

    In the 1990s philosophers also insisted that the mind must be understood not only as embodied, but also as embedded, enactive, and extended within the environment with which it interacts dynamically — that our experience is the upshot of this dynamic interaction of brain and body in relation to the world, and that our minds are a dimension of this interactivity. In fact, this conception of the body’s relation to thought and experience had already been central to phenomenology in the late writings of Edmund Husserl and, monumentally, in Maurice Merleau-Ponty’s Phenomenology of Perception, which appeared in 1945. On this approach, known as “4E cognition,” our tools are also aspects of minds that extend beyond individual skulls. It is misleading, in this account, to contemplate minds in isolation, since we have evolved as social beings. The subject is primarily “intersubjective,” as the philosophers Emmanuel Lévinas and Paul Ricoeur argued, following Merleau-Ponty. The first-person, interactive, felt experience that phenomenology embraces as necessary to our self-understanding has re-entered the realm of science, which is filling in the philosophical picture. A number of philosophers and scientists have joined this phenomenological approach with some aspects of Buddhism, most notably Francisco Varela, whose The Embodied Mind: Cognitive Science and Human Experience, written with Eleanor Rosch and Evan Thompson and published in 1991, marked a turn for students of cognitive science who believed that scientific investigation of the mind must begin with subjectivity. 

    Since then, our understanding of corporeal subjectivity, from the bottom up rather than from cognitive heights, as it were, has grown tremendously. It depicts the central nervous system — brain and spinal cord — as crucially interconnected with the peripheral nervous system, which includes the somatic and autonomic nervous systems. Feelings from skin, muscles, skeleton, and viscera are processed in the brain, which monitors their functioning, yielding what is known as interoception. In turn, interoception meshes with and acts upon exteroception, the perception of external stimuli via the five sense modalities. Together these sense-perceptions constitute our very sense of self. 

    The study of interoception has intensified among psychologists over the past decade or so, yielding insights into the somatic basis of the self, the centrality of emotions to its constitution and of bodily states to awareness, well-being, and illness. In the words of the neuroanatomist Arthur D. Craig, whose research on the topic has been foundational for the many studies that have multiplied since, interoception is “the sense of the physiological condition of the body,” whether conscious or not. Interoceptive signals travel along specific neural pathways from all bodily systems from skin to gut — vasomotor, cardiac, digestive, sexual, respiratory — and include the sensing of pleasure, pain, hunger, thirst, temperature. They are processed in particular in an area within the cerebral cortex called the insula. Increasingly targeted and sophisticated studies are showing how our sense of self is indexed on these dynamic perceptions and on the brain’s constantly predicting internal bodily states. In turn, these processes reflect the homeostatic regulation of the organism within an always changing world, without which it would not be viable. These constant feedback processes “provide the basis for the subjective image of the material self as a feeling (sentient) entity, that is, emotional awareness,” as Craig puts it. Consciousness, according to this analysis, is the upshot of the neurally encoded capacity to represent these processes as feelings. And in turn, these feelings are indications of our fluctuating bodily states within the world of non-selves, which shapes who we become from infancy on. As many researchers are showing, including developmental psychologists and the neuroscientist Vittorio Gallese with his notion of “embodied simulation,” we are intersubjective from birth: without others, we do not develop stable selves. 

    And because this multi-pronged and conceptually rich body of ongoing research begins with the subject, it avoids the conundrum that was faced by the disincarnate sciences of the mind that were developed in the age of reductionism, and which resulted in the “hard problem” of consciousness. No, we are not “just” our brains, and this current science is showing how that is the case.

    With this intense focus on the biological underpinnings of the self as a dynamic entity that is embodied, interactive, and intersubjective, as opposed to disembodied, fixed, and isolated, experimental sciences are meshing with philosophical speculations. They had been conjoined in antiquity — Aristotle, remember, was a metaphysician and an empiricist — and to some degree in early modernity, when Descartes, too, practiced empirical research. They met again in the late nineteenth century. Now, armed with insights from phenomenology, daring at last to use the first-person as a starting point for scientific inquiry, we are digging deep into our flesh, extracting from its infinitely complex layers the very consciousness that, for so long, seemed always to escape us. In so doing, we are also building the scientific picture underlying what millions look for and experience in practicing yoga and other somatic disciplines. The insights from yoga comport nicely with the discoveries of phenomenology.

    As we can see from this brief history, biological reality is never a neutral given: scientists start from preconceptions about the nature of their object of study. The names on the maps are not inherent to the maps. They must be coined, and they are not fixed, either. Scientific research always takes place within a cultural context that informs its priorities. While research is not reducible to its context, bias is always present. And over the last few decades, cultural context has been partially responsible for this increased attention to the body, which, not surprisingly, has also become central within literature and the arts, humanities, and social sciences. Popular culture, certainly, is body-obsessed in complex ways. But more positively, outside the dualist redoubts that persist in some academic circles, some religious communities, and generally in folk psychology, it is acceptable and even necessary to say today that humans are embodied creatures among other embodied creatures, in the sense that our feeling and sensing body is centrally constitutive of what we are. Cognitive sciences now ally also with anthropology to study embodied and extended cognition — how homo sapiens is the begetter and user not only of symbolic languages but of tools and artifacts. We adapt to our environments, while extending our selves, and building cultures of and with things: humans live within artifactual rather than natural settings. 

    This re-centering of the study of humans onto the body — and of the human body onto the natural world — is an aspect also of the ecological emergency. It is now painfully clear that the old notion of our supposed superiority over other animals, and our lording it over the whole of nature, has been arrogant and destructive. (The pandemic was a reminder of this: viruses can stop us if we invade wild territories — such as bat havens — that are not ours to enter.) The very consciousness that seemed to be our privilege — for long in the guise of the old rational soul — is in fact organically based, an outgrowth of a natural process, and indeed we can best understand it in relation to the consciousness of other animals, which is increasingly studied as well. This is a return not only to the late nineteenth-century roots of scientific psychology, but even earlier, to Lucretius, who wrote in On the Nature of Things that “mind and spirit are both composed of matter,” and to the seventeenth-century “libertines” who adopted Lucretian materialism. 

    It is always tempting to balk at this materialist picture and to reify our self-consciousness into an abstract Cartesian entity. But in doing so we forget that our very capacity for self-consciousness, material as it really must be (what goes on in the mind or between minds by definition goes on in the body or between bodies), allows precisely for the capacity to do so, and for us to be taken in by our very capacity. Daniel Dennett suggests something that seems similar, with his notion of consciousness as illusory, but he believes that consciousness is “just” an illusion — and that is emphatically not the point that I am making here. Rather, consciousness as we experience and understand it is a materially and ontologically real capacity, but we are unable to understand how it arises precisely because that is its all-too-human limit. There is only so much that our brains can do. Animal consciousness is also real, but it probably does not extend to this multiplication ad infinitum bred by our self-reflection in a hall of mirrors.

    And so we cannot leave our self-definition there. We are animals, and knowing that we are helps us to understand ourselves. But in virtue of our highly complex brains, we also differ from the other animals in ways that we need to comprehend in order to understand ourselves. Only humans study consciousness: our metacognition, that is, our self-reflexive awareness, does seem to define us and constitute our apartness. That very thought, in turn, is another instance of our self-reflective metacognition, which is the root and stuff of history, philosophy, art, science — of culture, in short, which in its variegated manifestations remains our human prerogative. Other animals can be acculturated, birds may learn specific songs, chimps can learn tool use and mating behaviors — such processes exist throughout the natural realm. But our very nature is defined by the cultural dimension, and the variety of human cultures is potentially infinite: the sciences of the mind are therefore bound by their inevitably cultural structure and mission. 

    No other creature is interested in its brain or has an idea of a mind — in fact, the very concepts of “mind” and “culture” are themselves cultural artifacts, developed within anthropology and related disciplines. And so, the notion that all species may have a kind of consciousness is asymmetrical: we may attribute it to our dog, say, but the dog does not know this in the way that we do, nor will it be aware that, like us, it may in turn be attributing consciousness to us. At any rate, whatever awareness it has is not elaborated further than its experiencing it. It may share with us mechanisms of attention and volition, it may experience anger and joy, but it does not study how, or ask why, these mechanisms and emotions occur. Neither dogs nor dolphins, neither elephants nor octopuses, study anthropology or philosophy — or write diaries or poems or books. What characterizes our species, besides our elaborate tools, is our propensity for introspection and abstraction, for projecting ourselves into the future on the basis of the remembered past, for imagining what is not present — using elaborate symbolic forms to embody the non-present, as Susanne Langer emphasized — and, finally, for constructing cultures out of our awareness of death. We also make stories, theories, and trouble out of our perceptions and predictions.

    This human prerogative, which in some instances we may call tragic, is also what defines us in relation to the increasingly sophisticated artificial agents that we are creating — the ultimate artifacts, which may seem to act and look more and more like us. There is no turning back: as technology gains in power, we face the need to re-assert the bounds of our humanity, in contrast to the kinds of faculties that are displayed by these artificial agents. There will be similarities, of course, but the differences are what will define us. Certainly these machines are farther away from us than are the non-human animals to whom we may attribute consciousness. They are neither born nor mortal. They are not biological, they have not evolved over millions of years as bodies that flexibly adapt to a fluctuating environment, they are not constituted of billions of proteins, molecules, cells, an immune system, hormones, enzymes, and neurotransmitters, and they are not conceived within the body of another mortal. In short, they are not “wet,” as Siri Hustvedt put it in her important essay “The Delusions of Certainty.” 

    Yet we are increasingly merging large aspects of our identities and activities with the electronic circuits that we have created. Gmail retains traces of our lives better than we do — though that does not mean that it “remembers,” because memory is made of dynamic processes that pertain to a self, a self that forgets as much as it remembers, and therefore differs entirely from the storage system of AI. Memory is emotionally valenced, and AI entities do not have emotions, so far. Granted, robots are multipart systems, and though they are now able to learn how to navigate within changing environments, they begin their “lives” as circuits stored within the artificial body, in contrast to our nervous system, which is the upshot of dynamic embodied processes. Our minds are not incidentally “housed within” a body: as is becoming clearer in our post-dualist age, it is precisely because there are bodies — complex biological systems — that there are minds. What is bewildering today is that we have developed the capability to reverse these age-old bottom-up processes and engineer mind-like mechanisms out of algorithms. We are able to build these machines, though, thanks in part to our understanding of the brain and the embodied mind, and it is no secret that one motivation for the public support of neuroscience over the past decades has been its utility to the AI that increasingly, and with perplexing speed, undergirds all aspects of our private, social, economic, cultural, and political lives. 

    Something else is missing from AI besides emotions — simple ones like fear or complex ones like shame, ambivalence, nostalgia, melancholy. It is the need for meaning, which is inscribed within the narrative patterning of the human mind from birth onward. The sense of both individual and collective history and belonging. And the sense of beauty, the poignancy of finality, physical desire, the pleasure of food. AI agents do not somatize illnesses, feel pain, suffer from exhaustion, have anxiety attacks or depressive episodes; they do not have eating disorders or develop dementia; they do not enjoy the garden in the spring, have sex, fall in love. They do not feel claustrophobic if stuck too long in one place, or travel for pleasure, or make war or murder or take drugs. Nor can they lie; perhaps one day they will know how to lie willingly, instead of just proliferating false information. They “know” a great number of things, but nothing about the meaningful narrative that makes up a mortal life. 

    It is legitimate — and an obligation of our moment — to be concerned about the growing power of these artificial agents. But to worry that they could replace us is to forget what humans are. The misplaced worry is itself worrisome. It stems from the same faulty intuition that for so long led humans to ignore their potentially ailing bodies, or to suppose that a brain could just as well grow in a vat. We tend to mistake models for reality and brain maps for minds, to scant or entirely neglect how phenomenologically complex and sensorially rich each experienced moment really is. We project onto machines our all-too-human fears and hopes. Yet it is only by understanding ourselves better that we will be able to develop the culture of care that we need to confront the crises of our time. Care was not a priority in the heady days of 1990s optimism, when knowledge of the brain was touted as the route to enlightenment. But it is so today, all the more so now that the forces of reaction are ascendant. Against these, and in order to advance self-understanding, we should build an alliance of the psychologists, neuroscientists, and engineers with the philosophers, artists, and historians. Such an alliance would enable us never to lose sight of our humanity, whose redefinition only humans can forge.

    What the Night Sky Teaches

    Is astronomy the key to our wellbeing? If we “learn the harmonies and revolutions of the universe,” Plato wrote in the Timaeus, we will attain “the most excellent life offered to humankind by the gods.” The pre-Socratic philosopher Anaxagoras was even more dramatic:

    And they say that when someone asked Anaxagoras for what reason anyone might choose to come to be born and to live, he replied to the question by saying that it was “to be an observer of the sky and the stars around it, as well as moon and sun,” since everything else at any rate is worth nothing. 

    For Anaxagoras, stargazing is the only thing worth doing. Without it, we would be better off not existing at all. These days, I’m sure, lots of people would be thrilled to gaze at the stars if the spectacle could offer them respite from what’s going on down here, let alone lead to “the most excellent life.” But can it?

    In 2020, the Nobel Prize in physics went to three astrophysicists — modern-day stargazers, if you like — for their work on black holes: Roger Penrose for showing that black holes, strange though they are, fit squarely with our theory of the universe; Reinhard Genzel and Andrea Ghez for discovering the black hole at the center of our own galaxy. The year before, the first-ever photograph of a black hole was published to great fanfare. Every newspaper showed the dark circle, surrounded by a ring of fire, on the front page. Eight interlinked observatories, from the South Pole to Hawaii to the Chilean desert, turned the earth into a gigantic telescope to capture the supermassive object five hundred million trillion kilometres away. 

    Invariably such events are accompanied by a certain rhetoric celebrating mankind’s curiosity and how it pushes the frontiers of knowledge. But imagine how puzzled we would have been if the laureates had announced from the podium in Stockholm that life is worthless unless we study astronomy. Wouldn’t we have dismissed them as mad?

    In my own family, the enthusiasm for astronomy runs low. It took my son and me a couple of hours to screw together the telescope that he was given for his sixth birthday. After dinner we aimed it at the sky. We saw mostly darkness with a few fuzzy flashes. Finally we found the moon. On the pale, stained surface we made out craters. It held his attention for about a minute.

    “Did you see the man on the moon?” I asked.

    “That’s a fairy tale, dad!” he replied. But if I could get him on a rocket ship, he would love to jump around up there, like the astronauts in a documentary he had seen. Then came his older sister’s turn. “Cool,” she said. “It really does look like cheese.” That was the end of our space exploration. Since then, the telescope has been collecting dust in a corner of the living room. Clearly we have not been heeding Anaxagoras’s and Plato’s counsel. 

    The only constellation I can identify is Orion, thanks to the belt. I wouldn’t make a Thracian maid or anyone else laugh the way Thales did. He was the first Greek philosopher, and Plato tells this story about him in the Theaetetus:

    Thales was studying the stars, and gazing aloft, when he fell into a well; and a witty and amusing Thracian maid made fun of him because, she said, he was so eager to know what was up in the sky, but failed to see what was in front of him and under his feet.

    “The same joke,” Plato says, “applies to all who spend their lives in philosophy.” Plato is keenly aware that ordinary people see philosophers as ridiculous stargazers. He, of course, thinks that the joke is on ordinary people. They don’t comprehend that stargazing is of much greater value than the things they desire: money, fame, pleasure, or, to take it down a notch, ice cream with friends or a night on the town.

    I have spent my life in philosophy: borrowing philosophy books from the local library as a teenager, writing a doctoral thesis in the discipline, and teaching it at a university for two decades. My parents were not enthusiastic. My mother would have preferred to see me become a doctor, a lawyer, or an engineer, like my cousins. My father proposed carpentry: doing something with my hands, he hoped, would ground me in the real world. “Philosophers,” he once explained to my daughter, echoing Plato, “always have their heads in the clouds.” “So are birds philosophers?” my daughter wisely replied. (She was three at the time.)

    More troubling, however, is that after all these years I am still in the camp of the Thracian maid. My head may be in the clouds on occasion, but I have no interest in dusting off the telescope to look at the stars. Does that mean that I’m doing it all wrong? Or is my world just so different from Plato’s world that we cannot conceive philosophy in the same way? The stakes are high: if stargazing is what makes life worth living, I’m living a life that is not worthwhile.

    A few years ago we took my daughter and her friends to a planetarium on her birthday. They were happily nibbling on their popcorn while the moderator explained the size, the age, and the composition of the universe, including, of course, the mysterious black holes. Then he said something about a “sense of wonder.” This provoked me. Whatever wonder this picture of the universe arouses, I thought, it is certainly not what philosophers felt in the past. Had they come across black holes, they would have been terrified. 

    Now imagine a planetarium show moderated by Plato. We would learn about an altogether different universe: a geocentric one with a system of nestled concentric spheres carrying the stars, planets, sun, and moon around the earth. Plato would point out the amazing mathematical precision with which ancient astronomers described the orbits of stars and planets. They turn with flawless regularity, like wheels of a celestial clock: “the moving image of eternity”! On earth things are messier, he would acknowledge, but they are still very predictable: the offspring of oak trees are always oak trees, of horses, horses, of men, men. The seasons follow each other every year. And everything is perfectly coordinated: the intricate order that allows the eye to see; the organs working together to enable all kinds of living beings to thrive; earth, water, air, and fire that furnish sustaining environments for them.

    Consider a random pile of driftwood, swept up on the beach. Then consider the complex mechanism of a clock. Both have causes, but the former’s causes are blind and the latter’s are intelligent. For Plato, the evidence was overwhelming that the universe is like a clock, not like a pile of driftwood. In fact, it is the most breathtaking piece of craftsmanship. He never even entertains the possibility that it could be the effect of blind causes. As a clock requires a clockmaker, Plato reasons, the universe requires a Maker. That Maker, he contends, is a Divine Mind, called Nous in Greek. “All the wise agree,” Plato writes in the Philebus, “that Nous is king of heaven and earth.” In short: for Plato the universe displays an intelligent designer’s intelligent design. And the first to figure that out, Plato claims, was Anaxagoras.

    Aristotle likewise posits Nous, a Divine Mind, as the cause of the universe’s rational order. In his De philosophia, he proposes a thought experiment: imagine people who spend their lives in comfortable caves under the earth. They have never seen the natural world. One day “the jaws of the earth open,” they emerge from their caves, and they are astounded by the spectacle of nature. They first see the earth, the seas, the clouds, and the winds. Then they behold the sun, the moon, the planets, and the stars — “their courses settled and immutable to all eternity.” What does this spectacle teach the cave dwellers? From the universe’s rationality and beauty, they immediately infer that “there are gods and these great works are the works of the gods.” 

    The Divine Mind, according to Aristotle, does not have a body. So how does it move the heavens, he ponders in the Metaphysics, with no arms to push or pull? It moves them “as a beloved” (hōs erōmenon), he suggests in the most poetic passage of all of his writings. Love makes the world go round. The Divine Mind is the “Unmoved Mover.” Eternal circular motion is how the heavens — living, ensouled, and perfectly wise beings, for Aristotle — express their love for God and imitate his eternal and immutable existence.

    We can now see why astronomy was the intellectual summit: it is a gateway to God. By studying the stars, in the ancient account, we connect to the Divine Mind, just as we connect to the mind of the clockmaker by studying the mechanics of the clock. Moreover, the heavens are moral models: they live perfectly rational lives. The more our lives resemble theirs — unerringly loving and contemplating God — the better off we are.

    As we left the planetarium, I reflected on how physicists today would frown at Aristotle’s claim that the desire to imitate God determines the structure of the universe. Most black holes arise from the death of stars. Love of God must be the last thing that these star tombs experience. “Holes,” moreover, is an utter misnomer: they are the largest and most compact masses of matter in the universe. (The one in the 2019 photograph packs more than six billion suns.) Their gravitational pull turns them into the stuff of nightmares. They are often called “monsters” because nothing can escape them, not even light (hence their blackness). Nobody knows where things end up that pass the so-called “point of no return.” Physicists call it a “singularity” because at the center of black holes the known laws of physics do not apply.

    If black holes cross my mind at all in day-to-day life, where the sun still rises and sets as if the Copernican revolution never happened, I chase them away with a shudder. Mostly I hope that they will not swallow the earth. They surely do not point to a Divine Architect — as the wheels in a clock point to the design in the mind of a clockmaker.

    I didn’t want to disturb my daughter’s birthday party, so I did my best to hide the gloom that I felt in the planetarium. Her birthday cake was decorated with the solar system. The icing on my piece included fragments of Saturn’s rings. “Isn’t it interesting,” I said to one of the parents in attendance, “that philosophers in the past saw the heavens as the most sublime expression of divine rationality?” She smiled politely but didn’t reply. I’m sure that she thought I was weird. Then she turned away to chat with another parent. The heavens may have fallen silent, but in our busy everyday lives we don’t care.

    Plato knew that his picture of the universe was not undisputed. In the Laws he mentions philosophers who argue that “nature and chance,” rather than “intelligent planning,” explain the structure of the universe. He has in mind, among others, Leucippus and Democritus, the ancient atomists. But the strongest case for the pile-of-driftwood-view was made by the Epicureans. The Epicureans did not deny that gods exist. But why, they asked, would the gods get their hands dirty and craft a universe if they are blessedly happy, as everyone agrees they are? Instead the Epicureans posit blind causes, both mechanical and random: the weight and the natural motion of atoms moving through infinite void, and the notorious “swerve” that makes atoms deviate from their natural downward trajectory. As Lucretius explains in On the Nature of Things: without the “swerve,” atoms “would all fall straight down through the depths of the void, like drops of rain, and no collision would occur … In that case, nature would never have produced anything.” The “swerve,” then, determines the universe’s structure. It takes on the role that Anaxagoras, Plato, and Aristotle assigned to the Divine Mind. 

    Yes, we can grasp the natural order, the Epicureans reassure us. But gazing at the stars no longer connects us to God. So why bother? Because science for the Epicureans is valuable as a means: it dispels false beliefs about the gods, and about death, desire, and pleasure. After studying nature we will not be terrorized by superstitions about vindictive gods, or by wrong ideas about dying and the afterlife. Nor will we be in the grip of baseless, culturally induced desires and the disruptive passions to which they give rise: greed, envy, frustration, anger. Science, then, is still key to a happy and unperturbed life. 

    But if we held no false beliefs, the Epicureans insist, we would have no reason to investigate the universe: “If our suspicions about heavenly phenomena and about death did not trouble us at all … and, moreover, if not knowing the limits of pains and desires did not trouble us, then we would have no need of natural science.” In line with this insouciant attitude to science, the Epicureans offer a range of different explanations of natural events—eclipses, rainbows, clouds—without trying to settle between them. Any mechanical explanation will do, as long as it does not refer to divine agency and is consistent with sense perception.

    The Platonic sage gazes at the stars to get from the intelligent design to the intelligent designer. Knowledge, in this view, has intrinsic value. As Nous, God is the sum-total of knowledge. Every truth that we grasp — the nature of horses, Jupiter’s orbit, the essence of justice — strengthens our bond with the divine and increases our share in the best life. The Epicurean sage gazes at the stars, by contrast, to remove the superstitions that disturb our peace of mind. Knowledge only has instrumental value. The link to the divine has been severed. Indeed, the Epicurean gods, whose bliss does not require dispelling falsehoods, do not possess knowledge! What joy could they get out of contemplating the random configurations of swerving atoms which make up the Epicurean universe?

    The Epicureans didn’t stand a chance in antiquity. The core intuition underlying the Platonic view was too powerful to be seriously challenged. The universe was like clockwork, not like driftwood. And the heavens remained the chief proof for that. Here is how the Stoics, the arch-rivals of the Epicureans, put it (in the words of Balbus, Cicero’s spokesman for Stoicism in On the Nature of the Gods): “What can be so obvious and clear, as we gaze up at the sky and observe the heavenly bodies, as that there is some divine power of surpassing intelligence by which they are ordered?” Consider again how absurd it would be if the Nobel laureates in 2020 had made such a statement. So, then, what changed? 

    Max Weber described the reversal in a lecture in 1917. Stressing the “immense contrast” between science in Plato’s sense and modern science, he declared:

    Who still believes that the insights of astronomy, biology, physics or chemistry can teach us something about the meaning (Sinn) of the world? … If anything, the sciences are suited to completely eradicate the belief that the world is a meaningful place. And science as a path to God? Given its distinctly atheistic nature? That this is its nature nobody today will call into doubt. Deliverance from the rationalism […] of science is the basic condition for living in community with the divine.

    But how, exactly, did the reversal come about? One popular narrative pins the blame on Copernicus. The French philosopher of science Alexandre Koyré offered its classic formulation. The Copernican revolution, he explained in 1957 in From the Closed World to the Infinite Universe, led to 

    the destruction of the Cosmos, that is, the disappearance, from philosophically and scientifically valid concepts, of the conception of the world as a finite, closed, and hierarchically ordered whole (a whole in which the hierarchy of value determined the hierarchy and structure of being, rising from the dark, heavy and imperfect earth to the higher and higher perfection of the star and heavenly spheres), and its replacement by an indefinite and even infinite universe which is bound together by the identity of its fundamental components and laws, and in which all these components are placed on the same level of being. This, in turn, implies the discarding by scientific thought of all considerations based upon value-concepts, such as perfection, harmony, meaning and aim, and finally the utter devalorization of being, the divorce of the world of value and the world of facts.

    The thesis was neat and influential, but it was wrong. Yes, the clockwork-view of the universe was shaken. The heavens, as Koyré noted, ceased to be special: before Copernicus, their perfect circles around the earth were the paradigm of God’s craftsmanship. After Newton, gravity ruled everything from planets to apples. The universe also got a lot more unwieldy; whatever wisdom was manifest in its structure became harder to discern. What, for example, did the Divine Mind need so much empty space for? Yet the clockwork-view wasn’t anywhere close to collapsing. Newton, for one, had no doubt that God set up the laws of the new physics. And Kant, in the Critique of Practical Reason, still extols “the starry heavens above me” as one of two things that “fill the mind with always new and ever-growing admiration and awe, the more often and more intensely we reflect on it.” (The passage was inscribed on Kant’s tombstone when he died in 1804.) 

    But even if astronomy after Copernicus was no longer as reliable a route to the intelligent designer, biology still was. In his Parts of Animals, Aristotle already noted that while plants and animals may not be as exalted as celestial bodies, they are much easier to study because “we live among them.” They, too, bear witness to purposeful design—to “what is not random but for the sake of something.” The “nature that crafted them provides extraordinary pleasures to philosophers who are able to know their causes.” As he dissected cats and squids, Aristotle quoted Heraclitus: “for there are gods here, too.”

    When Hegel equated “the rational” and “the real,” in his Elements of the Philosophy of Right in 1820, he continued to maintain the ancient belief that reality is a manifestation of the Divine Mind. He called the Divine Mind Geist, and, in the Encyclopedia of the Philosophical Sciences, he explicitly connected it to Nous, the deity of Plato and Aristotle. In the early nineteenth century, in other words, we were still watching God think when we grasped the universe’s rational order.

    The clockwork-view also persuaded the young Charles Darwin. In his Autobiography, he writes how much the “old argument of design in nature … charmed and convinced” him when he was a student in Cambridge. He had encountered the “watchmaker analogy” in William Paley’s Natural Theology, or Evidences of the Existence and Attributes of the Deity Collected from the Appearances of Nature, from 1802, which was required reading for Cambridge undergraduates at the time. Even Richard Dawkins—today’s noisiest atheist—admits that he would have accepted the proof from design if Darwin had not come up with the theory of evolution. But Darwin did come up with it. And once he found “the law of natural selection,” Darwin writes in the Autobiography, the “old argument of design in nature” went down the drain: “We can no longer argue that, for instance, the beautiful hinge of a bivalve shell must have been made by an intelligent being, like the hinge of a door by man. There seems to be no more design in the variability of organic beings and in the action of natural selection, than in the course in which the wind blows.”

    Random causes—the mutations driving natural selection—play a key role in generating what appears like design in living beings: plants, animals, humans. If Copernicus removed divine craftsmanship from the heavens, Darwin did the same for the earth. That was when the pile-of-driftwood-view really came into its own. The nineteenth century, then, marks the great rupture, not the Copernican revolution and its consequences, as Koyré believed, or the new philosophies of the seventeenth century—Descartes, Spinoza, Locke—or the eighteenth-century Enlightenment. 

    Between Hegel and Nietzsche, God dropped out of the philosophical picture. From a universe chiseled by a Divine Mind we move into one without metaphysical meaning. Nietzsche, in The Gay Science, captures the shift:

    The total character of the world is […] in all eternity chaos—in the sense not of a lack of necessity but of a lack of order, arrangement, form, beauty, wisdom, and whatever other names there are for our aesthetic anthropomorphisms. […] Let us beware of saying that there are laws in nature. There are only necessities: there is nobody who commands, nobody who obeys, nobody who trespasses.

    Note that Nietzsche’s universe remains governed by causal necessity that scientists can grasp. (Consider again the pile of driftwood: if you know the laws of physics and the particular causes at work, you can explain exactly how it came about.) What Nietzsche denied is that the universe displays the intelligent design of an intelligent designer: “order, arrangement, form, beauty, wisdom.” Instead it is random—in the sense of “purposeless,” not in the sense of “undetermined.” The downfall of the Divine Mind, moreover, knocks down the human mind as well. Knowledge no longer forms the bond between God and man. This is what Nietzsche means when he writes “how miserable, how shadowy and transient, how aimless and arbitrary the human intellect looks within nature.”

    At the end of the nineteenth century, the idea gained traction that science and religion are at war with each other. Two books in particular popularized it: John William Draper’s History of the Conflict between Religion and Science, in 1874, and Andrew Dickson White’s A History of the Warfare of Science with Theology in Christendom, in 1896. Galileo’s trial and Christian condemnations of the theory of evolution were routinely adduced as evidence for the alleged war. This is not the place to discuss the flaws of the conflict model, which are considerable. Here I am describing something completely different: a rupture within science—reason turning against itself. “All the wise agree,” Plato wrote, “that Nous is the king of heaven and earth.” Following the lead of Anaxagoras, Plato, and Aristotle, biologists, physicists, and astronomers studied the structure of animals, the laws of motion, and the trajectories of the stars as the gateway to the Divine Mind. That framework was still in place in the nineteenth century. And then it fell apart. Darwin’s scientific career is emblematic of the shift: it was sparked by the Platonic framework and ended up sounding the death knell for it. Today “all the wise agree” that intelligent design is outside the boundaries of science. The idea is kept alive mostly by the anti-modern resentment of the fundamentalist Christian fringe.

    Did we bury the Divine Mind prematurely? After all, the greatest scientist of the twentieth century seems to agree with Plato that Nous governs heaven and earth. Einstein called his view of the universe “cosmic religion.” To sign up, he thought, we need to accept only necessity and intelligibility: that universal laws determine everything and that the human mind can comprehend them. But that much even Nietzsche was willing to concede. Did he, unwittingly, leave a door open to bring the God of the philosophers back from the dead? Einstein speaks unforgettably of the “mysterious comprehensibility of the world,” and claims to find God in “the sublimity and marvelous order which reveals itself both in nature and in the world of thought.” Echoing the ancient astronomy fan club, he even celebrates the human mind’s ability to “grasp the mysterious force that moves the constellations.” 

    I am not convinced. For one thing, there is the empirical objection against Einstein’s determinism: the randomness at the heart of quantum mechanics. If God doesn’t play dice, quantum mechanics — which Einstein opposed to no avail throughout his life—suggests that God does not exist after all. But even if we can defend necessity and intelligibility, I don’t see how, on their own, they can ground a cosmic religion. Let us grant that the pile of driftwood is completely determined and completely intelligible. That hardly implies that it is God’s work. Einstein insists time and again that his God is not the personal God of traditional faith “who concerns himself with the fate and actions of human beings.” Fine, but that was also true about the Divine Mind of Anaxagoras, Plato, and Aristotle. Even if God has more important things on his agenda than caring about petty human worries, there still must be an agenda that we can discern if we are to give the name “religion” to such a theism.

    In his Exhortation to Philosophy, Aristotle proposes a thought-experiment: what would we do if we were on the “Isles of the Blessed,” a place where all our material needs — hunger, thirst, shelter, health — are taken care of, so that we would not have to worry about a thing? Would the freedom that we would enjoy in such circumstances be a blessing or a curse? How would we spend all the time on our hands? Hang out idly on the beach until we die? Isn’t a life without purpose a nightmare, even if it comes with every comfort?

    For Aristotle, the answer is plain: if we pay money to watch sports and theater, he argues, all the more should we be keen to contemplate the rational order of the universe. It is the best show on offer, and it’s free. On the Isles of the Blessed we would devote our life to theoria, or contemplation. That is Aristotle’s idea of paradise. It is easy to see the attraction of such a paradise if the universe is like a clock. But what if it is like a pile of driftwood? Even if it is in principle intelligible, why make the effort? What sublimity would we be contemplating? The obvious answer is the one the Epicureans give: knowledge may not have intrinsic value, but it has great instrumental benefits. It is good as a means to other things that we value: controlling nature, finding medical treatments, developing technologies, grounding good social policies, resisting ideologies, fake news, and the lies of demagogues. 

    Unfortunately, things aren’t quite so simple. We may cheer science for making our lives safer, more comfortable, and less vulnerable to decay and manipulation. In this sense, modern science is arguably a stunning success. But there is an existential price to pay. Let me explain.

    After my children were born, my wife and I started to compile photo albums. I have no illusions about them. They are kitsch, as the four of us cry “cheese” for the camera. But they are dear to me anyway, because they are documents of what has become the emotional center of my life. The meaning of photo albums is a paradox: they are treasured by the people whose happy memories they hold, but utterly meaningless to the rest of the world. Would you ever display your neighbor’s photo album on the coffee table? The paradox tells us something: that the meaning our children have for us is distorted, or rather created, by love.

    If I were to take such an explanation to an evolutionary psychologist, he would thoroughly disenchant its emphasis on love. He would explain that what I call “love” is actually an attachment that evolution has selected for, because it increases the chances of my offspring’s survival and thereby of the perpetuation of my genes. Do I want to know this? I feel conflicted. I have no doubt that there is a basis in biological reality for the evolutionist’s disenchanting account. And in general I am all for scientific progress. But in my personal life? There, I feel, I must protect the magic from the truth. Is it anti-intellectual or irrational to draw boundaries around the explanatory power of science in our understanding of our inner lives? Or can we, whatever the impact of our genes upon our inner lives, carve out a space for a different — yet likewise legitimate — way to understand love?

    The same wish for protection from the idea that science should have the last word grows exponentially when I consider the universe at large. In this instance I seek protection not from biology but from astronomy and astrophysics. I cannot detect evidence in the cosmos of divine craftsmanship. I see a vast, mute, dark, mostly empty space that burst into existence fourteen billion years ago, where I spend a short time on a small planet in one of countless galaxies, living a life of no cosmic consequence. That life, moreover, emerged from the primordial soup by means of amoebae, apes, and all the chance encounters of my ancestors.

    Contemplating the universe in this way is a powerful antidote for vanity; and so, to keep myself honest, I look up to the heavens once in a while, briefly, from the corner of my eye. Yet its lesson of humility notwithstanding, what I see in the night sky or through a telescope cannot give my life value and purpose. On the contrary, it threatens to obliterate my mortal and terrestrial reasons for getting out of bed in the morning: family, friends, writing, teaching, a cup of coffee, a glass of wine, a concert, a noble cause. From the cosmic perspective, my goals and my projects seem trivial and pointless. If I want to hold on to what gives my life meaning, therefore, I must shield myself from the universe rather than contemplate it — the exact opposite of what for Plato yields “the most excellent life.” The cosmos as we now understand it is no longer useful for my soul.

    But is the universe not reasserting, at this very moment, its power to dazzle people around the globe, as the James Webb telescope reveals it to us in ways we had never seen it before? Who is not amazed by the spectacular pictures of galaxies dancing and cartwheeling, stars glimmering like jewels in the dark, or Jupiter and Neptune’s enigmatic glow? There is beauty in the light, shapes, and colors. There is mystery as we look at cosmic landscapes such as the “Pillars of Creation” or cosmic cliffs such as the “Carina Nebula” — structures unimaginably far away in place and time. We have no clue what happens in them (or happened in times immemorial). Yes, the pictures may even inspire awe — a secular awe — if we let them pull us out of our human-all-too-human concerns and dare to lose ourselves for a moment in the universe’s vast expanse. 

    But what are we to make of these emotions that quickly fade as we return to our daily lives and our (by cosmic standards) trifling worries, joys, and sorrows? If the Webb telescope does not point to an intelligent designer (or at least the possibility of one), does it point to something meaningful at all? The mystery that we experience is not religious, as when we recoil in the face of an inscrutable divine will, manifest in the universe’s design. The mystery reflects, to put it bluntly, our ignorance. The more that ignorance is lifted — through even more powerful telescopes, new forms of space travel, and so on — the more intelligible the universe becomes. But intelligibility, as Nietzsche stressed, does not translate into meaningfulness. The universe we glimpse through the Webb telescope may be visually mesmerizing, but unlike Plato’s universe it cannot provide us with a purpose in life.

    Plato thought that studying the stars can replace the things that we commonly desire with something much better. If stargazing could lift us out of our mortal existence and put us in touch with something eternal and divine, there would be nothing ridiculous about it, no matter how much a Thracian maid may giggle. But if the stars are not a springboard to the divine, then staring at them for too long risks leaving us with nothing to care for at all. In a universe punctured by black holes, the Thracian maid may have the last laugh.

    Frau Freud

    In memory of Michael Porder

    I

     

    September 29, 1939, 20 Maresfield Gardens, Hampstead, London: on the first Friday after Sigmund Freud’s death, having accepted more than a half-century of imposed impiety at her husband’s insistence, the seventy-eight-year-old Martha Freud started to light the Sabbath candles again. Licht-bentshn, as the ceremony is called. You light a pair of candles just as the sun goes down; circle your hands in a sweeping motion three times to gather the light and savor the candles’ warmth — the spirit of restfulness that they are meant to convey — and then cover your eyes with your hands while reciting the blessing in which God is thanked for sanctifying us with the commandment to light these candles.

    Enter Shabbat the Queen, as the Sabbath is known in Jewish tradition, a presiding feminine presence in a patriarchal environment where most of the active, time-specific commandments, such as the wearing of tefillin, or phylacteries (a pair of small black leather cubes containing pieces of parchment inscribed with Biblical verses, one of which is strapped around the left arm, hand, and fingers, the other above the forehead), for the morning prayers, fall on men, since women are presumed to be busy with other priorities, such as housekeeping and childcare. And now here was the widow of one of the most formidable enemies of religion fulfilling one of the few obligations incumbent upon women under Jewish law. It was, surely, a form of poetic justice — or perhaps a testament to the hold of the past, however abjured it may be. 

    She was born Martha Bernays on July 26, 1861, in the German port city of Hamburg, into a highly regarded and intellectually advanced Jewish family to whom such recurrent observances meant a great deal. With her performance of the act of lighting candles at a prescribed moment on the Jewish calendar, one might argue that Martha Freud was being more than assertive: she was being defiant. In doing so, she was re-establishing her autonomy by renouncing a pattern of submission to her husband’s wishes. She was taking a deliberate step backward, toward her family ethos and the traditionalism of her origins before she became the compliant, devoted caretaker that Freud desired her to be, the “adored sweetheart in youth” who became “the beloved wife in maturity.” And she was also taking a step forward, towards the post-spousal woman she would become after the death of her husband, and reclaiming a small part of the ancient ritual-laden religious tradition that had been instilled in her while growing up. 

    It was a tradition that her fiercely anti-clerical husband, whom she always referred to as “Professor,” as though she were his eternal student, ridiculed, forbidding her to light the Sabbath candles when they set up their own home. A cousin of Martha’s once recalled “how not being allowed to light the Sabbath lights on the first Friday night after her marriage was one of the most upsetting experiences of her life.” And Isaiah Berlin, who visited the couple at their house in exile in London, recalled that husband and wife were still arguing the issue of lighting candles, however playfully, as late as 1938: “Martha joked at Freud’s monstrous stubbornness which prevented her from performing the ritual, while he firmly maintained the practice was foolish and superstitious.” 

    The Freuds’ fifty-three years of marriage are reputed to have been exceptionally harmonious — one of their few disputes was said to have been about the correct way to cook mushrooms — but the couple’s divergent attitudes toward Judaism remained a source of underground conflict. On the face of it, they were wholly deracinated Jews in a golden age of Jewish deracination. They celebrated Christmas and Easter, and their son Martin, in his memoir Sigmund Freud: Man and Father, testified that none of the six children had ever entered a synagogue. Freud, a confirmed atheist whose work was dedicated in part to the debunking of the monotheistic worldview as a neurotic illusion, delighted in ribbing Martha about her religious attachment, pretending not to know the Hebrew name for “candelabrum,” for example, in a note he wrote her in 1907 after visiting the Roman catacombs: “In the Jewish [catacombs] the inscriptions are Greek, the candelabrum — I think it’s called Menorah — can be seen on many tablets.” 

    As if he didn’t know that it was called a menorah! He had learned the Bible as a child, after all, and it is doubtful that he lost his grasp of basic Hebrew or religious objects. Freud never denied his Jewishness, and went so far as to credit his religion for his own lack of prejudice and his uncowed single-mindedness. Yet he was always highly ambivalent about his Jewish identity. He demanded that Martha not fast on Yom Kippur, arguing that she was too thin to fast. And the only one of his books in which he referred overtly to his Jewish connection was Moses and Monotheism.

    In this regard he maintained a firm distance between his public and private allegiance — insisting, for instance, that psychoanalysis was not in any way a “Jewish science,” or jüdische Wissenschaft, which is how the Nazis and earlier anti-Semites had disparaged it. In a letter to Ferenczi, Freud wrote that “there should not be such a thing as an Aryan or Jewish science. Results in science must be identical, though the presentation of them may vary.” His awareness of the danger in having a specifically Jewish quality attached to his work, which could lead to anti-Semitic resistance to the psychoanalytic movement and render it less universally applicable, led him to court the Swiss psychiatrist Carl Gustav Jung, despite Jung’s very different ideas about psychoanalysis — and more curiously, despite Jung’s own anti-Semitism and racial theories. Freud put all his hopes in Jung, whom he called his “son and heir,” until they had a disagreement about the uses of mythology which led to a permanent estrangement. 

    Yet the story of Freud’s Jewishness, the myth of his complete alienation from his patrimony, which is based largely on his non-observance of even the most fundamental of Jewish rituals, is more complicated than has been implied in most of the writing about him. (This misapprehension about Freud’s complete ignorance of Judaism was recently invoked yet again in Adam Kirsch’s essay “Freud as Talmudist” in the Jewish Review of Books). In contrast to the general image of Freud as an am ha’aretz, an ignoramus, severed from his Jewish roots, we have a letter that he wrote to the chief rabbi of Vienna in 1931, for example, in which he passionately declared: “I am a fanatical Jew. I am very much astonished to discover myself as such in spite of all the efforts to be unprejudiced and impartial.” And more fully a few years earlier, in 1926, accepting an award from the Bnai Brith on his seventieth birthday, he told his audience in a letter that he had joined Bnai Brith because 

    I myself was a Jew, and it always seemed to me to be not only shameful but downright senseless to deny it. That which bound me to Judaism—I am obliged to admit it—was not my faith, nor was it national pride; for I was always an unbeliever, raised without religion, although not without respect for the so-called “ethical” demands of human civilization. And I always tried to suppress nationalistic ardor, whenever I felt any inclination thereto, as something pernicious and unjust, frightened as I was by the warning example of the peoples among whom we Jews live. But there remained enough other things to make the attraction of Judaism and Jews irresistible—many dark emotional forces, all the more potent for being so hard to grasp in words, as well as the clear consciousness of an inner identity, the intimacy (die Heimlichkeit) that comes from the same psychic structure. And to that was soon added the insight that it was my Jewish nature alone that I had to thank for two characteristics that proved indispensable to me in my life’s difficult course. Because I was a Jew I found myself free from many prejudices that hampered others in the use of their intellects; and as a Jew I was prepared to take my place on the side of the opposition and renounce being on good terms with the “compact majority.”

     “Raised without religion”? Hardly. A complicated case, clearly.

    To better understand the Freuds’ respective positions on Judaism, one need look no further than their individual backgrounds. “I was born on the 6th of May [18]56 in Freiberg/Moravia,” Freud wrote in a letter to his colleague Paul Federn in 1912. “My father and mother came from Galicia. My mother, née Nathansohn, from Brody, of very distinguished ancestry (the Nathansohn-Kallir family), my father of the merchant class. According to tradition, as he once reported to me, the Freud family is said to sometime have left their hometown of Köln [Cologne] during a period of persecution of Jews and then to have migrated eastward.”

    Throughout his lifetime, Freud, who was born Sigismund Schlomo Freud, went to great lengths to portray himself as having grown up in a deeply assimilated Reform Jewish family, steeped in modernist values and Viennese culture. (His family moved to Vienna when he was four.) It was a family, or so he led his colleagues and relatives to believe, in which Jewish holidays were minimally observed, and his own religious education was scanty, leaving him with but the vaguest understanding of Hebrew or Yiddish. A large part of our idea of Freud as a “godless Jew” — the term popularized by the historian Peter Gay, himself an assimilated Jew who translated his last name from “Froelich” to “Gay” after becoming an American citizen and wrote extensively about Freud — derives from Gay’s insistence that Enlightenment values had completely displaced religious and ethnic ones. (This was consistent with Gay’s simplistic view of the Enlightenment itself.) Gay’s description of Freud’s father Jakob’s position on Jewish matters shows how the notion of total secularization was absorbed unquestioningly by Freud scholars: “Jacob Freud had emancipated himself from the Hasidic practices of his ancestors; his marriage to Amalia Nathanson [his second wife and Freud’s mother] was consecrated in a Reform ceremony. In time, he discarded virtually all religious observances….”

    It is worth pointing out that ritual observance is hardly the only measure of Jewishness. The truth about Jakob Freud’s relationship to his religious background is richer, as was his son’s. The father, too, was a complicated case. The Freud family consisted of transplanted Eastern European Orthodox Jews — Ostjuden, or Eastern Jews, looked upon with disdain as primitive and uneducated by German and Austrian Jews — who were only slightly assimilated, if at all. According to Emanuel Rice, a psychiatrist who closely examined this subject in his book Freud and Moses: The Long Journey Home, Jakob had studied for years in a yeshiva in Tysmenitz, Galicia, and was referred to in his youth as a “yeshiva bocher,” a yeshiva student. Rice also cites a granddaughter of Jakob’s who lived with him toward the end of his life and remembered him “reading the Talmud (in the original) at home.” If that is so, then Jakob possessed a considerable degree of Jewish literacy and cultivation. 

    Then there is the famous and much debated issue of the Philippson family Bible — a German translation of the Tanakh — that Jakob gave his son on his thirty-fifth birthday, with an inscription in Hebrew that included a skillful pastiche of ancient quotations composed in the traditional manner from various Jewish sources. The inscription — “To my dear son Shlomo” — is written in a hand that is clearly comfortable with writing Hebrew. Although this dedication has been parsed by hoch analysts who assumed that Freud could not read Hebrew (an impression fueled by Freud himself), scholars such as Rice and Yosef Hayim Yerushalmi, in his Freud’s Moses: Judaism Terminable and Interminable, have shown that Freud had a considerable Jewish education and would have understood the inscription. (And indeed, one might ask, why would his father have inscribed such an important gift in a language that his son could not read?) Similarly, Freud’s insistence that he could not understand Yiddish — the “jargon” of the Ostjuden — is dubious, because his mother Amalia regularly spoke Yiddish. In addition, Rice argues, based on the testimony of one of Freud’s grandsons, that Sigmund’s mother remained religiously observant until her death in 1930. This would go some way to explaining why Freud arranged for his mother to have a strictly Orthodox funeral and burial — although it would not explain why he chose not to attend it, sending his daughter Anna in his stead. 

    Martha Bernays’ background, on the other hand, was indubitably Orthodox (“frum wie ein Stecken,” or “religious as a stick,” as my mother, an observant German Jew, used to say), one in which hidebound observances were scrupulously maintained. Her mother, Emmeline, wore a sheitl, or wig, which was required by the rabbinical tradition to preserve the modesty of married Jewish women, and kept a strictly kosher house. Hers was a tenacious and domineering personality, though outwardly she came across as mild and soft. These traits would antagonize her future son-in-law; he described her as “alien” and wrote his fiancée that “I seek for similarities with you, but find hardly any.” (At the same time he conceded that Emmeline was “a person of great mental and moral power standing in our midst, capable of high accomplishments, without a trace of the absurd weaknesses of old women.”) 

    Martha’s family was renowned in the Jewish community for their scholarship and their leadership. Isaac Bernays, her grandfather, was the chief rabbi of Hamburg in the 1830s and 1840s, and was respected for his combination of secular and religious knowledge, expressed in his sophisticated philosophical views, his linguistic skills, and his superior grasp of Torah, Midrash, and Talmud. He was a distant relative of Heinrich Heine; he appears often in Heine’s letters, and upon his death in 1849 he was acknowledged by Heine to have been an extraordinary personality. Bernays served in this important pulpit in the early years of the Reform movement and opposed it bitterly, formulating in response an approach in which it was possible to live, with certain limits, in both the religious and secular worlds. Bernays’s conception greatly influenced his student Samson Raphael Hirsch, the rabbi and theologian who provided Modern Orthodoxy with its guiding principle of Torah Im Derech Eretz, or Torah and the way of the world. (Hirsch was my great-great-grandfather.) 

    Despite Bernays’ strict commitment to Orthodoxy, he was also considered to be something of a religious modernizer, known for his innovative sermons given in German, and for bringing secular subjects — German, natural science, geography, and history — into the curriculum of the Talmud Torah charity school, which had formerly been limited to Hebrew and arithmetic. Hirsch and Bernays became acquainted when Bernays, after attending the University of Wurzburg and studying at the yeshiva of Rabbi Abraham Bing, the chief rabbi of Wurzburg and a well-known Talmudist, became a private tutor in the house of Hirsch’s father. Hirsch followed Bernays’ practice of fusing the Jewish and secular realms with the hope of keeping the tidal wave of Reform Judaism at bay. (Family lore has it that at the Hirsch school in Frankfurt boys did not wear yarmulkes during secular classes.)

    Two of Bernays’ sons were university professors. The eldest, Jacob, was a prominent philologist and classicist, a man of prodigious learning who was one of the pioneers of Quellenforschung, or source criticism, according to which the primary method of classical studies was the intense study of the surviving texts of the ancient world for the purpose of coaxing from them knowledge of all that did not survive. He was famous for a controversial interpretation of Aristotle’s concept of catharsis, which he read not in moral terms but in medical ones — thereby making himself a kind of precursor to Freud’s own approach to the subject. His adherence to Jewish religious convictions prevented him from becoming a full professor at the University of Bonn, and so in 1853 he helped to found the Breslau Jewish Theological Seminary, which became one of the great institutions of modern Jewish scholarship. There he taught classics, history, German literature, and Jewish philosophy. In 1866 Jacob was finally appointed an assistant professor and chief librarian at Bonn, but he remained involved with the seminary at Breslau. Isaac’s younger son, Michael, was a Goethe and Shakespeare specialist who was professor of German literature at the University of Munich. He converted to Christianity (as did Isaac Bernays’ brother Adolphus) in 1856 and was baptized, which led his family to break with him at the same time as it furthered his career. 

    Isaac’s other son, Berman Bernays — Martha’s father — was a merchant, as were the parents of his wife; Berman later became secretary to the well-known economist and constitutional law expert Lorenz von Stein, a great liberal who may have been the earliest theorist of the welfare state. When Emmeline (née Philipp) married him in 1856, his profession was given as “journalist.” One of four children (three older ones died in quick succession), Martha was born in 1861 and grew up in Hamburg in fairly modest circumstances. When she was six years old, her father served a stint in prison for bankruptcy; she is said never to have spoken of this incident. Freud’s uncle, meanwhile, was imprisoned for trading in counterfeit rubles and rumor had it that his father was implicated in the scandal. The writer Jenny Diski suggested, in a review of a biography of Martha Freud by Katja Behling, that Martha and Sigmund were united by a shared legacy of public shame. 

    Martha’s family moved to Vienna when she was eight, but she, her mother, and her sister Minna never lost their attachment to Hamburg. “Neither she nor Minna ever made the slightest concession to the spirit and lifestyle of Vienna,” Behling observes in her biography, “and even after fifty years in Austria they still spoke perfect standard German.” To refuse the spirit of Vienna was to live the life of a stubborn traditionalist. One of Martha’s two elder brothers, Isaac, died at the age of sixteen, when she was eleven. (This was another loss that she shared with Freud, who also had a brother who died at a young age.) It is worth noting that these extraordinary lineages extended beyond a single generation: Freud’s sister Anna married Martha’s brother Eli, and their son Edward Bernays, born shortly before the family moved to the United States, became, with his pioneering studies of public opinion and its manipulation, the father of press relations, mass marketing, and psychological warfare — in other words, a formidable shaper of modern life.

                     

    II

     

    Despite being portrayed in later years as intellectually indifferent (in particular to her husband’s theories), the young Martha developed an interest in art and literature during her years at school. She had a keen appreciation of music and was an avid reader who knew the German classics (Goethe, Schiller, and so on); she had a special fondness for Stefan Zweig and Thomas Mann. Although the time she could allocate to reading was severely cut back during the busy years of her marriage, when she was running a large household according to her high standards, she would return to her love of books after her husband’s death. During their courtship, Martha frequently wrote Freud letters in verse, and he shared his thoughts on John Stuart Mill with her. His first present to her was a copy of David Copperfield — Dickens became one of Martha’s favorite writers — although he warned her off the rude parts in Don Quixote, stating that they were “no reading matter for girls.” By the time he met Martha in April 1882, her sharp mind, slim and attractive figure, and coquettish charms had attracted many suitors. She had already turned down one proposal of marriage.

    The first time Sigmund Freud spotted the almost twenty-one-year-old Martha Bernays was at his family’s dining table; she was visiting his sisters together with her sister Minna. Martha was peeling an apple during their conversation: a decorous feminine activity, suggestive of industriousness and nurturing. The twenty-six-year-old Freud was an anxious, somewhat self-important medical student with next to no experience of women, despite being the favored brother of five sisters and his mother’s goldener Sigi, who was given a room of his own in his family’s small apartment. One wonders whether things might have gone differently if he had glimpsed Martha in a different guise, less the domestic woman in a genre painting and more like her sister Minna, eager to compete intellectually and inclined by nature to take up more air. “Since I learned that the first sight of a little girl sitting at a well-known long table talking so cleverly while peeling an apple with her delicate fingers, could disconcert me so lastingly,” Freud wrote to Martha in June 1885, “I have actually become quite suspicious.”

    In any case, the young Martha presented a winsome picture, with her hair worn in a center part and pulled back in a chignon, elegantly clad in a high-necked dress with a lace collar and lace-up ankle boots. Soon, although rather awkward and shy, Freud was sure of his feelings for Martha and began calling her “Princess” and sending her a red rose every day accompanied by a poem in Latin or another foreign language. By the middle of June in 1882, a mere two months after they met, the couple was secretly engaged despite Emmeline’s opposition to the match — she considered Freud’s financial prospects to be dim. Freud began writing his “darling girl” and “darling Marty” long rhapsodic letters over the next four-and-a-half years of their engagement — the famous Brautbriefe, as their correspondence came to be called. It was only after Martha’s eldest brother Eli became engaged to Freud’s sister Anna at Christmas in 1882 that the couple felt comfortable in announcing their own engagement, although Freud never officially asked for Martha’s hand in marriage. In Freud’s first fervent letter to her, he wrote, “Dear Martha, how you have changed my life”; but he was also an incorrigibly jealous suitor, expressing absurdly patriarchal horror that his fiancée had travelled on holiday with only her younger sister for company: “Fancy, Lubeck! Should that be allowed? Two single girls travelling alone in North Germany! This is a revolt against the male prerogative!” 

    Sigmund would later tell Martha that theirs was an instance of Liebe auf den ersten Blick, love at first sight, although it is unclear whether this was wholly mutual; Martha appears to have warmed up a bit more gradually. He would go on to observe to her in one of the nine hundred and forty letters that he wrote to her during the four-and-a-half years of their courtship and engagement (they also collaborated on a secret journal, a Geheime Chronik) that she was not “in the strict painterly sense” a beauty, but that she had qualities he considered more important, such as generosity, wisdom, and tenderness. One wonders what his fiancée, a young woman just coming into a sense of her attractiveness to the opposite sex, made of this faint praise. (Freud’s own mother had been considered a great beauty in her day and the appeal of female comeliness was never lost on him.)

    During the period in which he experimented with cocaine, Freud, worried about her pallor, sent Martha a small dose to put color in her cheeks, and referred jauntily to the disinhibiting effect that cocaine had on him. “Woe to you, my princess, when I come,” he wrote to her on June 2, 1884. “I will kiss you quite red and feed you till you are plump. And if you are forward you shall see who is the stronger, a gentle little girl who doesn’t eat enough or a big wild man who has cocaine in his body.” On February 2, 1886, toward the end of another letter, he wrote: “Here I am, making silly confessions to you, my sweet darling, and really without any reason whatever unless it is the cocaine that makes me talk so much.”

    “I was told,” writes Sophie Freud, Martha’s granddaughter, in her memoir Living in the Shadow of the Freud Family, “that her greatest attraction for the young Sigmund Freud had not been her slender grace or charming features but her inner peace and serenity. She radiated calmness; and he sensed instinctively how wonderful it would be to have her near him after a day of hard work.” As for the scrutinizing suitor himself, Freud had good features, a thick beard and a penetrating gaze, but it seems that Martha initially found him too short and a bit intimidating.

    It is difficult to cobble together an image of the pre-Freud Martha Bernays because we have few accounts of her before he enters the picture (and not many more after). From the few impressions that have been documented, she seems to have been both self-contained and curious, a dutiful daughter who retained some independence of mind. She loved to read whenever she found the time, went to plays, and was a demon for needlework of all kinds. Most of all, Martha was marked by the North German sense of discipline, by a horror of shoddiness and leaving things half-done. Her daughter Anna would later observe to her own biographer, Elisabeth Young-Bruehl, that “my mother observed no rules, she made her own rules.” 

    Although there were those, such as his brilliant Hungarian disciple Sandor Ferenczi (with whom Freud eventually broke, as he did with so many of his followers), who contended that Freud’s unresolved connection to his narcissistically controlling mother Amalia left him with a fear of intimacy and of sexually passionate women in particular, it seems that, at least at the beginning of their involvement, Freud couldn’t get enough of his “deeply beloved, most ardently worshipped Martha,” as he described her in 1882. As long, that is, as she lived up to the very particular image that he had of a desirable mate. During the course of their epic engagement, driven by his anxiety as to whether Martha loved him with the same ardor that he loved her (Martha was by nature more reticent) and by his fiercely suspicious nature, Freud bullied her into becoming more of the docile and governable woman he was seeking. Despite maintaining that he did not want her to be a malleable toy doll, he sneered at her efforts to put her foot down and openly disliked what he called her “tartness.”

    Somewhere along the way it seems that Martha lost some of her moxie — her unfettered and even feisty spirit. One can see a glimpse of that spirit still peeking through late in their engagement, when she wrote Freud in irritation: “You now always only write once about each thing, and then nothing more however much I ask. I’m not used to this, my good man, it is certainly high time I brought you to heel, otherwise I’m quite sure to go completely thin and green for sheer annoyance and exasperation.” But by this time Martha could not have been left in any doubt as to precisely what it was that her husband-to-be expected in his partner: a certain docility, and clear and separate spheres of influence. “I will let you rule [the household] as much as you wish,” he decreed, “and you will reward me with your intimate love and by rising above all those weaknesses that make for a contemptuous judgment of women.” Perhaps in keeping with Martha’s understanding that her fiancé wanted to remake her into a more subservient personality, some of her letters show her posing as an intellectual innocent in need of Freud’s assistance: “I finally have read your postcard with Max’s help because it was difficult to read. Yes, that’s how stupid your dear girl is.” 

    The critic Frederick Crews, a disenchanted Freudian and an ardent enemy of psychoanalysis, all the same wrote perceptively about the Svengali-like attitude toward Martha that hovered right beneath Freud’s almost fulsome expressions of adoration. “When he wasn’t complaining about his present ailments and future neglect,” Crews observed in Freud: The Making of an Illusion,

    the unhappy fiancé was instructing his beloved in how to become a properly deferential mate. He made it clear that she would have to change some of her ways, and the sooner the better. It was precisely Martha’s most admirable qualities — unself-conscious candor and spontaneity, a trusting nature, freedom from class prejudice, loyalty to her family and its values — that struck him as in need of revision. Thus he rebuked her for having pulled up a stocking in public; forbade her to go ice skating if another man were along; demanded that she sever relations with a good friend who had gotten pregnant before marriage; and vowed to crush every vestige of her Orthodox faith and to turn her into a fellow infidel.

    Although Crews is focusing here exclusively on the dictatorial aspect of Freud’s attitude toward his future wife, it is nonetheless a fairly accurate and unattractive picture — conventionally masculine for its time, perhaps, but especially disappointing in one of the great free-thinking apostles of modernity. 

     

    III

     

    How did the Freuds’ marriage negotiate the age-old problem of combining sexual passion with enduring love? Is there evidence in Freud’s writings of his view of the institution of marriage, and of the possibility of lasting erotic attraction? And how confidently can we infer from his “scientific” remarks on conjugal life to the character of his own marriage? 

    There is not much to go on. Curiously enough, the index of the Standard Edition of the Complete Psychological Works of Sigmund Freud, the Strachey edition, has no entry for “wife” and only a smattering of references under “marriage.” Still, he famously wrote about the disjunction between romantic affection and carnal desire in 1912, in a paper titled “The Most Prevalent Form of Degradation in Erotic Life,” a psychoanalytic exploration of what we call the Madonna-Whore Complex, in which he observed that “where such men love, they do not desire, and where they desire, they cannot love.” 

    Freud attempted to explain this phenomenon by looking to the restrictive cultural mores of his time and the demand that both men and women delay sexual engagement well beyond the age of maturational readiness — “the long period of delay between sexual maturity and sexual activity which is demanded by education for social reasons,” which resulted in a “lack of union between tenderness and sensuality.” He went on: 

    In very few people are the two strains of tenderness and sensuality duly fused into one; the man almost always feels his sexual activity hampered by his respect for the woman and only develops full sexual potency when he finds himself in the presence of a lower type of sexual object; and this again is partly conditioned by the circumstance that his sexual aims include those of perverse sexual components, which he does not like to gratify with a woman he respects. Full sexual satisfaction only comes when he can give himself up wholeheartedly to enjoyment, which with his well-brought-up wife, for instance, he does not venture to do. Hence comes his need for a less exalted sexual object, a woman ethically inferior, to whom he need ascribe no aesthetic misgivings, and who does not know the rest of his life and cannot criticize him. 

    Freud’s own marriage was conspicuously de-romanticized and then rather quickly desexualized after its emotionally impassioned beginning; the letters, which include overt references to erotic longings on both sides, seem to confirm the fatalistic analysis of conjugal love and desire in his paper. Although Ernest Jones, Freud’s British colleague and hagiographic biographer, deemed this vast collection of correspondence “a not unworthy contribution to the great love literature of the world,” the analyst Martin Bergmann once quipped: “We have wonderful courting letters before marriage. After marriage we only get laundry letters. It’s all practical. We don’t have a single love letter after marriage.” 

    In an earlier paper, from 1908, called “’Civilized’ Sexual Morality and Modern Nervous Illness,” Freud provided a larger framework, a civilizational framework, for his dour view of marital life, in which he presented a somewhat disheartening view of the damage to intimate relations that is inflicted by the process of socialization that humans must go through in order to co-exist peacefully with others. “Experience teaches us,” he observed, “that for most people there is a limit beyond which their constitution cannot comply with the demands of civilization. All who wish to be more noble-minded than their constitution allows fall victim to neurosis; they would have been more healthy if it could have been possible for them to be less good.” And so he asks “whether sexual intercourse in legal marriage can offer full compensation for the restrictions imposed before marriage.” And he answers:

    There is such an abundance of material supporting a reply in the negative that we can give only the briefest summary of it. It must above all be borne in mind that our cultural sexual morality restricts sexual intercourse even in marriage itself, since it imposes on married couples the necessity of contenting themselves, as a rule, with a very few procreative acts. As a consequence of this consideration, satisfying sexual intercourse in marriage takes place only for a few years; and we must subtract from this, of course, the intervals of abstention necessitated by regard for the wife’s health. After three, four, or five years the marriage becomes a failure insofar as it has promised the satisfaction of sexual needs….The spiritual disillusionment and bodily deprivation to which most marriages are thus doomed puts both partners back in the state they were in before their marriage, except for being the poorer by the loss of an illusion, and they must once more have recourse to their fortitude in mastering and deflecting their sexual instinct….Women, when they are subjected to the disillusionments of marriage, fall ill of severe neuroses which permanently darken their lives….A girl must be very healthy to tolerate it, and we urgently advise our male patients not to marry any girl who has had nervous trouble before marriage.

    Many critical points could be made about the dark conjectures that Freud offers in this paper, which seem to be based more on personal experience than on scientific findings or cultural observations. Why, for instance, must married couples content themselves “with a very few procreative acts,” unless one believes that the sole purpose of sexual intercourse is procreation? What about sexual pleasure, a subject about which Freud extensively theorized? And his hypotheses about women’s fragility when faced with “the disillusionments of marriage” seem both ill-conceived and misogynistic — a blinkered attempt to understand female sexuality, which he regarded as murky and mysterious, in keeping with his idea of women as the “dark continent.” 

    In any event, Martha and Sigmund were married on September 13, 1886 in Hamburg. Fräulein Bernays became Frau Freud; she was twenty-five and he was thirty. Since a civil wedding on its own was not officially recognized at that time in Austria, the couple had to marry a second time under a chuppah with full Jewish ritual, despite Freud’s annoyance. The ceremony included the groom giving the bride a ring as well as crushing a glass underfoot in remembrance of the destruction of the Temple in Jerusalem — a memory of sadness in the midst of happiness, which in other contexts was a dissonance that Freud often studied. 

    Freud seemed to have felt abandoned within minutes of casting in his lot with Martha. “Once one is married,” he opined, “one no longer — in most cases — lives for each other as one used to. One lives rather with each other for some third thing, and for the husband dangerous rivals soon appear: household and nursery.” He added that, “despite all love and unity, the help each person had found in the other ceases. The husband looks again for friends, frequents an inn, finds general outside interests.” Hardly a chipper forecast for what lay ahead, but then Freud, despite his inner reserves of strength, was often the one who expressed anxiety. The task of reassurance fell to the unflappable Martha, who learned how to soothe him.

    The union produced six children in nine years, by which point Martha was thirty-four. After three children had been born, the family moved to Berggasse 19, near the university quarter in Vienna, where their apartment occupied an entire floor but was rather small and dark. Martha, who was in charge of their finances, set about looking after her new husband with the utmost attention and care for both his appearance and his comfort. She laid out and brushed his clothes for him — which were, as Martin, Freud’s eldest son, reported in his reminiscences, “cut from the best material and tailored to perfection.” It was said that such was the diligent nature of her caretaking that she put the toothpaste on his toothbrush. 

    While her husband worked up to sixteen and even eighteen hours a day, Martha carried around an enormous bunch of keys, the better to oversee a household that included, despite the family’s relative lack of money, a cook, a governess, two nannies, and a chambermaid. She ran the large family’s schedule like a well-oiled machine. Lunch was served promptly at one o’clock every day, a formal meal often featuring Tafelspitz or Rindfleisch, boiled beef and vegetables, with a horseradish sauce, a favorite of Freud’s. Sophie Freud, Martin’s daughter, recalled in a memoir that Martha maintained impeccable standards: “At each meal Mrs. Freud has a pitcher of hot water and a special napkin at her place, so that if anybody made a spot on the tablecloth she could hurry to remove it. Only her husband was permitted to make as many spots as he wished.” Dinner was at seven, after which Freud usually worked until midnight. 

    The children, who were Martha’s domain when young although of keen interest to their father as they grew older, seem to have been suitably well-behaved, their parents having instilled in them the importance of their father’s work. As Martin recalled, “There was never any waiting for meals: at the stroke of one everybody in the household was seated at the long dining-room table and the same moment one door opened to let the maid enter with the soup while another door opened to allow my father to walk from his study to take his place at the head of the table at the other end.” 

    Jenny Diski, in her review of Behling’s biography, observed that the exemplary bourgeois surface that Martha helped to provide — “the rigid table manners, ordered nursery, and bustling regularity” — enabled her husband to organize his “deeper, hardly thinkable thoughts” into “something that looked like a scientific theory.” By polishing that surface and keeping the clocks ticking in unison, Diski grandly concluded, “Martha was as essential to the development of Freudian thought as Dora or the Rat Man.” This may have a grain of truth to it, in the logistical sense that an orderly environment allowed Freud to concentrate on his work, but it strikes me all the same as something of an exaggeration, as though Mrs. Einstein were to be credited with facilitating her husband’s ideas about energy and mass.

    The simple truth is that Freud never had any plans for Martha to be an intellectual partner or to participate in his intellectual life in any way. He was happy for her to take care of his every need and to view him as the genius of his age, the equal of Newton or Darwin, just as she was happy to call herself Frau Professor after Freud was given his title in 1902. Although he seems to have initially been drawn to Martha’s cultural sophistication, Freud quickly felt the need to downplay her braininess, a demotion in which she willingly acquiesced. Early in their engagement he referred condescendingly to “the charming confusion in your dear sentences.” In his memoir Martin Freud recalls that when his parents had distinguished visitors over for dinner and a learned guest began to recite from The Iliad, Martha had already departed the premises. “My mother,” Martin writes, “who knew no Greek and, in consequence, was without any admiration for Homer’s immortal epic, had quietly withdrawn earlier.” 

    Then, too, there was the slight puzzlement that she expressed at her husband’s choice of profession, as though his high-flying speculations were beyond her ken. “I must admit,” she said, “that if I did not realize how seriously my husband takes his treatments, I should think that psychoanalysis is a form of pornography.” The Viennese analyst Theodor Reik reported that, based on conversations with Martha during walks that they took together, “I got the decided impression that she not only had no idea of the significance and importance of psychoanalysis, but had intensive emotional resistances against the character of analytic work. On such a walk she once said, ‘Women have always had such troubles, but they needed no psychoanalysis to conquer them. After the menopause they become quieter and resigned.’” That sentence, so dismissive of the real problems faced by herself and other members of her sex, is painful to read. 

     

    IV

     

    After the birth of their sixth child, the Freuds — or more precisely, Freud — decided to practice abstinence as a means of contraception. While he believed that pregnancy “is a normal state in a young woman,” he also held that coitus interruptus led to neurosis, a view that was based on some misbegotten Darwinian notion about the right and wrong “discharge” of semen. In some ways, of course, Freud was very much a man of his time and place, fascinated by the notion of “perverse” desires but also cautious and somewhat sexually inhibited. Freud professed to dislike Vienna, the hothouse capital of the Hapsburg empire, writing to his colleague Wilhelm Fliess that “I hate Vienna with a positively personal hatred.” But Vienna was all the same the center of intellectual life in Europe — a bubbling cauldron of ideas about literature, music, art, architecture, science, and philosophy — and therefore a stimulant to his thinking. It was also a city whose culture was intensely preoccupied with sex. While modernist painters and writers dug deeply into erotic life, the bourgeoisie had a more prurient and censorious attitude toward sexuality, especially as it applied to women and children. Women were expected to be chaste before marriage, and the youthful exploration of sexuality through activities like masturbation was vehemently discouraged. (Freud’s fine sense of humor did not desert him even on salacious subjects such as these. The problem with masturbating, he once observed, is knowing how to do it well.) 

    In his research and his theory, Freud indicted these Victorian mores as a source of neurotic conflict, and his views on infantile sexuality (“one of the sources of Freud’s enduring appeal, I believe,” observed Paul Roazen in his book Meeting Freud’s Family, “is that he so often took the side of the suffering child”) were remarkably forward-looking — and yet those same Victorian mores were reflected in some of his own constricted views on the subject of carnal pleasure. “I stand for an infinitely freer sexual life,” he wrote in a letter, “although I myself have made very little use of such freedom. Only so far as I considered myself entitled to.” Having proposed that the sexual drive was necessarily self-divided (as he believed all the drives were), he took the view that sex could never be completely gratifying.

    His ability to sublimate erotic desire in his work was remarkable, and in a small book about Leonardo da Vinci he observed that Leonardo’s apparent asexuality set him “above the common animal need of mankind.” In a letter written to Fliess in 1897, when he was forty-one, Freud made a reference to ceasing connubial relations entirely. “Sexual excitation is of no more use to a person like me,” he wrote, although he attested to some incidents of sexual intercourse with Martha later on, recording in his diary at the age of sixty that he had “successful coitus Wednesday morning.” He also wrote to Fliess that he often suffered from impotence. (Some scholars have argued that Freud’s decision to abstain from sex, although ostensibly to avoid having more children, may have stemmed in part from an unconscious desire to get back at Martha for her sexual reticence during their prolonged engagement.) According to Oliver Freud, who was born fourteen months after Martin, neither parent thought to talk to their sons about the birds and the bees. A family doctor was enlisted to teach the boys about sex.

    Freud’s own sexual behavior reminds us that he was not only the champion of psychological and sexual enlightenment in his work, but also the champion of the rewards — and demands — of sublimation and repression. In 1936 he characterized his married life with a startling degree of restraint in a conversation with Princess Marie Bonaparte, one in a bevy of female friends with whom he shared his thoughts, a circle that also included Minna Bernays, Lou Andreas-Salome, Hilda Doolittle, and Helene Deutsch. “It was really not a bad solution of the marriage problem,” he said, “and she is still today tender, healthy, and active.” When one compares this wan statement to his impassioned declaration to Hilda Doolittle, the poet H.D., who spent several years in analysis with him, that “I am an old man and you don’t think it worth your while to love me,” they almost seem to come from two different men.

    To his son-in-law Max Halberstadt, he conveyed his relief that his children had turned out well and that Martha “has neither been very abnormal nor often ill.” (Recall the patronizing passage in the paper of 1908 about “regard for the wife’s health.”) This was a far cry from the sentiments that he felt during their engagement, when he clashed with Martha’s mother about who had the greater claim to her daughter: “Marty, you cannot fight against it; no matter how much they love you I will not leave you to anyone, and no one deserves you; no one else’s love compares with mine.” Martha, by contrast, sounded more enthusiastic when describing her marriage to her granddaughter Sophie: “I wish for you to be as fortunate in your marriage as I have been in mine. For during the fifty-three years I was married to your grandfather, there was never an unfriendly look or a hard word between us.” With accommodation and compromise on her part came harmony in their conjugal relationship, whatever it may have lacked in the way of higher communion.

    This brings us to the sensational theory, originated by Jung and fanned over the decades by Peter Swales (also known as “the guerilla historian of psychoanalysis”), that after ceasing to sleep with his wife Freud embarked on an affair with Minna Bernays, Martha’s smart, witty, and acerbic younger sister. The two had corresponded while Freud was pursuing Martha, and clearly they had a companionable relationship. Among other things, they were both avid card-players. They lived together in the same household for forty years — first at Berggasse 19 in Vienna, where Minna moved in in 1896, and then at 20 Maresfield Gardens in London. Indeed, the sleeping arrangements in Vienna were weirdly intimate, as I saw for myself when I visited Berggasse 19. Minna’s small sleeping quarters were right next to Sigmund’s and Martha’s bedroom, and the only way Minna could get to her room was to go through the bedroom that the Freuds shared.

    The two also took trips together, and the rumors of their illicit liaison were fueled in 2006 by a German sociologist who found a yellowing hotel ledger entry written in Freud’s distinctive scrawl at an inn in the Swiss Alps where the psychoanalyst, then forty-two, and Minna, then thirty-three, stayed for two weeks in 1898. The couple had registered as “Dr Sigm Freud u frau” — as husband and wife. They took the largest room at the inn, which had the equivalent of a double bed. This last detail persuaded some Freud loyalists, such as Peter Gay, of the veracity of the rumors, although I myself remain dubious. For one thing, they might have checked into a single room because of Freud’s frugality; they were anyway used to being in close quarters, and it is unlikely, given the Victorian ethos of the era, that they could have rented the room if their actual unmarried relationship had been made clear. For another, despite his heretical approach to religious strictures and his theoretical advocacy of greater sexual freedom, Freud strikes me as a man fairly haunted by guilt, and he would have been disinclined to cheat on his devoted wife. There was also the fact that Minna was not particularly attractive, and female appearance was important to Freud. In one of his early courtship letters he told Martha that her nose and her mouth were shaped “more characteristically than beautifully, with an almost masculine expression, so unmaidenly in its decisiveness.” Such a microscopic analysis of his future bride’s less than ideal features suggests that he was a critical observer of female appearances. Then too, he himself had once noted to his future wife that “similar people like Minna and myself don’t suit each other specially.”

     

    Who, then, was Martha Freud? Why is she so hard to find amid the obsessive interest and research that swirls around her husband? Was she really just a contented Hausfrau, an efficient manager of a busy household, a firm, undemonstrative, but affectionate mother, and a devoted wife who “tried as much as possible,” as she wrote in response to a condolence letter after her husband’s death, “to remove the misère of everyday life from his path”? Assuming that Freud’s ideas about everything from female psychology to wayward sexuality to neurotic conflict were drawn even slightly from his own experience, what influence did Martha’s personality and her interactions with him have on psychoanalytic theory? It is hard to imagine her living with him for more than half a century and not having had some impact on him beyond making sure that his boiled beef was served on time. In addition to which, as Sophie Freud points out in her book, “some of Freud’s most fundamental discoveries were made by observing his own children. Mrs. Freud was his assistant in helping to transform the nursery into a psychological laboratory. But the children were not to know they were being used as guinea pigs. ‘Above all, the family must be normal,’ she said.”

     It is all the more surprising, then, that Martha has been of so little interest or consequence to the many biographers of her husband. In recent years there has been Katya Behling’s biography of her in German, as well as a novel called Mrs. Freud by the French writer Nicolle Rosen. There is also a short memoir by the Freuds’ long-standing housekeeper, Paula Fichtl, but it does not add much to the overall picture except for the author’s own adulation for Herr Doktor and her unstinting admiration for Martha’s capabilities and resilience. As Behling recounts in her biography, Martha was astonishingly courageous. When, shortly after the Anschluss, a group of armed SA men showed up at Berggasse 19, sending Paula into a tizzy, Martha is said to have maintained her composure, suggesting that “the gentlemen” might wish to deposit their rifles in the umbrella stand for the duration of their visit. And when another phalanx of Nazis stormed into their apartment a few days later, Paula’s upset was met with an ironic comment: “Surely, Paula, you did not expect the Nazis to come with flowers.”

    Is Martha’s featureless, sphinx-like presence an odd gap in the story, a glitch in the hermetic, all-consuming narrative of male genius? Or does it point to some deeper absence, some way in which Martha willingly went along with being sidelined from her husband’s larger concerns the better to ensure a peaceful home from which Freud could venture out with his unconventional, often alarming ideas? One might argue that in a certain fashion she was her husband’s muse — not a particularly glamorous or inflaming one, but a steadfast, earth-bound figure who helped him roam freely in his head. It was perhaps Martha’s very ordinariness — her “fully developed and well-integrated” personality, as Ernest Jones put it — that cast into relief the neuroses and the pathologies that Freud found everywhere he looked.

    Freud’s attitude to Martha, which verged on the fondly dismissive, is not irrelevant to the sense that psychoanalysis missed out on some of the big questions, particularly about women, and fell short of its liberating aspirations. Yet it is too easy to dismiss her as a martyr, unless one adds that she was a willing and seemingly contented one. Indeed, who is to say that she wouldn’t have played the role of helpmeet to a lesser figure as well, to a man who was not a genius? Or that, despite her intelligence and sensibility, she, like many people, was simply not driven to live up to what might have been her potential? Not every wife, no matter how intelligent or talented, wishes to compete with her husband. The competitive impulse, which looks to us invariably like a strength, can also derive from weakness and an infirm sense of self. Martha clearly knew who she was. Her dignity is undeniable. Her power derived from being the ultimate caretaker and ur-wife, presiding over the circumstances that facilitated Freud’s work. One might even see Martha’s abnegation of self — if abnegation it was — as an adult example of “altruistic surrender,” which was the term that her daughter Anna Freud coined for the children she worked with at the Tavistock Clinic who sacrificed their own well-being in the service of another child.

    In any case, Martha seems to have gone through something of a sea-change in the wake of her husband’s death at the age of eighty-three in September 1939, after years of excruciating jaw cancer. Aside from returning to lighting the Shabbos candles, she took to reading again, often sitting on the stairs or on a chair on the half-landing between the ground floor and the first floor at Maresfield Gardens, and even developed a curiosity about Anna’s patients, marveling at how expensive child analysis was. Although she remarked that life had “lost its sense and meaning” without her husband, she carried on in exile with her energetic and orderly existence, and appears to have relished being at the center of a crowd of doting and often celebrated visitors who came to see her. She might be said to have embodied the spirit of Goethe’s das Ewig-Weibliche, or Eternal Feminine, a concept that is profoundly alien to us but was a pillar of Martha’s culture. As a frail but vivid old woman who had been the lifelong companion of an undeniable visionary, she must have aroused curiosity of her own accord. Frau Freud died in London on November 2, 1951 at the age of ninety, and was cremated, her ashes joined with her husband’s in an ancient Greek vase in something called the Freud corner in the Golders Green crematorium, taking her mystery — her hopes, her disappointments, and her regrets — with her.

    The Poet Misak Medzarents, and Two Poems

    He was born in 1886 in Armenia, in a remote mountain village called Pingyan above the Aradzani River. It was not the typical Armenian village of the Ottoman Empire, subjugated by Turkish authorities and terrorized by marauding Kurdish tribes in the guise of tax collectors. Pingyan was an unusual place: it was secure and very nearly free, a place where life could be happy. After the Moslem conquest of Anatolia began in the seventh century, Armenians struggled to preserve their liberty in princely states that juggled alliances with larger powers and tried to hold their heads above the flood of invasion by Turkish and Kurdish nomadic groups. After the fall of the Armenian Bagratid capital Ani in the east, the extinction of the Armenian Cilician kingdom in the south in 1375, and with that, the end of national sovereignty, little strongholds of freedom endured to which men might make their way — the mountain fastness of Sasun above Lake Van, Artsakh (today’s Nagorno-Karabagh) in the east, Zeitun in the southwest, and, in the northwest of historical Armenia, the village of Pingyan. (The name derives from the diminutive, Benik, of its founder, a prince named Benjamin.) 

    The houses, churches, schools, mills, and monasteries of the village clustered on the steep mountainside, below a well-defended pass; the villagers went to their fields on the other side of the river across a bridge with a great iron gate that was locked at night. The name of Misak’s family, the large Medzadourian clan — the young poet was to shorten the name to Medzarents — suggests they were descendants of a noble “great house” (medz dun) who had heard of the fortress village and made their way there across Armenia, centuries earlier, from Ani or even farther east. The villagers spoke Armenian, not the Armeno-Turkish of much of the Armenian community in Anatolia. They used metal tokens inscribed in the Armenian alphabet for trade, and maintained a school in which the Modern and Classical forms of Armenian were taught. They were horsemen and marksmen, and in their homes books shared the walls with guns. Some families owned businesses in the distant Ottoman capital, Constantinople, and were prosperous; workingmen sent remittances home.

    It made for a happy boyhood, for a time. Misak learned Armenian classics and foreign languages at school, read poetry, rode horseback to the fields, heard work songs, dozed and dreamed under trees, listened to his mother’s prayers and to legends about water spirits, and played with his friends. The Armenian massacres that began in 1894 and were to culminate in the Genocide of 1915 affected even Pingyan, and the family moved for safety first to the city of Sepastia (Sivas), then in 1902 to the capital, where Misak’s father had a business. In Sivas, a Moslem butcher’s son stole up from behind and stabbed Misak in the street. He survived the attack, but it traumatized and weakened him. The family chose the comparative safety of Constantinople, with its large Armenian community: Misak went to school, made friends, frequented the offices of literary journals, read widely, and was a prolific writer. When he was twenty-one he published two small volumes. But that was the year before his death of consumption in 1908: his life, like that of his precursor Bedros Tourian, was destined to be short.

    Tourian had invented modern Western Armenian poetry almost singlehandedly, in the short years before his death in early 1872. Armenians closely followed European literary trends, and in the period between the lifetimes of the two poets Symbolism had become the dominant trend in poetry, music, and the arts. Through the use of dream imagery, indistinct allusions, exotic colors, and magical patterns of sound, Symbolists sought to open the doors of perception to an emotional and aesthetic sensibility towards a supernatural reality that, they believed, lay just beyond the everyday. The French poet Stéphane Mallarmé and the composer Claude Debussy most famously exemplify the movement; but it can be argued that its beginnings were much earlier, and that William Blake and Edgar Allan Poe were proto-Symbolists. I will have more to say about Poe presently, in the discussion of what I consider to be Medzarents’ greatest poem, which I will give in translation.

    Medzarents was described by his contemporaries, and sometimes derided, as a Symbolist, and he retorted defensively, in versified satire. The characterization is fair for some of his lyrics, but it is not complete — his work is not confined by narrow categories and definitions. There is evidence, in the form of a few fragments of poems, that Medzarents was developing a new style, sharper, harsher, more vivid, that reflected political events and a revolutionary consciousness. An analogous evolution from early Symbolist verses to a raw and jagged, sharply strident, revolutionary kind of verse typifies the work of the greatest Eastern Armenian poet, Yeghishe Charents, eleven years Misak’s junior, one of the great early non-Russian poets of the Soviet Union. Charents lived longer, but not by much: he was killed in November 1937 in the Stalinist purges. He wrote homoerotic verses that were unpublished in his lifetime and that still arouse controversy among the ultra-nationalist establishment in post-Soviet Armenia. We cannot know with any certainty what Medzarents would have written had he lived on in the turbulent twentieth century: he died on the eve of the Ottoman revolution and just a few years before the Armenian genocide. It is almost certain that he would have been murdered with the other two-hundred-and-fifty-or-so Armenian luminaries of the capital at the start of the Genocide in April 1915. The life was far too short; the future, far too dark.

    Let us consider one poem in detail, with its far-reaching ramifications. It is called Gaydzer, or “Sparks,” and was published September 10, 1905 in the journal Masis with another verse and the heading Yergu sirerk, “Two Love Songs”; and it was reprinted in the poet’s first volume of verses, Dziadzan, “Rainbow,” two years later. The political activist, publisher, and literary scholar Aram Andonian wrote the preface to the book. The poem consists of four quatrains; each line is seven syllables in length. (Armenian stress is regular: the accent falls on the final syllable of a word except for enclitics — short unstressed words, of one syllable, following a longer one.) The rhyme pattern of the poem in the original is a conventional one: ABCB BCCD BCBA ADDD. Here is my translation.

    The drumbeat of my soul and its tambourine’s

    Trill this night descend in laughter.

    Like cymbals clashing, they delight:

    My memories clap their hands together. 

     

    Accompanying the castanets’ song

    Your falcon’s eyes’ flame,

    Purple-born and fire,

    Burn within my soul again.

     

    Drunken on that intangible ambrosia

    With kisses redolent of flowers

    Sway there in mad dances

    The regal lady’s undulations.

     

    The dark night gently wears away!

    Oh, just once more, just once again!

    My soul intoxication craves

    In the rivulets of fire flowing from your gaze. 

    The poem is a reverie, an induced, dream-like act of imagination by the author at night within the four walls of his room. There are frequent images of fire, but the title, “Sparks,” suggests that the fire could ignite but has not yet; and before dawn the passionate but insubstantial vision fades, even though the poet pleads for it to stay. The first three stanzas progress through the five senses, each more physically immediate than the one before it. The first stanza bursts on the reader with vividly percussive sound: drums, tambourines, cymbals, and hands clapping. The second stanza moves to the sense of sight: fire and flame evoke bright red and rich gold, and the poet also uses the epithet dziranedzín to describe the fire in his imaginary beloved’s eyes. This word is a calque, that is, an exact translation of a foreign word according to its parts, of the Greek and Latin adjective porphyrogenitus, literally “born to the purple,” meaning “noble.” In Armenian it is, serendipitously, richly alliterative. It is a word that describes a quality while also making one think of a color, and part of the way it makes the connection is through the repetition of a sound. That is a game that certain special words can play in our minds, making us see and hear in a new and wider way. 

    The third stanza combines the remaining three senses of smell, taste, and touch: the poem overwhelms with voluptuous imagery of flowers, intoxication, and kisses. The poet is drunk on nectar: the Greek word, whose variant and equivalent is ambrosia, means, literally, “immortal,” and it is echoed by the compound word dzaghg-anúysh. Dzaghíg is “flower”; anúysh is a Classical Armenian loan word from pre-Islamic Persian meaning “immortal nectar” again. In Modern Armenian, with the diphthong reduced, anúsh also means “sweet.” The Armenian for “kiss” is hampúyr, which means literally a sharing of fragrances, such as sweetness. Through Medzarents’ choice of words, in sum, the senses all blend into one another.

    In the final quatrain of the poem, the sensuous vision fades as morning dawns, though the poet begs it to linger and prolong his self-induced intoxication. The final word of the poem, a Classical Armenian compound most familiar from the Hymn of Vesting of the Divine Liturgy, is hrahosán, “flowing with fire” — used here of Misak’s beloved’s gaze. The word here recalls the poet’s palette of crimson, purple, and gold; but in its liturgical context it alludes to the fire of the Holy Spirit that descended upon the Apostles in their upper room and conferred upon them the gift of tongues. Linguistic inspiration is precisely what this poem is about in the first place; but the final stanza also reminds one plaintively that the sumptuous scene conjured so richly by the poet’s imagination is insubstantial as a dream. The reader of English will be reminded here of Prospero’s words in The Tempest:

    Our revels now are ended. These our actors, 

    As I foretold you, were all spirits and 

    Are melted into air, into thin air: 

    And, like the baseless fabric of this vision, 

    The cloud-capp’d towers, the gorgeous palaces, 

    The solemn temples, the great globe itself, 

    Yea, all which it inherit, shall dissolve 

    And, like this insubstantial pageant faded, 

    Leave not a rack behind. We are such stuff 

    As dreams are made on, and our little life 

    Is rounded with a sleep.

    As I was writing the first lines of this essay, the sun was beating down and the branches on the pomegranate tree in our California garden were bent to the ground with fruit. They reminded me of the words in poetry that are heavy, dense with many meanings; we have already seen how Medzarents chooses ripe words bursting with juicy seeds. There are yet others in this poem that harken back to antiquity, to the archaic pleasures of noble hunters, to regal feasts. Shahení, “falcon-like”; pampish(n), “queen” — Armenian is an ancient language with roots in the Thraco-Phrygian akin to proto-Greek, layered with the vocabulary of many centuries, and these words are redolent of the Parthian age, the heroic epoch of the fourth century chronicled in the Epic Histories of P‘awstos Buzand. My teacher and friend Nina Georgievna Garsoïan, who passed away last year at the age of ninety-nine, published the definitive translation and study of that work, in which it is related that the Sasanian Persian Shah of that time, Shapur II, captured his perennial rival and enemy, the Armenian Arsacid Arshak II.

    The story is taught in every Armenian school: Shapur had his servants sprinkle Armenian earth on the ground of his banqueting tent. When Arshak trod upon alien Iranian soil, he meekly professed fealty and submission; but when he stepped on the earth brought from his native land, he angrily promised rebellion. At the royal feast later that fateful day, he derided Shapur as the usurper of the throne of his own clan, the Parthian Arsacids, and audaciously demanded his rightful place at the head of the table. Arshak was clapped in irons and imprisoned in a place called the Fortress of Oblivion, from whose dark confines no inmate ever emerged. The prisoners’ very names and memories were expunged from official records and forbidden by law to be spoken. But the Armenian king’s faithful eunuch Drastamat (a Parthian word meaning “welcome”) secured permission to entertain his liege lord one last time, with royal viands and dancing maidens. At the end of the revel, Arshak seized a fruit knife and plunged it into his own heart, lest he live past the end of the entertainment and return to the dim existence of a captive. Thus did the voluptuous vision end; and from his choice of words and images it is all but certain that Medzarents had the famous episode from P‘awstos’ history in mind. 

    Epameron — ti de tis, ti de ou tis? Skias onar anthropos, declared the poet Pindar in a celebratory ode to the victory of an ancient Hellenic athlete. “Thing of a day. What is somebody? What is he not? Man is the dream of a shadow.” Yet when glory rests upon a man, the moment in his life’s span is sweet as honey, he adds. But when the laurels wilt, the dream fades, the revel ends, the vision flies away, we have the poem, the play, the historical saga, the ode. What is it that gives these written and spoken words power over millennia? What is it they immortally capture? What can they do that the reality of an ordinary day cannot?

    Let us take Medzarents’ word, dziranedzín, in the second stanza of the poem discussed above. It means “born to the purple,” and thus combines a color with the idea of nobility. It has a particular sound-signature, a musical quality, in the poem, too, for it resonates very strongly with other words the poet has already used in the first stanza: dzidzágh, “laughter,” dzĕndzghá, “cymbal,” and dzap‘, “clap”. Armenian dziraní means “purple,” but it is also the color of Homer’s wine-dark sea, for thus we find it as dziraní dzóv (the latter word meaning “sea”) in the earliest Armenian poem, the Song of the Birth of Vahagn recorded by the historian Movses Khorenats‘i. The word dzirán means also “apricot,” a fruit originally from China that the ancient Romans called the Armenian plum. Apricots, the fruit a medieval writer in Asia praised as “the golden peaches of Samarkand,” were an expensive commodity, even as cloth dyed in the royal purple was precious. The Russian linguist Pyotr Kocharov has convincingly argued that the Armenian word is a very early loan from Old Iranian zaranya-, meaning “golden.” 

    That origin goes far to explain the semantics of the word in its development through time, its wide array of meanings and associations. As a thing, a dzirán is a choice fruit of fiery, red, and purple hues. As a color, dziraní evokes a range of hues over the spectrum; and as a quality, it confers nobility. (The Biblical and later Hebrew word for purple, argaman, is likewise freighted with the implication of nobility and great value. It is derived from Akkadian argamannu, which has the dual meaning of “purple” and “tribute.” An ancient Anatolian derivation is possible, but I would suggest again a very early Iranian origin, comparing for instance the Iranian-in-Armenian name Argawan, “precious.” In Hebrew magical texts, argaman serves as an acronym for the names of the angels Uriel, Raphael, Gabriel, Michael, and Nuriel, its meaning as a word on its own doubtless conferring additional nobility upon its celestial referents.)

    Now, one can call a word — say, dzirán — a signifier. That is, it signifies, refers to, names an object, a thing in the physical universe. The thing that the signifier refers to — in this case, an apricot — is the signified. In general, the signifier is arbitrary: there are different words in different languages for an apricot, and none of them has a provable relationship to the object it denotes. (I say generally, because most languages also have onomatopoeic words that echo the perceptible sound or quality of the thing or action that they describe.) It is also the case that the signifier is inadequate fully to express the reality of the signified. I can say “apricot,” but the colors of the fruit in the sun, its silky feel, the juice, the pit within — obviously one word cannot carry all these features, and even a page-long description would not be the same as an immediate experience. A long shadow thus falls between signifier and signified, over the centuries of human speculation about language: we feel instinctively that there must be a relationship, but there is not. To compensate, our ancestors crafted the myth of an Adamic language, the primordial, perfect speech in which the first man gave each of the animals its true name. 

    But look what Medzarents has done! As we have seen in the analysis of his chosen term, dziranedzín, his signifier is more, not less, as a word sparking various mental and aesthetic associations. It is a signifier that has more to it than the signified. I think that this can serve as one good definition of a poem (not to the exclusion of other definitions, of which there are many): a literary form in whose lexicon a signifier is to be encountered that is greater than the signified. 

    That definition has implications worth further thought, but I don’t want to say farewell just yet to Medzarents’ magical word dziranedzín, which means literally “to the purple born” but also much more. We encounter the English form of its Greek parent, porphyrogenitos, in 1839 in the poem “The Haunted Palace” by Edgar Allan Poe:

    In the greenest of our valleys
    By good angels tenanted,
    Once a fair and stately palace —
    Radiant palace — reared its head.
    In the monarch Thought’s dominion,
    It stood there!
    Never seraph spread a pinion
    Over fabric half so fair!

    Banners yellow, glorious, golden,
    On its roof did float and flow
    (This — all this — was in the olden
    Time long ago)
    And every gentle air that dallied,
    In that sweet day,
    Along the ramparts plumed and pallid,
    A wingèd odor went away.

    Wanderers in that happy valley,
    Through two luminous windows, saw
    Spirits moving musically
    To a lute’s well-tunèd law,
    Round about a throne where, sitting,
    Porphyrogene!
    In state his glory well befitting,
    The ruler of the realm was seen.

    And all with pearl and ruby glowing
    Was the fair palace door,
    Through which came flowing, flowing, flowing
    And sparkling evermore,
    A troop of Echoes, whose sweet duty
    Was but to sing,
    In voices of surpassing beauty,
    The wit and wisdom of their king.

    But evil things, in robes of sorrow,
    Assailed the monarch’s high estate;
    (Ah, let us mourn! — for never morrow
    Shall dawn upon him, desolate!)
    And round about his home the glory
    That blushed and bloomed
    Is but a dim-remembered story
    Of the old time entombed.

    And travellers, now, within that valley,
    Through the red-litten windows see
    Vast forms that move fantastically
    To a discordant melody;
    While, like a ghastly rapid river,
    Through the pale door
    A hideous throng rush out forever,
    And laugh — but smile no more.

    The poem is an allegory: the palace is the poet’s head; the two luminous windows, his eyes. Long ago the king of Thought reigned there in serenity; but madness invaded the fortress of the mind and now all is chaos within: the windows that were once bright are now “red-litten” (or, as a variant of the text has it, “encrimsoned”). Jerome McGann, in a study of Poe’s poetry, asserts that the key word of “The Haunted Palace” is “porphyrogene,” which he considers both noun and adjective, and “fundamentally, a synaesthetic figure, both chromatic and phonetic,” part of Poe’s “musical architecture.” That is, the word “porphyrogene” has manifold functions in the poem. It is both a word describing the king who sits in the palace, and his title. It evokes a color, royal purple, while also serving as a central chord of the poem’s music — and most of all, it sounds just right. 

    Poe stressed the sound-structure, the musicality of poetry, some have said, even more than its overt verbal meaning, and built his great and final poem, “The Bells,” around the single tantalizing word “tintinnabulation.” It was a poem in a chrysalis, ready to open its wings and fly as music: the Russian translation of the poem by the Symbolist Konstantin Dmitrievich Bal’mont became Rachmaninoff’s choral symphony The Bells; and Phil Ochs, the great American protest singer-songwriter of the 1960s, took its tintinnabulation to the guitar. Medzarents would have loved it. McGann, who studied “The Haunted Palace,” is mistaken, I believe, in thinking that Poe coined the word “porphyrogene”: we have reviewed its long and noble pedigree. But he is right to stress the centrality of the term: the word brings together, like its Armenian cousin but perhaps not as variegatedly, different concepts and different kinds of realities and perceptions. It is one of those poetic signifiers that are more than the signified. The word leads the reader, as Poe wrote in his essay “The Poetic Principle,” “to perceive a harmony where none was apparent before.”

    Valerii Bryusov, a Russian Symbolist poet of the turn of the twentieth century, was enamored of Armenian culture, and in 1916 he edited a volume of translations by various hands, including his own, called The Poetry of Armenia from the most ancient times down to our own days. The book, whose proceeds went to the relief of Armenian refugees from the Genocide, includes several of Medzarents’ poems, though not “Sparks.” (The poem was translated into Russian, badly and not by Bryusov, in an anthology published in Erevan in 1987.) But in 1924 Bryusov did translate Poe’s “The Haunted Palace” into Russian. The translation is of interest here for two reasons: first, the poem has an affinity to Medzarents’ “Sparks,” and it is worth knowing how a poet so strongly attracted to Armenian verse approached it. The other reason is perhaps less intuitively obvious and requires some explanation.

    Why are Medzarents’ poems so chromatic? If color was so important to him, wouldn’t it have been more sensible for him just to paint a picture? It is not an unreasonable question: in the years before reproducible media such as photography and cinema attained prominence in the arts, painters were a much more visible presence than they are today. This was no less the case for Armenian Constantinople or Tiflis than for Paris or St. Petersburg. Martiros Saryan’s palette blazes with the gorgeous colors of the Armenian landscape; Hagop Kojoyan’s delicate hues evoke Symbolist reveries. I think Medzarents intended for us to read his poems and then make the mental effort to see his colors, which are also sounds and emotions, within our own minds. Every viewer of a painting sees it differently because his mind is taking in the picture and organizing it in a way that is particular to him. In a sense he is participating in the creative process of the artist, supplying colors. Even more so, the reader of a poem with a rich chromatic lexicon is making a multi-faceted creative effort, provided that he is attentive, perceptive, and engaged. The neuroscientist Eric Kandel has argued, following the pioneering art historians of twentieth-century Vienna who employed the findings of psychology in their research, that an important part of what makes a work of art great is that it is ambiguous: it forces the viewer to consider and select various possible meanings. 

    As we have seen, the factor of ambiguity as a component of artistic mastery is eminently applicable to literature, as well as to painting. In the former case, the act of translation reveals the beholder’s share explicitly. It is a window into the laboratory of his mind. A great translator is himself an artist, and the choices he makes when rendering a particular term from one language into another enhance our perception of the poem. Bryusov in his translation of “The Haunted Palace” devotes particular attention to Poe’s palette: he chooses izumrúdnaya, “emerald,” to lend extra scintillation to Poe’s superlative, “greenest,” in line one. Poe’s “yellow, glorious, golden” becomes púrpur, zláto, “purple, gold”: two words have replaced three, with purple, in its metaphorical sense of regal, rendering “glorious” and adding chromatic variation to the scene. For “gold,” Bryusov has selected the archaic zláto, with its aura of storied antiquity, over modern Russian zóloto. Poe’s “pearl and ruby” are reversed to lal, zhémchug, with a marked Arabo-Persian loan-word for the ruby, lāl, standing in for the more common rubín. The color red is of unique significance in Russian art and symbolism — the common word for it, krásnyi, originally meant “beautiful.” As for Poe’s “Porphyrogene,” all Bryusov has to do is reach into Russia’s own Byzantine heritage and retrieve the well-known Slavonic calque upon the Greek original: Porfiroródnaya. (It is a queen, not a king, because thought is a word of feminine gender in Russian.) And so the original, with all its fertile ambiguities, is there in its perfection, with only an alteration in dress.

    This discussion of a poem of Misak Medzarents has treated the complexity and depth of his vision and language; and these subjects have prompted further considerations in brief about the definition of poetry itself and even the nature of the perception of literary art. After a brief sketch of the historical setting we turned from the dark prospects that lay beyond the short life of the poet to that which endures unshadowed, the work. That work draws upon the millennial resources of the Armenian language to express intricate visions; and sometimes the verses of Misak Medzarents are suffused with a pantheistic joy. Here, in my translation, is his greatest poem, his ars poetica, an invitation to the reader to travel farther into the realm of delight.

    With what intoxication… 

    To my friend Kegham Parseghian

    With what intoxication! The trees, in the light,
    Trees in the wind and the rain,
    Shaggy-tressed trees, trees that to the heavens strain,
    And saplings green, as sea waves
    Collapsing to the bosom of the corn strewn,
    Dazed, all drink of the swelling sunburst of life.

    With what intoxication! The grass above the soil rising
    Opens to the light, amazed,
    For the moment of its life the dewdrops that are its eyes.

    With what intoxication! Flowers in the dew,
    Flowers in the light, accustomed to the hand,
    Swoon in their expectation.

    With what intoxication!
    Every field and hill upon green brow
    Bind the flowers’ multicolored wedding band.

    With what intoxication! From the lovely plains
    And his bride, the dale, the red foot stork
    Returning home imbibes his longing’s satiation.

    With what intoxication! Blackbirds
    Drink the light and whisper it, alert,
    In orchards’ leafy fastnesses.

    With what intoxication! Snow-white jays afloat
    On high seem to swim as they perambulate upon the sky,
    Taking wing, gilded on the glowing firmament.

    With what intoxication! The turtledove her nuptial
    Bed arranges in the shady cover of a tree
    And waits for her husband in expectant passion.

    With what intoxication! The butterfly unfolds upon
    The tiny sparkling lakelet of its leaf
    And with its milky wings constructs its canopy.

    With what intoxication! On the purple plain
    To scarlet flowers hies the bee,
    To suck upon the little female nipples, luxuriating.

    With what intoxication! The seas are blue;
    River waters, abundant; springs, brimming;
    The rill, swiftly purling; lakes, azure —
    The rill, his locks green-fronded, tossing,
    Passes intimate among the willows, blue as the moon.

    With what intoxication! Clouds shake their heads,
    The wondrous liquid massing in their breasts;
    Which like a snaking thread descends
    To slake the hot gold thirst of earth.

    With that intoxication drink their fill
    Upon the parched soil’s universal burn
    All creatures born, all flowers grown:
    Drunk, the wave laps in embrace the perfumed tree;
    Thyme and mint and basil growing wild
    And storax, frankincense, aromas teeming
    Are embracing, drunken, every thing,
    All shapes and forms, all colors gleaming,
    All essences and elements: He
    Whose rainbow every thing reflects, returning,
    God! Who from God knows where has come to them.