Memoirs of a White Savior

    Last year, a student came to my office hours to discuss her post-graduation plans. She said she wanted to travel, teach, and write.

     “How about joining the Peace Corps?” I suggested.

    She grimaced. “The Peace Corps is problematic,” she said. 

    I replied the way I always do when a student uses that all-purpose put-down. “What’s the problem?” I asked.

     “I don’t want to be a white savior,” she explained. “That’s pretty much the worst thing you can be.”

    Indeed it is. The term “white savior” became commonplace in 2012, when the Nigerian-American writer and photographer Teju Cole issued a series of tweets — later expanded into an article in The Atlantic — denouncing American do-gooder campaigns overseas, especially in Africa. His immediate target was the “KONY 2012” video of that year, a slickly produced film — by a white moviemaker — demanding the arrest of Ugandan warlord Joseph Kony. But Cole’s larger goal was to indict the entire “White-Savior Industrial Complex,” as he called it, which allowed Westerners to imagine themselves as heroic protectors of defenseless Africans. Conveniently, Cole added, it also let them ignore the deep structural and historical inequities that had enriched the West at the expense of everybody else. “The White-Savior Industrial Complex is not about justice,” Cole wrote. “It is about having a big emotional experience that validates privilege.” Instead of assuming that they know what is best, he urged, Americans should ask other people what they want. And instead of engaging in feel-good volunteer projects that do not do any actual good, we should challenge “a system built on pillage” and “the money-driven villainy at the heart of American foreign policy.”

             The Peace Corps is a volunteer agency as well as an agent of foreign policy. So it has also become a frequent punching bag on several popular Instagram accounts that have echoed — and amplified — Cole’s critique. No White Saviors (906,000 followers) denounces the Peace Corps as “imperialism in action”; at the parody account Barbie Savior (154,000 followers), you can thrill to the pseudo-adventures of a Peace-Corps-like doll who takes selfies with orphans, squats over a pit latrine, and invokes famous humanitarians. (“If you put an inspirational quote under your selfie, no one can see your narcissism — M. Gandhi.”) Never mind that Cole’s original posts mocked digital activism such as the KONY 2012 video, which featured “fresh-faced Americans using the power of YouTube, Facebook, and pure enthusiasm to change the world,” as he observed. In the Age of the iPhone, apparently, the only answer to a misguided social-media campaign is another social-media campaign.

             And now the campaign has spread into the Peace Corps itself, as my student noted. She alerted me to Decolonizing Peace Corps (9,300 followers), which was started by three returned volunteers from Mozambique after the agency evacuated them — and the other 7,300 volunteers around the world — amid the COVID pandemic in March 2020. Later that spring, following the police murder of George Floyd, the Mozambique trio circulated a petition urging the Peace Corps to reckon with its allegedly racist and colonialist roots. They sent it to No White Saviors, who told them a petition “wasn’t going to be enough”; what the volunteers needed was, yes, their own Instagram account. Decolonizing Peace Corps went live shortly after. Inspired by campaigns to abolish the police, it demanded the abolition of the Peace Corps. “When you look at the Peace Corps and you look at the police and you see the origins, you ask yourself, can this really be reformed?” one of the account’s founders asked. “How can you reform a system that was founded on neocolonialism and imperialism by a country built on genocide and slavery?” The question answers itself.

             Meanwhile, as the pandemic continued to surge, No White Saviors stepped up its own attacks on the Peace Corps. Now that all the volunteers had come home, it wrote, the agency should permanently close up shop. “No more pretending inexperienced young people are actually useful in countries and cultures they are alien to,” No White Saviors wrote in 2021. “Instead you could pay skilled local volunteers to work more effectively. No more spending money on flights or evacuations, no need to teach language or culture.” Indeed, a volunteer back from Nepal added, the worldwide evacuation was itself a “gross display of resource privileges.” She still couldn’t figure out why the Nepalese people in her community had wanted her there, “other than maybe for ‘cultural exchange.’” But, as her air-quotes indicated, that “is not a good enough reason to invest so many resources into mostly fresh out of college, inexperienced Americans.”

             I served in the Peace Corps in Nepal, fresh out of college in 1983. My father was a Peace Corps director in India and Iran in the 1960s, when I was a child. The story of the Peace Corps is, in many ways, the story of my life. Now my student wanted to know: was it worth it? And for whom? In reply, I related an experience from my years in Nepal. It’s all about who gets saved, and from what, and why.

    I was teaching one day when a kid bounded into my classroom, breathless from running. “John-Sir,” he panted, “your friend is in the valley!”

    “Your friend” meant another white guy, a very rare sight in that part of Nepal. I taught at the top of a hill — we would call it a mountain, but in Nepal it was a hill — about two hundred and fifty miles west of Kathmandu. To get there, you took an overnight bus across the plains that bordered India and then walked north for three days. When I first arrived, some of the children thought I was a ghost. They had never seen someone so pale.

    I peered down into the valley, shading my eyes against the sun. It was a picture-perfect, blue-sky afternoon in the Himalayas. And sure enough, there he was: another white guy. Everyone wanted to know what he was doing there. So we cancelled the rest of the school day, and all of us — teachers, students, and curious hangers-on — started walking down the hill.

             As we neared the valley, a villager approached me excitedly. He was holding a crudely printed pamphlet, which flapped in the mountain breeze. “John-Sir, your friend sold me this book!” he exclaimed. “Only five rupees!” Five rupees was what a man earned for chipping away all day at the tractor road, until it got washed away by the monsoon and you started all over again the next year. A woman got three rupees, and a child got one. It was a day’s wage. I took one look at the pamphlet and right away I knew what it was. There was a figure that looked like Gregg Allman — long straight hair, close-cropped beard — nailed to a cross. And he was white, of course.

             We found “my friend” standing on the top of a rock at the bottom of the hill, selling dozens of pamphlets and collecting a large stack of cash. I walked straight up to him, ready for a fight. “What are you doing?” I demanded.

             “I’m saving these people for the Lord,” he said.

             Saving! Of course he was. I told him that they already had a Lord — about twenty thousand Lords, in fact. He said that Hindus were thieves and murderers. Recognizing his accent as German, I asked him why his own church had rolled over and played dead for Hitler. He replied that the Holocaust was a tragedy, but mostly because of the gypsies who perished.

             I told him to be fruitful and multiply, but not in those words. 

             He told me to fuck off, too. (So much for turning the other cheek, which was a pretty big deal to the white guy on the cross.) I told him that it was illegal to proselytize in Nepal, and that I would call the cop if he continued to sell the pamphlets. This was a total bluff, because “the cop” was a day’s walk away and — based on my encounters with him — most likely drunk. But the missionary didn’t know any of that, of course. So he hoisted his backpack, told me to fuck off yet again, and started climbing into the hills.

             By this time, a great sea of humanity had gathered. Nobody knew what the missionary and I were saying, but they could tell it wasn’t good. “John-Sir, you were angry with your friend,” someone said, as the missionary walked away. “Why don’t you like your friend?”

             “He’s a very bad man,” I said. “He doesn’t like your religion.”

             Then I heard one guy say, “Hey, John-Sir’s friend said if I believe in his religion, I’ll go to heaven and won’t be reborn over and over again.” And someone else said, “Hey, can I buy your book? I’ll give you six rupees for it.” “Run after John-Sir’s friend,” another guy said. “Maybe he has some books left over.”

             I cursed the missionary again, and then I cursed myself. Many years later I figured out why: we were both white saviors, in ways that still mortify me today. His saviorism was more direct and straightforward: Hindus were thieves and murderers, so he was saving them for his Lord. My own brand of saviorism was dressed up in the liberal multicultural dogmas of the era: I had to protect my villagers’ fragile and endangered belief system from an evil Western interloper. But I knew what was best for them, every bit as much as the missionary did.

             When you get on your high horse, you can disregard everything below you. There have been Christians on the subcontinent for nearly two millennia. But I didn’t know — or care — about that. Nor did I care what the villagers thought about what the missionary was saying (and selling). I knew what was “indigenous” to their “culture,” or so I imagined. And Christianity wasn’t.

             Most of all, I wanted to defend their “authenticity” against inauthentic outsiders. That echoes the type of white savior that you can find on Instagram today, condemning white saviorism. The classic version of white saviorism assumes that people in other parts of the world would be just like us, if they only had a better upbringing. The Instagram critique of white saviorism reverses that formula, insisting that they do not — or must not — embrace or imitate anything we do and say. That prescription is saviorist in its own right, wrongly and patronizingly ignoring the same local autonomy and agency that it claims to uphold. No White Saviors tells Western do-gooders to take account of what other people want, but it dismisses those wants if they do not correspond to its idea of the good.

             Eventually, the Peace Corps would save me from that kind of white saviorism, too — from the idea that people who seemed so different from me should forever stay the same. But to see how, as I told my student, we have to go back to the beginning.

    My father met John F. Kennedy by happenstance in August 1956, on a beach in the French Riviera. It was the day after the Democratic convention, when Adlai Stevenson — nominated for the second time for president — let the delegates select his running mate. Kennedy was hoping to join the ticket, but he was edged out by the Tennessee anti-mob crusader Estes Kefauver. Kennedy flew out that night to France, where my dad encountered him the next morning. When he heard that my father was at Yale Law School, Kennedy urged him to come work for the federal government. Most of the other students would head off to white-shoe law firms on Wall Street and elsewhere, Kennedy said. But my dad should go to Washington, JFK urged, because that’s where the action was going to be.

             I have told that story to my students many times, because it highlights the cavernous gap between Kennedy’s time and our own. Today, across the political spectrum, “Washington” is a symbol of dysfunction and decay. It is where action — and idealism — go to die. But not then. My father did go to Washington, where he held several government jobs. He also volunteered for Kennedy’s presidential campaign in 1960. At a meet-and-greet, he waited on a long line to shake the candidate’s hand. “Paul Zimmerman!” Kennedy exclaimed, to my dad’s shock and delight. “I met you on a beach in France, right after the 1956 convention.”

             Kennedy proposed a new volunteer corps during a famously extemporized 2 a.m. campaign speech in Ann Arbor, Michigan in October 1960. Between that night and his inauguration, over twenty-five thousand Americans sent letters to Washington asking how they could join the agency. The Peace Corps sent its first volunteers out in August 1961, to Ghana. By the end of 1963, seven thousand volunteers — known around the world as “Kennedy’s Kids” — were serving in forty-four countries. The agency received its largest weekly number of applications in the seven days after JFK’s assassination.

             My father signed on a few years after that. He made one phone call to the Peace Corps, which somehow patched him through to Bill Moyers. A fellow Texan and top aide to Lyndon Johnson, Moyers had been an assistant director of the Peace Corps under Sargent Shriver, the president’s brother-in-law. My dad recalled that Moyers asked him a single question: where he went to law school. In the era of the best and the brightest, that was all you needed to know. Moyers told my father to walk over to the Peace Corps office, where he was offered a job on the spot. His international “experience” was limited to two summer vacation jaunts in Europe, including the trip where he met Kennedy. But somehow, attending Yale Law School — and, I would imagine, working on a Kennedy campaign — qualified him to be the director of the Peace Corps in South India. He was thirty-two years old.

             So off we went to Bangalore. My mother took my brother (almost seven) and me (four and a half) first, stopping along the way in Israel to visit relatives. My dad followed with my ten-month-old sister, whom he fed with a bottle on the long Pan Am flight. We camped at a hotel for a few weeks until my parents found a house on a lovely compound adjacent to Cubbon Park, the city’s green oasis. When we got to Bangalore, in 1966, it had 1.4 million people. Today it is roughly ten times that size.

             My memories of India are dim, obviously. But the family albums show scenes that would fit nicely on the No White Saviors Instagram account. Here I am, riding an elephant at what looks to be a birthday party. Here’s my baby sister, in the arms of her Indian nanny. Here’s the rest of our large household staff: cook, cleaner, security guard. Here’s the whole family decked out in traditional Indian white cotton dhotis. The most remarkable photo shows my brother beaming next to Indira Gandhi, who towers a bit stiffly over him. Our family was visiting a volunteer somewhere in the countryside, and word spread that the prime minister was going to be disembarking at a nearby railway station. So we went down there to see if we could catch a glimpse of her. At such scenes, a child was customarily chosen to greet the arriving dignitary. The volunteer we were visiting held my brother aloft, urging officials to select him. Eventually young Jeffrey Zimmerman was ushered to the front of the crowd to hang a garland on Mrs. Gandhi. Talk about white privilege! It doesn’t get more privileged than that.

    What were we doing in India, other than making it — to quote Teju Cole — a “backdrop for white fantasies of conquest and heroism”? A few months before we arrived, Indira Gandhi met with Lyndon Johnson at the White House to discuss food shortages in India. Under its “Food for Peace” program, the United States agreed to send 3.5 million tons of wheat, corn, and other crops to India. Gandhi’s government also partnered with the Rockefeller and Ford Foundations to bring new methods and products to Indian farmers. Many of the Peace Corps volunteers whom my father directed were engaged in programs to improve agricultural practices, too. I can’t say I know what effect they had on farming, or whether local workers could have performed the Americans’ jobs more effectively or efficiently. But here’s what I do know: India kept requesting more and more Peace Corps volunteers, until the Bangladesh war in 1971 soured India-U.S. relations and led to their withdrawal. 

    That has been the pattern for sixty years: when the Peace Corps is asked to leave a country, it is for geopolitical reasons rather than programmatic ones. The people who actually host volunteers — school principals, health-clinic directors, and government officials — almost always want more of them, which isn’t something you see mentioned very often by the critics of white saviors. If the volunteers are so clueless and useless — or, worse, if they are actively visiting harm on other countries — why would these countries continue to invite them? When the question is asked at all, on anti-savior social media, the most common answer is that non-white hosts have “internalized” white-saviorism themselves. “I will always carry the assumptions of villagers that I am better, smarter, and work harder than my local counterparts,” a Peace Corps volunteer in Cambodia wrote in 2019, before the pandemic brought everyone home. “Does my contribution have enough value to outweigh the perpetuation of the white savior complex on the local people?” Note the choice of words here: his very presence foists white-saviorism on his hosts, who are feeble and powerless to resist its seductive wiles. That patronizes “local people” in the guise of protecting them.

             And here is something else I know: my parents forged deep and lasting human bonds in India. Their closest friends were the parents of an Indian kid I met at school. (It was a girls’ school, which is a whole other story.) The anti-savior critics might dismiss that as mere “cultural exchange,” which doesn’t do anything to change lives. They are wrong. At the Peace Corps’ twenty-fifth anniversary march in Washington in 1986, a man accosted my father and declared, “I’ve been waiting two decades to talk to you.” My dad had a self-deprecating sense of humor — that is to say, he was Jewish — and he told me he half-expected that the guy would raise his middle finger and shout, “Fuck you!” Instead the man said that he had gotten sick during the first months of his Peace Corps service in South India. He made his way to Bangalore, presented himself at my dad’s office, and announced that he wanted to go home. My father suggested that he go back to his village and give it one more try. He did just that, and he had lived there ever since. He married an Indian woman, had a bunch of kids, and worked in several different jobs in the health and education sectors. Perhaps he had once believed that he would save India, from disease or destitution or something else. Instead, he said, India saved him from the dull and altogether predictable life that awaited him back in the United States. America was his birthplace, but it wasn’t his home. India was.

    Our next stop was Teheran, where we arrived in January 1969. It was flush with oil money and foreign workers, which gave the city a lively boomtown feel. Out in the countryside, a different set of changes was underway. The “White Revolution,” declared a few years earlier by Shah Mohammad Reza Pahlavi, redistributed land and sent health and literacy workers into rural areas. It also enfranchised and educated women, which outraged some Islamic religious leaders. So did the Shah’s development of secular courts and schools, which reduced the clergy’s power and influence in both realms.

             Peace Corps volunteers participated deeply in all of these modernizing efforts. In the cities, they taught at universities, advised local governments, and worked in the arts; a handful of American professional musicians were even recruited by the Peace Corps as volunteers in Teheran’s symphony orchestra and opera. Rural volunteers taught in primary and secondary schools or served in health clinics, sometimes alongside teams of urban Iranians who were doing the same. The Peace Corps volunteers tended to be slightly older and more skilled than earlier cohorts, owing to political changes back home. In 1969, shortly after Richard Nixon entered the White House, a group of returned Peace Corps volunteers called for the abolition of the agency: as the war in Vietnam raged, they argued, the Peace Corps provided cover for American violence and imperialism around the world. (Ours is not the first college generation to look askance at the Peace Corps.) Nixon was only too happy to get rid of the agency, especially after volunteers briefly occupied its headquarters in 1970 to protest the bombing of Cambodia and the slaughter of antiwar protesters at Kent State. Nixon told his aide Lamar Alexander — later the Secretary of Education and then a senator from Tennessee — to seek an appropriations cut in Congress as the first step towards zeroing out the Peace Corps. But Patrick Buchanan, another rising young GOP politico, warned Nixon that slashing the Peace Corps’ budget at that point would provoke a “real storm” among the “Kennedyites.” Buchanan instead suggested publicizing drug arrests and other overseas blunders by immature volunteers, who would eventually dig the agency’s grave.

             No such luck. To the chagrin of Nixon and his cronies, the Peace Corps retained its strong bipartisan support in Washington. But it did shift its priorities towards older and more professionally prepared volunteers, to head off the charge that it was a haven for pot-smokers and draft-dodgers. (“‘Technically qualified’ was a euphemism for ‘not liberal,’” quipped Peace Corps evaluator Charlie Peters, who went on to a distinguished career in journalism.) Several married volunteers who worked for my father even brought children with them, which was unheard of before that time. Yet this was hardly a straitlaced crowd; if anything, antiwar protest and youth culture back home probably made them more radical in their politics — and more hippyish in their habits — than earlier volunteers. My mother tells a great story about taking a long trip with my dad to visit a group of volunteers up in Tabriz, in the northwestern part of Iran. Famished upon her arrival at their house, she noticed a tray of brownies and asked if she could have one. You won’t like them, the volunteers said; they’re old and stale. My mother, characteristically, would not take no for an answer. Biting into a brownie, she deemed it delicious. She ate another, and then another. Eventually, the terrified volunteers drew her aside and explained that the brownies were laced with marijuana. They pleaded with her not to tell my dad, who didn’t hear about it until many years later.

             Nor were the volunteers mere mouthpieces for the Shah, whom they mocked privately as “George” (as in “George Bernard Shah”). For a variety of reasons, including his recognition of Israel, the Shah was a strong ally of the United States. But he was also a brutal dictator, torturing dissidents and muzzling the independent press. Back in the States, protesters imagined that the Peace Corps — like other parts of the foreign policy apparatus — was helping to prop up pro-American tyrants. Around the world, though, a much more complicated pattern emerged. As the historian Beatrice Wayne has shown, the Marxist revolutionaries who overthrew Ethiopian ruler Haile Selassie often credited their changed political consciousness to the Peace Corps volunteers who taught them in the early 1960s, including the future senator Paul Tsongas. Most of the volunteers were middle-of-the-road liberals, not wild-eyed Marxists. But they led debates about Marx — and many other controversial matters — in their classrooms, which opened students’ minds to new ways of thinking and acting. Likewise, Peace Corps volunteers in Iran subtly undermined the same regime that their government was supporting. Opponents of the Shah befriended several volunteers, who quietly cheered the campaign against him. And as enmity towards the United States mounted, fueled by the simmering Islamic revolution, the volunteers reminded Iranians that not all Americans were anti-Muslim bigots or stooges for the Shah.

             But some of them were. The best decision my parents ever made was to send my brother and me to the Community School, which had been founded in the 1930s by American missionaries. It evolved into a thoroughly secular and cosmopolitan institution, patronized by families from dozens of countries — including many Iranians. That made it very different from the Teheran American School, where most of the American military and corporate types sent their kids. They didn’t speak any Farsi, and they called Iranians “ragheads.” I played Little League with them at the U.S. army base in the heart of Teheran’s Ugly American bubble, where they munched on hamburgers and made fun of the servants. Even as a nine-year-old, I knew that wasn’t right.

             Still, I was enormously proud of America. We were in Teheran for the Apollo 11 moon landing, a moment of huge American celebration — and, yes, American conceit — around the globe. And on United Nations Day, when everyone at Community School put on a short play about their country, we Americans staged the Thanksgiving story. We made a big Plymouth Rock out of papier-mâché, which we painted gray. I played Miles Standish, standing astride the Mayflower and shouting “Land ho!” as we approached the virgin wilderness. The folks over at No White Saviors would have a field day with this: it was a way to explain American power without addressing American conquest. A neat trick, if you can pull it off. But still I learned some important history lessons at U.N. Day. When I came home, I told my mother about the large Israeli delegation that marched under a blue Star of David. She laughed. The “Israelis,” she explained, were Iraqi Jews who had fled the Baathist revolution. They didn’t want to identify as Iraqis, which wouldn’t go over well in Iran. So they were Israelis instead. Problem solved.

             We were also in Iran for the epochal heavyweight boxing match between Muhammad Ali and Joe Frazier. As a Muslim, Ali was a huge celebrity across the Middle East. One evening, when my parents were on the road, our cook asked me why Ali was American but didn’t look like me. My Persian had gotten pretty good by then, and I explained to him—as best I could—that Africans had been enslaved by white people and transported across the Atlantic. “Really?” he asked. “You did that?” Well, yes. Not me personally, I said, but people who resembled me. The cook frowned and his brow furrowed. If our real purpose was to burnish white superiority in the non-white world, we weren’t doing a very good job of it.        

    Neither of my siblings ever evinced any interest in joining the Peace Corps. My brother doesn’t have as fond memories of those years as I do. My parents were away a lot, and I think he felt responsible for us while they were gone. And my sister was too young to recall much of anything. But I can’t remember a time when I did not think that I would be a volunteer.

             In 1983, when I graduated from college, I applied for exactly one job: the Peace Corps. The agency was weathering another round of assaults from another Republican administration. David Stockman, Ronald Reagan’s famously Scrooge-like budget director, recommended a big cut for the Peace Corps. Meanwhile, GOP apparatchiks at the Heritage Foundation charged that the agency had lost sight of its Cold War mission: to provide a “counterweight” to America’s Soviet foe. Indeed, the original Peace Corps charter in 1961 required volunteers to receive training in the “philosophy, strategy, tactics, and menace” of communism. To head off the Republican attacks — and, of course, to protect its appropriation — the agency reinstituted anti-communist lessons for trainees. When my cohort met in West Virginia, before we left for Nepal, a Peace Corps staffer wrote the charter language about anti-communism on a whiteboard and said, “This is what the law requires us to teach.” That seemed like something that might happen in a communist country, someone replied, and everybody laughed: the lesson was a joke, and we all got it. Maybe such instruction made sense when the agency was founded, but the Soviet Union was on its last legs by that point, and the entire exercise seemed anachronistic as well as propagandistic. 

             As in the Nixon years, meanwhile, the Peace Corps announced that it was recruiting volunteers who were “older and better-trained” (read: sober and politically moderate) rather than the presumably radical and irresponsible youngsters of the Kennedy-Kids prototype. I couldn’t find much evidence of that shift in my own group. We were still mostly “B.A.-generalists” (as the Peace Corps called us) coming straight from college, who didn’t know how to do much of anything. Out at my village, in the Pyuthan district of western Nepal, my closest Peace Corps neighbor was a woman who had earned an education degree and actually taught (imagine!) in an elementary school back in America. It took me three hours to walk to the house where she and her husband lived. On the weekends, I hiked over there and picked her brain. I still think she taught me more about teaching than anyone I have ever met.

             But some of the work was intuitive, at least to an American. As a friend once told me, teaching is a lot like parenting: for the most part, you do it as it was done to you. In Nepal, most elementary-level teachers taught via repetitive chants. You knew you were approaching a school if you heard a chorus of kids singing in Nepali, “Two times two is four, two times four is eight, two times eight is sixteen,” and so on. English instruction happened in the same manner. The kids would say an English phrase and then its Nepali translation, over and over again: “This is a cat. Yo biralo ho. This is a dog. Yo kukur ho.” That wasn’t the way I learned, and it wasn’t the way I taught. Writing with a rock on the charcoal-covered piece of wood that served as our blackboard, I drew a cat. Everyone said the Nepali word for it: biralo. I said, “This is a cat.” Then, pointing to the board, I asked a kid, “What is it?” With a bit of coaxing, she said, “This is a cat.”

    Soon I had the kids drawing their own pictures on the board and asking each other — in English — to identify them. I wrote little plays for them to stage, in which Ram and Sita (the Jack and Jill of Nepal) discussed feeding water buffaloes, cooking rice, and other day-to-day activities in their lives. Eventually the students wrote their own dialogues, introducing new characters — Gopal, Durga, Soorya — and different topics: marriage, childrearing, and school itself. A student played John-Sir in one memorable exchange, donning a makeshift straw wig to mimic my Jew-fro. It brought the house down.

             Was my way of teaching “better” than the standard Nepalese method? Call me a white savior, but I believe it was. And here is how I know: my students told me. Nothing I did was particularly creative or innovative; it just made sense to me. But to the students it was a revelation. They had never experienced a classroom that required them to engage and imagine in the ways mine did. Eventually my Peace Corps neighbor and I designed trainings in these methods for other teachers in the region. We held three week-long sessions, in different parts of the district, where teachers put on plays and wrote songs and — most of all — laughed and laughed and laughed some more, in big collective guffaws that echoed out of the schoolhouse and into the hills. Sometimes, I’m sure, they were laughing at the weird Americans and our goofy, smiley mix of informality and exuberance. (“Why are you people always so happy?” a Nepalese friend once asked me.) But I also think they were having fun, and learning. I can’t tell you what effect these trainings had on their own instruction, or on anything else that happened in Nepal. But I also can’t tell you whether the history course I taught last semester at Penn will make any difference in the lives of my American students, either. At some level, all education is an act of faith. You throw a whole bunch of stuff at a wall — or, in Nepal, at a charcoaled piece of wood — and you hope that something sticks.

             And you never know what will. About fifteen years after I left Nepal, I got a call from Akron, Ohio. “John-Sir?” a voice on the other end said. That could only mean one thing: it was one of my Nepalese students. He had passed the national school-leaving exam and had gotten a job teaching in another district, where there was a female Peace Corps volunteer. The first thing he said to her, she later told me, was “Hello, Miss. Do you know John Zimmerman?” You know the rest. They had fallen in love, married, and moved to Ohio to be near her family. My student happily recounted our classroom dialogues as well as the “Steal the Bacon” game (which I rendered, he said, as “Steal the Pig’s Meat”) that we played outside. But he also said he had hesitated to contact me, because he feared that I was still “angry” with him. “Angry?” I asked. “About what?” Near the end of a class, my student recounted, he stood up and started erasing the girls’ blackboard before they had finished copying the evening homework assignment. (Girls and boys sat on separate sides of the classroom in Nepal, each with its own blackboard, so I wrote the assignment on both of them.) I screamed at him, he recalled, the only time the students had ever heard me do that. “You should be ashamed of yourself!” he remembered me yelling. “The girls have as much right to learn as you do!”

             I’d love to know how No White Saviors would parse this little tale. At first glance, it is perfect fodder for their critique: heroic white dude rides into town, spreading truth — his truth, that is — to the benighted brown people. Who was I, they might ask, to police how the Nepalese thought or acted around gender? At the same time, though, I’m sure that many people asking that question fashion themselves feminists in some shape or form. If I had failed to censure my student for erasing the girls’ blackboard, wouldn’t I have been “complicit” — to quote another of our current platitudes — in sexism and misogyny? When I tell my American students this story, they often say that my real mistake was raising my voice; I should have spoken to the student afterwards, calmly explaining the error of his ways, instead of humiliating him in front of his peers. Perhaps so. But whatever means I used, was it OK — or even necessary — for me to correct him at all? If he actually believed that the girls were second-class citizens, who says he was wrong?

             Here’s my answer: the girls did. When my student started erasing their blackboard, he told me, they raised their voices in protest. Indeed, he said, that’s what drew my attention to him in the first place. You can say that I was behaving like a white savior, and you might be right. But I don’t see how you can condemn me for imposing my views of gender, without regard for local tradition, and then disregard the expressed wishes of the girls. That sets you up as the arbiter of what is really “traditional” in Nepal, even as it throws half of the country — its female half, of course — under the proverbial bus. It is imperial anti-imperialism, the saviorism of No White Saviors.

    I fell victim to that, too, in my fateful meeting with the missionary. Unlike the blackboard incident in class, where the girls objected to my student’s behavior, I didn’t hear anyone complain about the missionary. The complaints were in my head, and I projected them onto Nepal. Perhaps that’s inevitable, to one degree or another, when human beings from different parts of the world encounter each other. Every living person’s perspective is partial. People in Nepal projected onto America, too, in ways that I found endlessly fascinating. Americans made lots of money, I was told, and they also had sex 24/7. How are you so rich, a guy once asked me with a straight face, when all you do is screw? I laughed, but it was no joke for female Peace Corps volunteers who had to fend off the men who believed it. Racial minorities and LGBTQ volunteers have faced special challenges, too. Taboos against dark skin run rampant around the globe, as does discrimination against gays; in several Peace Corps countries, indeed, same-sex love is illegal.

             Should Peace Corps volunteers try to challenge that kind of prejudice, as best they can? I think so. And if that sounds like white-saviorism, I have news for you: the Peace Corps isn’t as white anymore. In 1990, four years after I returned from Nepal, just seven percent of Peace Corps volunteers were racial minorities; by 2020, when volunteers around the world were evacuated, thirty-four percent were non-white. That doesn’t mean that they will be perceived as such outside of America, of course. Back in the 1990s, I conducted oral histories with black Peace Corps volunteers who had served in Africa. Many of them were called anomalous nicknames like “black white” or “native foreigner,” which acknowledged the black volunteers’ connection to the continent while simultaneously underscoring their difference from it. Most of all, the volunteers learned how much difference a category like “black” could contain. Countries such as Kenya and Nigeria were as diverse as the United States, volunteers discovered, containing dozens of different ethnic and language groups. And these people often hated each other, too, just like racists back in the States. “The color of the foot on my neck doesn’t matter, as long as there’s a foot on my neck,” one black ex-volunteer told me, describing the ethnic prejudices he observed in Africa. “Discrimination is discrimination.”

             And love is love, as another black returned volunteer added. “I got an appreciation of where I came from, and the beauty of that,” he said. “Basically, we’re all the same.” Although it sounds hopelessly trite in these jaded times, love for our shared humanity, our universal commonality, is the only way to appreciate our differences and to communicate (literally: to make common) across them. Six decades after it was founded, the Peace Corps remains a foremost symbol of that ideal: love, respect, friendship, and cooperation across oceans and borders, cultures and races. So it will always be an easy target for politicians and ideologues who see it as either starry-eyed nonsense or blind propaganda. In 2017, the Trump Administration proposed slashing the Peace Corps’ budget by fifteen percent; two years later, Republicans in Congress introduced a measure to eliminate the agency’s appropriation altogether. Channeling his inner Donald Trump — or, perhaps, his subconscious Richard Nixon — co-sponsor Congressman Mark Walker (R-NC) said we should “put America first” by using the saved dollars to support disaster relief at home. The bill got 110 votes, all from the GOP, which wasn’t enough to get it through. By June of this year, as the pandemic abated, Peace Corps volunteers had returned to eleven countries. The agency is recruiting for about twenty other nations, too.

             Who will hear that call, especially on our increasingly particularist and identity-inebriated campuses? Very few people, I fear, if No White Saviors and their allies have their way. They use a different vocabulary than conservative critics, of course, sounding more like Noam Chomsky than Newt Gingrich: the agency is racist, neocolonialist, and so on. But the upshot is the same: the Peace Corps is a bad deal, so it’s time to bring everyone home for good. “If we actually solved the world’s problems, who would pay us?” a volunteer in Zambia asked in 2019, shortly before volunteers were evacuated worldwide. The Peace Corps is a junket, she said, wasting scarce resources that could be better spent in America. Mark Walker couldn’t have put it better himself. 

             But that spectacularly misses the point. I have never met a volunteer who thought that they solved the world’s problems, and I’m very sure that I didn’t. I wanted to help, and to learn, and to live. That is all. The scorn and the distrust of our present-day politics — on both sides of the aisle — cannot refute or erase the simple desire for connection. I left Nepal feeling humbled, not privileged. I learned how many different ways there are to be human, and how difficult it is to connect across the differences. But I also learned that we can do it, by trial and — mostly — by error. I learned that universalism is real, that it can be verified by experience, and that difference is not the last word on human life; that one is made wiser by the people one teaches, even when the differences are colossal; that there is more to the human world than power relations, even as we work to make them more equitable; and most importantly, that we mock altruism at our peril. The greatest danger of our moment is not racism or sexism, ableism or cisgenderism. It is cynicism, which tricks us into believing that we cannot overcome the differences — and, yes, the prejudices — that separate us from each other.

             When my service ended in Nepal, the teachers held a party for me at our school. The rice wine flowed, and everyone recounted ridiculous things I had said when I first got there and didn’t know any better. My favorite one: a teacher asked me to visit his house on the weekend, and I replied, in Nepali, “Yes, I’d like to meet your wife.” That’s what you might say in America, if a colleague invited you over, but nobody told me that the verb “to meet” in Nepal had another, earthier connotation. It was horrifying, humiliating, and altogether human: what used to be known as an honest mistake. “It doesn’t matter,” the teacher reassured me, patting my shoulder. And everyone urged me to “come back soon,” a common Nepali farewell.

             It wasn’t soon, but I did come back. Twenty-five years later, in 2010, I returned to the village with my seventeen-year-old daughter. The tractor road was finally finished, so the three-day walk had been trimmed to about eight hours. The first guy I saw said, “Hey, John-Sir, where have you been?” Everywhere, I wanted to say. Nowhere. It doesn’t matter. The school sponsored an impromptu “Welcome Home” reception, at which my daughter and I were pelted with flowers and doused in red tikka powder. I stood up to give a speech in my broken Nepali, which was slowly coming back to me. But I broke down in tears, overwhelmed by my good fortune to have loved — and been loved by — these good people. Was this sentimental? Sure. Was it condescending? Not for a second. Perhaps I had acted, in my worst moments, as their white savior – but they saved me too, from the rigid categories that we now deploy, loudly and dogmatically, to define and divide us. Black and white, Jew and Gentile, Nepalese and American: we are all different and we are all the same. Come back soon, John-Sir. Come back home. 

    Rapunzelania

    It is not wholly myself, this shadow tugging itself loose 

    as though it knew better where to go from here

    what to do and see

    before the ship leaves with the tide. Not a thousand ships, you understand,

    just the one. Tall and proud, I suppose, and in a dreadful hurry,

    what with the wind so uncertain.

     

    But we are not near the sea, my shadow and I,

    we are in a high, quiet place, one of those improbable towers —

    Rapunzel’s, if you will —

    and the prince is nowhere to be found, he is perhaps out sunning himself

    on a rock, like a gecko

    with destiny in the flick of his tail.

     

    Every love story ends badly: we know this,

    as we know that leaves will grow toward the sun, that the gods will die, 

    and the floodwaters rise. 

     

    I recall meeting a prince once or twice,

    and a few princesses, slender and lethargic,

    casting about for something to do.

    One of them made luxury backgammon boards, boxes 

    of fine walnut wood inlaid with bees and cicadas.

     

    Well. I will stand at the window, counting myself lucky. It is dark here, 

    and peaceful. Out flies my restless shadow. Let it. 

    You can only leave the tower if a prince climbs up your hair, and this is a challenge

    because you know princes are quite heavy, heavy as earth, 

    they come bearing happiness which swells as they climb,

    takes on its own shape, like a herald 

    riding the prince’s shoulder.

    Projection

    This is the work of your own hands

    strange to say, all these stories carved

    with a certain severity, each woodcut brought forward

    in strokes, a register of darkness

    removed.

              

    There’s a tower and a bridge. A figure midway across 

    watches the shadows below. Midway between

    what? Today and tomorrow, if you like, cross-hatched

    in black and white.

     

    There are ways to create gray here.

    Inking techniques, washes that float the heavy black 

    up off the page like a boat at high tide,

    like the slow stain of floodwaters seeping across mired ground

    undoing the earth, loosening grubby fistfuls

    to look for worms, asking endless questions, dragging weeds and fenceposts

    until their roots give way. The clouded sky turns over

    in its sleep and goes on raining.

     

    I am like a person who has thought so vividly of someone else

    for years that I no longer have room for them anywhere

    and when you put No on the table between us, hardly taller than

    the pepper grinder and salt cellar, I study it carefully

    – the real thing, not stooping to If but

    brief and apparent as sunrise 

    Blurred World

    It has been here as long as I have I think,

    settling in the sand.

    The current drags through like a wind, 

    carrying small reef-building creatures

    to this outpost.

     

    When I was a child it looked different. 

    It was a room then, the brass knobs spit-shined 

    and the drawers filled with carefully folded clothes;

    the sheets on the small bed slightly rumpled, 

    the corners tucked in. A lamp, a little table and chair. 

    These were mine. Back then, mine meant always,

    or, if you like, an open canister holding the two fragile wings

    of this and now: mother and father and sister.

     

    What brought me here? Algae covers the bedstead 

    like a tiny dark forest. A moon snail creeps across the floor

    in a straight line. 

     

    Sixty feet down, color leaches from everything, even the red charm I bought in Tibet 

    for my mother, whose body had crystallized. Red is for long life. “Her abdomen is frozen,” 

    the oncologist said. 

     

    The moon snail is almost out of sight. 

    Its gray, wrinkled body passes over the sand in that same straight line

    toward the end of the continental shelf

    and the drop-off. I watch for others, visitors,

    a gleam of something turning away

    in the blue. 

     

    It is not childhood, exactly. 

    A shoe drifts by.

     

    There are my table and chair

    too small to sit at. They say: all along

    you have not seen even one thing clearly

     

    Prehistoric 

    I burned a hole in cloth, watching the threads

     

    shrivel back like the stockinged legs of the wicked witch of the east, who leaves no path

    of return so you had better keep those shoes on

     

    while you learn to grow up with your mistakes like good siblings: you will fight but make up 

     

    so many times that finally the stories themselves will hold you together—just don’t turn to check the facts, for too often they will have given you a choice, and then what do you plan to do

     

    with all those shadows, the road leading back to where you first found the locations and faces that live in you 

     

    as names propped upright in the gloom, a museum 

     

    after the war, and the groundskeeper’s dead son playing

     

    among their shapes as if in another world, his head far away and you too

     

    only a child and not knowing whose memory

     

    you’ve got there?

     

    Exile to Exile 

    We live in a state of constant strife; the truths we relied on no longer seem certain; we are unsettled, shaken, adrift. Even those of us lucky enough to retain our health, homes, families, and jobs feel exiled from the lives we once knew. The bonds of friendship and community that secured us have loosened, and we are cut off in time as well: the past is more remote, the future unimaginable. In crisis, we face a world of danger: war, plague, apocalypse, discord, violence.

    How odd that the work of literature which most fully describes our state, our emergency, was not written recently, but over seven hundred years ago. The evils and the fears that troubled Dante in his Divine Comedy are uncannily like the ones that threaten us; his world is our world, so much so that the monsters of cruelty and lust that fill his pages are instantly recognizable — one need only change the names to see them as the figures of today’s miserable news.

    The ever-present relevance of The Divine Comedy has long been noted. Almost one hundred years ago, Osip Mandelstam, who experienced some of modernity’s worst darkness, observed that “it is inconceivable to read Dante’s cantos without directing them toward contemporaneity. They were created for that purpose. They are missiles for capturing the future… His contemporaneity is continuous, incalculable and inexhaustible.” This immediacy is no accident. Although he was writing in the early fourteenth century, the poet composed his poem to speak to us. All serious artists hope their work will endure, but Dante, as almost no writer before him, sought to address the future, not just the readers of his day. We know this because he says so. At the end of the Paradiso, he states that the Divine Comedy is for la futura gente, the people of the future.

    The poem is a compendium, an encyclopedia, a summa, and yet there is one subject above all that gives it urgency, and that is exile. Displacement and the longing for return have inspired so much literature, beginning with the Hebrew Bible and the Odyssey. But perhaps no writer is more deeply associated with the theme than Dante. He was cast out from Florence, his birthplace, in 1302, after having served for two years as a high city official, a victim of the interminable political violence of the city. He was stripped of all his property; cut off from his family; threatened with death by fire at the stake if he ever dared to come back. He never did, wandering poor and dishonored through Italy until his death in 1321. With a single exception, all his great works were written in exile; it was exile that made him a master artist, philosopher, and prophet. It was exile, and the causes of exile, that he needed to explain to himself and to warn us about.

    Before his banishment he was a lyric poet, one of the pioneers of the literary movement known as the dolce stil nuovo — the sweet new style — who wrote love poetry of a philosophical cast. He composed his verses for the admiration of a tiny elite of other poets, a group of a few dozen men whom the great Dante scholar Erich Auerbach compared to a “secret brotherhood.” His chief work of these years is the Vita Nuova, an autobiographical fantasy in poetry and prose about his love for the young Beatrice Portinari, her early death, and his spiritual reaction. (She was to become the avatar of salvation in The Divine Comedy, and thereby the most famous idealized woman in Western literature.) Though the work shows great inventiveness, with passages of aching poignancy, other sections are plainly insincere, and Dante’s language can often be vague and periphrastic. No one is sure what some of it means. Auerbach concluded that parts are “baffling…puzzling…obscure.” Dante himself found much of his early poetry dissatisfying, and even called it faticosa, laborious.

    How utterly different is The Divine Comedy. Written in a mode of Italian of Dante’s own invention — the volgare illustre, or elevated vernacular — and addressing the broadest possible audience, the poetry is characterized above all by its vividness, lucidity, and dramatic appeal. No longer does he want to dazzle a few friends with his brilliance and his erudite allusiveness. Now he wishes to inspire and explain and guide and admonish. He throws away the ambiguities that he once adored and turns instead to “clear words and precise language” — to borrow the terms Dante uses in the Paradiso (the third of the three “canticles” that make up the work) to praise the speech of Cacciaguida, his ancestor and hero. The new goal is visibile parlare — visible speech, presented in tercets of eleven-syllable lines that make the reader picture and see and hear the events and the figures that the poet describes in gripping detail.

    The leap of imagination that this devising entailed would not have been possible without the experiences gained in exile. One of the wanderer’s discoveries, for example, was the diversity of Italian as spoken throughout the peninsula — its richness, range, and dignity. This gave him the conviction that Italian could be a literary language of the highest ambition, worthy of the most serious subjects, a distinction previously reserved only for Latin. Another was his recognition that language, literature, and indeed all of civilization are part of an historical process, and thus evolve continually and ineluctably. We, who have been born and bred on historicism, may take for granted the idea of cultural change, but it was Dante, comparing the dialects in Italy and the Romance languages in Europe, who first recognized it. Words mutate; norms shift; poetry can grow, rather than repeat the forms of the past.

    At the same time, it was only in banishment that Dante came to know Latin literature well. At the end of the Middle Ages, books were still immensely expensive and incredibly rare, and the libraries of Florence were poorly stocked. To be sure, Dante knew Latin before his expulsion, but as has been brilliantly demonstrated by Ulrich Leo, it was only after he was cast out that Dante “read again, or in part for the first time, classical Latin poetry and prose.” It was in exile that he started to identify with another wanderer and founder of Italian civilization — with Aeneas; in exile that he learned to know deeply the Aeneid, whose Book Six, with its account of a descent into Hades, was fundamental for his creation of the Inferno. It was in exile that he came to see Virgil as the greatest of Latin poets, il mio maestro e ‘l mio autore, from whom Dante learned lo bello stile che m’ha fatto onore — “the beautiful style that brought me honor.” This transformative encounter perhaps took place in Verona in 1303, or in Bologna a year or two later: both cities had libraries of classical texts far richer than those in Florence.

     The new clarity of Dante’s poetic language was stimulated not only by high-minded considerations. It was also sparked by the rage that he felt at the injustice of his punishment. We are accustomed to think of The Divine Comedy as a poem of love, and of course it is: the love of God, the love of Beatrice, the love of “the love that moves the sun and the other stars.” But it is important to acknowledge that it is also a poem of hate. His wrath never left him. Such was its ferocity that even when he comes to the redemptive climax of the poem and describes the upper reaches of Paradise, where all is joy and harmony, light and laughter, he cannot resist pouring venom on Pope Boniface VIII, whom he held ultimately responsible for his ill fate. Until this point in The Divine Comedy, Dante had always portrayed Beatrice as an emblem of wisdom, grace, and love; and yet, quite shockingly, Dante makes her last words in the poem, spoken from the Empyrean, castigate the pope and pray for him to rot in Hell. His rage contaminates even her supreme purity.

    Dante’s lust for revenge is most palpable in the Inferno. Here we see him kick one sinner in the head, and pull another’s hair, and give thanks to God for the dismemberment of a man whose family had taken Dante’s house after his exile. Throughout the canticle, it is Dante’s seething and uncontrollable anger that powers the vividness of description. Consider, for example, the first punishment he recounts in the poem:

    These wretches who never were alive,

    were naked and beset

    by stinging flies and wasps

     

    That made their faces stream with blood,

    which, mingled with their tears,

    was gathered at their feet by loathsome worms. 

    Translated by Robert and Jean Hollander

    The outcast poet stings with those flies and wasps; he cheers for those loathsome worms. With insatiable avidity, in canto after canto, he piles up images of muck, filth, and fire; devils cut, gash, strike, jab, and smite the malefactors; the air is greasy, dark, and grim; the shrieks of the damned are piercing and hopeless; blood and bones and guts and shit are everywhere. 

    Crafting these scenes of punitive horror gave his language an edge and bite, a richness and directness, all without precedent in European literature. Only the darkest moments in classical poetry can rival Dante’s black passages for intensity. But no one before had ever lavished so much talent on detailed and sadistic depictions of utter gruesomeness, and at such length. For Dante ancient literature was fundamentally tragic, a poetry of death and madness. Conquering this mode made him the rival of the masters of the past. This is perhaps most evident in Canto 25 of the Inferno, where amid descriptions of monstrous rape and cruel deformation, Dante triumphantly proclaims, “Silence Lucan … Silence … Ovid.” Here he asserts that he has achieved the greatness and the stature that were implicitly promised him when he met these poets, and Homer and Horace, in Limbo, at the outset of his journey through Hell.

    Dante’s wrath in exile was not only an engine of creativity. It was also part of the great moral crisis whose reckoning and resolution lie at the heart of the poem. Dante knew that just retribution belongs to God, not to mortals. He portrays Hell as full of “furious shadows” whose unquenchable ire contributes to their agony; and he repeatedly depicts the damned biting and beating themselves in a bestial frenzy. He knew that to allow anger to take hold of his life was a form of mortal peril. Indeed, many scholars believe that Dante may have been on the verge of insanity and contemplated suicide in the early years of his banishment. The challenge for Dante was how to exit the dark wood in which he found himself in the beginning of the poem, to master havoc and despair.

    His few surviving letters show that he remained bitter and wounded to the end of his days. “We have long wept by the waters of Confusion” like “exiles in Babylon,” he wrote in an epistle in 1311, echoing the famous Psalm. And yet The Divine Comedy is finally a manual of deliverance, an incitement to hope and action. It makes its way to paradise. The book itself is the product of the perseverance that it preaches. It took Dante about fifteen years to complete, working at the pace of about three lines — one tercet — a day. The longer his banishment went on, the more ardently he poured all his powers into the book, which he completed shortly before his death in 1321. The work is full of exhortations to himself: to start, to continue, to persist to the end. “I leave behind bitterness and go for the sweet fruit,” he writes in the Inferno. “Nothing hinders the ascent except the darkness of night which binds the will with helplessness,” he reminds himself in the Purgatorio. He must go on because he “who robs himself of the world…and wastes his abilities, lamenting when he should rejoice,” sins against God.

    The poem was Dante’s path to recognition and redemption. In the wilderness of exile, his understanding of exile deepened, his vision expanded, and he became the author of perhaps the most visionary work in Western literature. The book encompasses not only the details of his own suffering but also the reasons for the predation, fraud, and violence that humans have inflicted upon one another everywhere. All the theology in the work notwithstanding, he is a close and unillusioned student of human life. And as his conception of the theme expanded, so did the audience that Dante imagined for his work: not just other cultured elites or fellow Florentine citizens, but potentially all readers of his time and into the future.

    While always grieving for his homelessness and dreaming of his return to Florence, he began to see that exile gave him an unprecedented perspective on human affairs, an intellectual advantage. In a world where everyone’s identity was defined by their membership in groups — especially the family, the parish, the city, and the political party (Guelf or Ghibelline, the two alliances that poisonously divided all of Italy and much of northern Europe too) — Dante came to view himself as a party of one, and to believe that this status was a badge of honor rather than blame. The loss of belonging stripped him of community, but his isolation gave him new hope of understanding.

    Exile became a voyage of discovery, for all its peril and humiliation. At times he compared his genius, his poem, and his fate to a ship on a sea. Like Ulysses, with whom he so strongly identified, he was bound “to gain experience of the world and learn man’s vices, and his worth,” for man was “not made to live like brutes or beasts, but to pursue virtue and knowledge.” This pilgrimage carried him from the dark wood, where all seemed lost and nothing could be seen, to ever greater distances, whose vantage point made all clear — first the extremities of Hell and then the mountain of Purgatory and finally the heights of heaven, from which he looks down and sees all of earth. From this vista he marvels not only at the sublimity of his apotheosis, but also at man’s unending propensity for violence, comparing the world to l’aiuola che ci fa tanto feroci, “the threshing floor that makes us so ferocious.” Even in heaven he remains troubled by earth. 

    Searching for the cause of this malevolence, he not only looked at mankind from above, he also turned inward. “If the world around you goes astray, in you is the cause and in you let it be sought,” he instructs in the Purgatorio. One source of the power of the book is his capacity to empathize with the sinners whom he describes in all their misguided passion and intensity. The author of the Inferno is a connoisseur of human failings. The vividly portrayed characters are fragments of himself, intended to mirror his own passions and weaknesses, his own temptations to moral error. Dante lusts like Francesca and Paolo, and feels with them the seductions of literature; he thrills with partisan hatred like Farinata; he considers suicide like Piero della Vigna; he longs for vengeance like Ugolino; he abandons all for the pursuit of knowledge like Ulysses. Dante was certain of his own genius and his supreme stature as a poet, but he was also sure that he was an Everyman, who shared in the common flaws of humanity, as do all his readers.

    In exile, Dante came to see the fallen nature of man after exile from the Garden as the unifying force of human history. It is the disorder of human desires that propels the unfolding of events and ties them all together, so that Florence of his day resembled Babylon and Troy and Thebes of the past, each brought to ruin by unbridled passions for power and gold. Like Augustine, Dante believed it was not sexual lust but the lust for dominance — what Augustine called the libido dominandi — that drove history. For Dante, the world is “blind,” a “bloody heap,” a “desert devoid of virtue,” a “nest of Leda,” whose offspring in their lordly arrogance and mindless cupidity doom everyone around them to destruction. The uncontrolled appetite of humans began at the very beginning: according to Dante, Adam and Eve tasted of the fruit and were expelled from Eden only six hours after their creation.

    It is a tragic view of life, and yet Dante entitled his poem a Comedy. (The name we know it by, The Divine Comedy, is a later convention.) He did so because, despite his belief in man’s propensity for sin, he also believed that by nature humankind is virtuous, noble, and dignified, and supposed to live a life of happiness and fulfillment here on earth, not just in heaven. The poem is the story of the recovery and the unfolding of this destiny — at least for one exemplary pilgrim, Dante. He spells out his idea of human goodness most succinctly in the two philosophical treatises that he worked on during exile. In the Convivio, or Banquet, he declares that “there is no greatness greater for human beings than virtuous activity, which is our intrinsic good,” and that “the fruits most properly ours are the moral virtues, since they are completely within our power.” In the Monarchia, or Monarchy, he writes that “ineffable providence” has set before us the goal of “happiness in this life, which consists in the exercise of our proper virtues.” In these opinions he was deeply influenced by Aristotle, whom he refers to as “the master of our life” in the Convivio. Dante had likely studied the Nicomachean Ethics under the guidance of a student of Thomas Aquinas at the Dominican convent of Santa Maria Novella in Florence in the 1290s.

    So the human darkness that Dante chillingly portrayed reaches its limits. Fundamental to his analysis of human nature was his focus on love, and its expression as desire and appetite. “Desire is spiritual motion,” he said, the force that drives our action toward evil or good. It is what defines and gives shape to an individual’s character and destiny. You are bound or you are liberated by what you love. The major distinction between the figures of the Inferno and those of the Purgatorio and the Paradiso, between the sinners and the saints, is what they desire.

    The figures in the Inferno suffer from insatiable appetite. They are ruled by their mouths, an image that Dante emphasizes repeatedly, from the mouth of Francesca trembling with passion for Paolo, to the mouth of Ugolino, stained with the blood of Ruggiero whose brains he has eaten in revenge, to the mouths of Satan at the very pit of Hell, stuffed with the three great traitors Judas, Brutus, and Cassius. The she-wolf of avarice who blocks Dante’s way in Canto I sets the tone for the entire canticle:

    Her nature is so vicious and malign

    her greedy appetite is never sated—

    after she feeds she is hungrier than ever.

    Translated by Robert and Jean Hollander

    Avarice by nature always seeks an end — more, more, more — that can never be reached, thus unleashing a kind of frantic madness in those who experience it. According to Dante, Pride and Envy also share in this bottomless avidity. They are the three ruling vices of life on earth; they typify rulers, popes, warriors, and merchants—the effective lords of the world — and determine the unjust character of their governance and domination.

    Dante believed that these vices were essentially anti-social, spurred by hatred, rather than love, for one’s neighbor. They necessitated ingiuria, an Italian term whose meaning combines both injustice and harm to others. To highlight this idea as central in the poem’s ethical philosophy, he composed this sentence for the exact midpoint of the entire Divine Comedy: “All these three forms of love cause weeping down below.” Humans, at least after the Fall, are ensnared and distorted by these appetites. The war and the greed and the bad government that plentifully characterized his own era were thus bound to repeat forever. 

    The blessed of the Purgatorio and the Paradiso are likewise spurred by desire, but theirs is a need for God, and for the Common Good. It is a vision of love as Agape rather than Eros. On earth humans are locked in a zero-sum game, fighting over spoils that can never be divided equitably. In heaven, by contrast, what they seek and what they share is love, each of the blessed radiating benevolence to others, like an undiminishing light reflected among an infinity of mirrors. It is a desire that yearns continuously for God, the truest object of love, and yet is continuously fulfilled. It is the zeal of the saints, and it pulls Dante upward with ever greater force as he ascends to paradise. And unlike the misplaced appetites of the fallen, it ends in the just and proper form of man. As he explains in the Convivio, “The supreme desire of each thing, the primal one given by its nature, is to return to its principle. And since God is the principle of our souls and made them like himself…the soul desires more than anything else to return to him.” Here exile is given a cosmic metaphor and becomes metaphysical. The greatest banishment is the banishment from God, and therefore from oneself. And rather than the mouth of the Inferno, here the essential human organ of love and desire is the eye, the recipient of heavenly light. “Beatitude itself is based upon the act of seeing,” Dante affirms, almost as if from experience.

    Dante’s tale is one of optimism, too, because he believed man to be free to regulate his behavior and affect his destiny. In the Paradiso, he calls free will the “greatest gift” of God to man. It is this gift, the capacity to choose and to act, that emancipates us from the inevitabilities of fate and makes life into a journey, although a perilous one. “In the middle of the journey of our life”: those are the famous opening words of the work. Auerbach has beautifully described the moral condition envisioned by Dante in The Divine Comedy: “God is at rest; His Creation moves along eternally determined paths, while man alone must seek his decision in uncertainty… Man alone, but man in every case regardless of his earthly situation, is and must be a dramatic hero.” Even though he had been selected by God for his visionary voyage through hell and heaven, an honor among the living previously given only to Paul, and despite the traveler’s aid granted to him by Virgil and Beatrice, Dante’s trajectory is still beset with difficulty; he still must struggle to understand, to persevere, to act correctly, to be worthy of the mission with which God has tasked him.

    In conceiving and writing The Divine Comedy, the poet reimagined his exile as a pilgrimage. No longer does he want primarily to return to Florence, the city on the Arno. Now he seeks to return to Jerusalem, the city of God in heaven. To do so, he must transform himself. He must learn to control his anger, and act with right measure in his pursuit of knowledge, and temper his lust for fame, and constrain his pride. Only by doing so will he be able to “return in beauty” like “new plants renewed with new-sprung leaves pure and prepared.” And as a man remade, he will also return as a new kind of poet, one with a new language resurrected from the ashes of the tragic and the erotic, and reformed in the mode of what he exquisitely calls the cantar di là, the song of the beyond. Dante’s voyage is one of hope, announced with nearly prophetic fervor in the face of disorder and destruction.

    We tend to picture philosophers and writers in the setting of the cloister and the café, the library and the lecture room, but Dante saw them — and himself — as figures in the world, and he knew that like everyone else in the world, they are routinely tossed aside by the vicissitudes of power in war and politics, and often they end up on the margins of society, in exile or worse. Consider the fates of those whom Dante admired most: Socrates, executed; Plato, enslaved; Aristotle, exiled; Cicero, murdered; Seneca, exiled and later forced into suicide; Ovid, exiled; Augustine, dying in a city besieged by Vandals; Boethius, imprisoned, tortured, killed. The parade of woe continued into Dante’s own time; indeed, almost all his mentors and friends among the poets of the dolce stil novo spent time in banishment. The greatest philosopher and theologian of the thirteenth century, Thomas Aquinas, had been assassinated (or so Dante believed). And as a member of the Florentine government council, Dante even participated in the decision to cast out a fellow poet, the remarkable Guido Cavalcanti. They suffer so, because everyone does.

    His own experience did not offer Dante much ground for a worldview of hope, and so he had to locate it elsewhere. And yet he stubbornly counseled that, all the evils notwithstanding, we act with determination and purpose. Early in his exile, he observed that “truly I have been a boat without sail or rudder, carried to various ports and inlets and shores by the dry wind that painful poverty blows.” But about five years later, having surveyed and described in the Inferno the damage that humankind perpetrates, he set out to find and show a better way in the Purgatorio. For the first line of this canticle, he wrote: “To course in better waters, raise the sails.”

    In our journey of recent years, we have seen some of the best and the worst of humankind. Like the wayfarer in The Divine Comedy, we have faced gruesome death, raged against despair, witnessed corruption, disorder and anarchy. We have asked what it means, and why we must suffer so. In my own experience of this symbolic exile, cut off from friends and from accustomed activity, I began reading Dante. He speaks to our troubles because he reminds us that the folly, vice, and depredation of humans will always be with us. We are always waking to find ourselves in the middle of the dark forest. Yet he leaves us believing that, armed with the recognition of both our vices and our virtues, we can find a better nature within our imperfect selves. If we can raise our sails, we can catch a breath of hopeful wind; and on that wind we can navigate life toward a better end, in the pursuit of knowledge and the reverence of love. 

    The Sorrow Songs

    I

    On New Year’s Day in 1863, Thomas Wentworth Higginson was stationed in the Sea Islands of South Carolina, presiding over a large group of Unionist whites and formerly enslaved black workers who had gathered to celebrate the Emancipation Proclamation. A prominent prewar abolitionist, Higginson had recently become commander of one of the first black regiments in the Civil War, the First South Carolina Volunteers. At the emancipation ceremony that Higginson had arranged, a local planter who had converted to abolitionism read the great document. There was a presentation of colors. Then, unexpectedly, as Higginson started to wave the flag, an elderly black man near the platform broke into song: “My country, ’tis of thee, Sweet land of liberty.” Two women joined, followed by others in the crowd. Higginson could hardly contain his emotion; everyone started to cry. “I never saw anything so electric,” he wrote in his diary; “it made all other words cheap, it seemed the choked voice of a race, at last unloosed; . . . art could not have dreamed of a tribute to the day of jubilee that should be so affecting; history will not believe it.”

    Higginson had arrived in the Sea Islands five weeks earlier. He was part of a wave of northern whites who, during the turmoil of the war, experienced their first sustained encounters with southern slaves. As Higginson later recalled, one thing that this group mostly shared was their view of black people as “intensely human.” This differentiated them sharply, in Higginson’s mind, from southern slave owners, who had long claimed to understand their slaves better than northerners but tended in practice to see them (according to Higginson) as “merely a check for a thousand dollars, or less, from a slave auctioneer.”

    Still, even though abolitionists tended to be relatively free from prejudices that were strictly racial in nature, they often harbored their own prejudices of culture and class. So while they never equated blackness with a permanent condition of barbarism, they did sometimes see black people as ignorant or backward, in need of education and guidance as they made their first steps in freedom. The result was a variety of responses, sometimes sensitive and sympathetic, sometimes condescending or patronizing, often a complex combination. 

    With all its complications, this sudden mass encounter between newly freed slaves and white people who saw them as “intensely human” opened a fresh possibility in the study of slavery: the formal study of slave culture.

    At the time such a thing did not exist. This neglect obviously owed a good deal to slavery and bigotry. The largely illiterate society of black slaves was not seen at the time as contributing anything to American culture as it was usually understood. Slaves seemed to have no role in driving the progress of civilization toward its pinnacle in white European and American society — aside from, in the minds of southerners, performing the important role of propping it up. Writings on slavery up to that point had focused more on its political, economic, and social effects, especially its effects on whites, and abolitionists had catalogued slavery’s physical cruelty to and deprivation of blacks.

    The neglect of slave culture also reflected a broader lack of attention given to oral “folk” cultures, which tended to be either taken for granted or considered unimportant at a time when certain conceptions of cultural hierarchy and social evolution still held sway. White southerners did not become interested in black music or culture until after the Civil War, when they felt a strong wave of nostalgia for old plantation life. Some black music was known in the North before the war, but it was mostly minstrelsy. 

    White southerners and northern travelers surely would have seen and heard the cultural practices of enslaved blacks in the South, but such observations usually did not make it into print. “I believe they have no history or a very short one,” one sympathetic southern white woman wrote, in connection with the slave songs, just after the Civil War; her mother had made a small collection around 1840, but it was never published. To the extent that antebellum Americans discussed slave songs, and slave culture more generally, it was primarily to argue about whether they provided evidence of contentment or sorrow.

    That changed when northern abolitionists, missionaries, soldiers, and teachers encountered newly freed slaves during the war. Almost from first contact, slave music received significant attention. When Reverend Lewis Lockwood arrived as a missionary to the Union post at Fort Monroe, Virginia, in early September 1861, he took down notes about the music that he heard from southern slaves who had fled there. His initial report, written the day after he arrived, included the first written record of the song “Go Down, Moses.” By December, Lockwood had the full text of the song, which was published in the New York Tribune and the National Anti-Slavery Standard:

    The Lord by Moses to Pharaoh said: 

     “O let my people go!

    If not, I’ll smite your first-born dead, 

     Then let my people go!”

    O! go down, Moses

    Away down to Egypt’s land,

    And tell King Pharaoh,

    To let my people go!

    The song caught on with the abolitionist crowd. Henry Ward Beecher’s Brooklyn congregation started to sing it, and sheet music went on sale at the Pennsylvania Anti-Slavery Society office in Philadelphia.

    By that time, reports were also starting to come in from the Sea Islands, which became by far the most important site for sustained encounters between northern whites and newly freed blacks during the war. The Navy had secured a beachhead there very early, in November 1861, as part of the larger effort to blockade the Confederacy. White planters fled the islands. They tried to convince their slaves to go with them, but the slaves — thousands of them — thought better of it and stayed put. These people technically became contraband of war — and, soon, part of a vast experiment in education and free labor centered on the islands around Port Royal, including Hilton Head and St. Helena. Abolitionist groups in Boston, New York, and Philadelphia raised money and recruited volunteers to serve as teachers, missionaries, and plantation superintendents.

    In Philadelphia, the city’s Port Royal Relief Committee was headed by James Miller McKim, who also helped run the Pennsylvania Anti-Slavery Society office where “Go Down, Moses” was for sale. McKim was an abolitionist of some distinction. Back in 1849, he had been the one to open the box containing Henry “Box” Brown, the enterprising slave who shipped himself from Richmond to Philadelphia, and a decade later McKim and his wife had helped John Brown’s wife bring Brown’s body back north from Harpers Ferry. In early 1862, McKim quickly raised more than five thousand dollars to send to Port Royal. A few months later, the committee decided to dispatch McKim himself to provide a report on conditions there. McKim brought along his eighteen-year-old daughter, Lucy, as his assistant. The McKims had raised their children as staunch abolitionists. Lucy’s younger brother Charles Follen McKim — later the architect of New York’s Pennsylvania Station and other masterpieces of neoclassical modernism — was named for a German-born abolitionist who had died in a shipwreck off the Atlantic coast in 1840. In the 1850s, Lucy briefly attended and then taught at Eagleswood, an experimental school in New Jersey run by the abolitionist couple Theodore Dwight Weld and Angelina Grimké. 

    James and Lucy McKim arrived at St. Helena Island on June 8, 1862 and stayed for three weeks. Within a day, Lucy felt that she was seeing slavery with fresh eyes. “The pro-slavery folks were right when they said, Go South, Abolitionists, if you want to have your views changed on the subject of slavery,” she wrote to her mother. “Mine have been most profoundly.” Even Garrisonian extremism seemed like a mild response now that she had seen the conditions of slavery for herself. “How lukewarm we have been!” she exclaimed. “How little we knew!” 

    She began to jot down notes about everything she saw. She and others in the Sea Islands seem to have been motivated by a desire to record what they could before the evidence of slavery was swept away by change. In one letter, Lucy described the typical layout of a plantation’s “Quarters — or ‘Nigger-houses’” for a friend. “These were built in two long rows, facing each other, with a row of beautiful trees . . . down the center,” she explained. Every cabin had a small vegetable garden and fig tree out back. Inside, the cabins were “inconceivably small & filthy. With but few exceptions, the occupants lie on the floor, in bundles of rags, that have been reduced to one common dirt color.”

    But what captured Lucy’s attention most were the songs. Like many people who came to Port Royal, she started copying down the tunes and the words just days after her arrival. Unlike many others, she remained entranced by the songs even after her return to Philadelphia in July. A trained musician who had already spent a few years teaching piano, she was better prepared than most people to respond sensitively to them. She asked her friends in Port Royal to keep sending her examples, and she began to prepare a few piano arrangements for publication. “Poor Rosy, Poor Gal,” the first in her series of “Songs of the Freedmen of Port Royal” — and the second published slave song arrangement in the United States — came out in October: 

    Before I spend one day in hell,

      Heab’n shalla be my home.

    I sing and pray my soul away.

      Heab’n shalla be my home.

      Poor Rosy, poor gal!

      Poor Rosy, poor gal!

      Poor Rosy, poor gal!

      Heab’n shalla be my home.

    She sent a copy to Dwight’s Journal of Music along with an introductory letter — which, when it was published in the journal in November, became the first article ever to describe the musical style and technique of the songs. 

    Beyond their purely musical interest, she noted, the songs were also “valuable as an expression of the character and life of the race which is playing such a conspicuous part in our history.” She went on to provide a perceptive reading of what the songs revealed: “The wild, sad strains tell, as the sufferers themselves never could, of crushed hopes, keen sorrow, and a dull daily misery which covered them as hopelessly as the fog from the rice-swamps,” she wrote. “On the other hand, the words breathe a trusting faith in rest in the future.” 

    Lucy’s second arrangement, “Roll, Jordan, Roll,” appeared in January, just after Emancipation Day:

    March, angels, march!

    March, angels, march!

    My soul am rise to heav’n Lord,

    Where de heav’n’e Jording roll.

    Little chillen sittin’ on de Tree ob Life,

    Where de heav’n’e Jording roll,

    Oh! Roll, Jording, roll, Jording, Roll, Jording, roll!

    She seems to have had a plan to publish four more, but life intervened. Young men to whom she had romantic attachments died in the war, and she soon entered into a courtship with Wendell Phillips Garrison, the son whom William Lloyd Garrison had named for his longtime abolitionist ally. She spent her time teaching piano and waiting for Wendell to find a steady job so they could get married.

    Meanwhile Thomas Wentworth Higginson had arrived in Port Royal, called there in November 1862 to take command of the First South Carolina Volunteers, a regiment raised from the freedpeople on the Sea Islands. He got the position because he was one of the most fiery abolitionists in America. As a young man in Massachusetts, he spurned the law because he saw it as nothing but a system for the protection of property. Eventually trained as a minister — he dropped out of Harvard Divinity School because he was dissatisfied with the professors, then returned a year later — he took a position in Newburyport, where he advocated abolitionism and temperance. Neither stance went over well in a town with a rum factory, and he was asked to resign. 

    After 1850, Higginson upped the ante. He became involved in the Underground Railroad and also helped to form Boston’s Vigilance Committee, which tried to protect alleged fugitive slaves from being recaptured. In 1854, he bought the axes for an ill-fated attempt to free the alleged fugitive Anthony Burns from Boston’s Federal Courthouse. In the scrum, he got hit on the chin and acquired a scar that he carried for the rest of his life; but a federal marshal was killed, an event that Higginson later recalled with perhaps a hint of pride (and certainly little sadness) as the first casualty of the Civil War. His next act was to take rifles and ammunition to Kansas, where he learned about the violent exploits of John Brown in the battles going on there between slaveholders and free-soilers. A few years later, when Brown needed help planning and funding his raid on Harpers Ferry, the radical Higginson was one of the so-called Secret Six who provided assistance. After the raid went awry, he was the only one of the six courageous or foolhardy enough to spill his secret; he also raised money for Brown’s defense and even planned an armed raid to rescue Brown before Brown learned of the scheme and called it off.

    Higginson redirected his energy into an oblique defense of Brown mounted in a series of deeply researched articles on slave insurrections — Nat Turner, Denmark Vesey, maroons in Jamaica and Surinam — which appeared from 1860 to 1862 in a new magazine called the Atlantic Monthly. He also prepared for war. He did not volunteer initially because his wife was ill, but by 1862 he could no longer resist — particularly when he got the invitation to serve with black soldiers in South Carolina, a position he saw as the fulfillment of Brown’s legacy. “As many persons have said,” he wrote on board the ship taking him south, “the first man who organizes & commands a successful black regiment will perform the most important service in the history of the War.”

    Higginson thought he already knew all about slavery, but several months in the Sea Islands with his men showed him that he still had plenty to learn from the slaves themselves. At one point his black corporal, Robert Sutton, led the regiment up the St. Mary’s River to the plantation from which he had escaped, where the woman of the house remarked with some disdain that she had once known Sutton as “Bob.” Sutton took Higginson to the plantation’s slave jail for a look at the chains and stocks; in their presence Higginson felt himself choking, as if he couldn’t swallow or breathe. He realized that Sutton had a “more thorough and far-reaching” understanding of the slavery problem than “any Abolitionist.” That sense continued to grow as Higginson got to know his men better and talked to them about their experiences. “It was not the individuals, but the ownership, of which they complained,” he later recalled. “That they saw to be a wrong which no special kindnesses could right. On this, as on all points connected with slavery, they understood the matter as clearly as Garrison or Phillips.”

    Higginson trained his men and led them on sorties during the day, and at night he listened to them as they gathered around their campfires to sing “these incomprehensible Negro Methodist, meaningless, monotonous, endless chants,” as he wrote in his journal after about ten days, “with obscure syllables recurring constantly & slight variations interwoven, all accompanied with a regular drumming of the feet & clapping of the hands, like castanets.” There was more than a little primitivism in this initial response, a sense that the songs were barbaric relics beneath the attention of cultured people. But soon Higginson learned to love the music, looking forward to it every night, running out of his tent when he heard a new song, writing down all the lyrics he could decipher (sometimes with pencil and notebook hidden in his pocket) and calling on Sutton to help fill in the gaps. “When I am tired & jaded in the evening nothing refreshes me more immediately than to go & hear the men singing in the Company streets,” he wrote after about ten months in South Carolina. He was particularly moved by the “graceful and beautiful” song “I Know Moon-Rise”:

    I know moon-rise, I know star-rise,

      Lay dis body down.

    I walk in de moonlight, I walk in de starlight,

      To lay dis body down.

    I’ll walk in de graveyard, I’ll walk through de graveyard,

      To lay dis body down.

    I’ll lie in de grave and stretch out my arms;

      Lay dis body down.

    I go to de judgment in de evenin’ of de day,

      When I lay dis body down;

    And my soul and your soul will meet in de day

      When I lay dis body down.

    “What ages of exhaustion these four words contain,” Higginson marveled at the repetition of “lay dis body down.” “Rivers of tears might be shed over them.”

    This was not idle talk; Higginson knew real artistry when he saw it. In addition to his abolitionist exploits, he was one of the most prominent literary men in America. He had studied French with Longfellow at Harvard and would later serve as poetry editor of The Nation for twenty-five years. Earlier in 1862, not long before he took command in the Sea Islands, a young woman from Amherst named Emily Dickinson had sent him, unsolicited, a few of her poems; recognizing their merit, he wrote back and kept up a correspondence with her for the next two decades. After her death, her family gave her poems to Higginson, who arranged for their first publication — today probably his chief claim to fame.

    He lasted only a few months in South Carolina. He was wounded in an upriver raid in July 1863, spent a month recovering in Massachusetts, and then contracted malaria soon after his return. Back in New England, Higginson started writing articles about his experience for the Atlantic, which were eventually collected in 1870, along with some of his journals and letters, in his book, Army Life in a Black Regiment. One of the articles, which appeared in June 1867, discussed “Negro Spirituals.” Higginson included the lyrics to three dozen songs as well as an account of how the songs were composed, which he had learned on a boat ride from Beaufort to Lady’s Island. His oarsman recalled a time when he had started a new song while carrying loads of rice. “De nigger-driver, he keep a-callin’ on us,” the oarsman explained. “Den I made a sing, just puttin’ a word, and den anudder word.” His song was called “The Driver.” When he started singing it there on the boat, the other black men quickly caught the tune and joined in for the chorus even though they had never heard it before. The spontaneous composition showed how individual improvisation worked in concert with a vast reservoir of communal culture.

    Like McKim, Higginson speculated about the role the songs — and religion more generally — played in slave culture. One thing that had surprised him a great deal in the Sea Islands was that while slavery struck him as even more brutal than he had imagined, the enslaved people themselves did not seem to be brutalized by it. He wondered for a while about this paradox, and eventually solved it with the songs he had heard. “They were a stimulus to courage and a tie to heaven,” he believed. “By these they could sing themselves, as had their fathers before them, out of the contemplation of their own low estate, into the sublime scenery of the Apocalypse. There is no parallel instance of an oppressed race thus sustained by the religious sentiment alone.” Now that slavery was gone, he wrote, “history cannot afford to lose this portion of its record.”

    By chance, there had been a historian in the Sea Islands with Higginson, and he was also interested in the songs. His name was William Francis Allen, and he arrived in November 1863. A Massachusetts abolitionist, he was sent by the New England Freedmen’s Aid Society, tasked with teaching one hundred fifty freedpeople spread across three plantations on St. Helena Island. He had studied philology and classics at Harvard and ancient history at Göttingen; after the war, he ended up at the University of Wisconsin, where he became a mentor to the great American historian Frederick Jackson Turner, who always considered Allen the finest scholar he ever met. Like McKim, Allen was also an enthusiastic amateur musician who could sing by sight and play flute and piano. Walking through the woods one day in South Carolina, for example, he heard a bird singing a trill of notes that reminded him of Beethoven’s Sixth Symphony.

    Allen’s reactions to the slave south mirrored those of McKim and Higginson. He was struck above all by the humanity of the people he met. Abolitionist rhetoric often emphasized the degrading aspects of slavery, but Allen discovered that the freedpeople were “much, very much, less degraded than I expected.” For him, that threw the evil of slavery into even sharper relief. “They seem human beings, neither more nor less,” he wrote after a few days in the Sea Islands. “It seems trite and common-place to say so, but I must say that the wickedness of slavery never seemed so clear as when I saw these people (about 240 on this plantation), so entirely human as they appear, and considered how they have been treated, and how little reason there is that they should be selected from all mankind for this awful abuse.”

    Throughout his eight-month stay, Allen remained impressed by the lack of degradation (a word he used often) among the former slaves he encountered, even as he learned more about the abuse they had endured. He was staying on what had previously been John Fripp’s thousand-acre plantation, where he taught school in the big house. Among the formerly enslaved people still living there, Fripp had a reputation as a “kind master.” “Still,” Allen wrote, “the whipping-post stands within twenty feet of the school-room window (which used to be the drawing room) — a sprawling Asia-berry tree, leafless now and completely covered with long gray moss. It looks weird enough. Mrs. F. used to stand at the window to see the flogging.” Among other masters, even less “kind,” there were tales of “putting a hot poker in a girl’s mouth, and sprinkling red pepper in her eyes.”

    In addition to stories of brutal punishments, Allen recorded everything he saw and heard: the arrangement of the Quarters, the location of the slave cemetery, the provisions the slaves had received (a peck of corn a week; some meat and eight yards of cloth twice a year). Trained in philology, he also paid attention to the grammar and the vocabulary of the local dialect. “It is really worthwhile as a study in linguistics,” he wrote after a few weeks. He kept gathering examples during the rest of his stay, and at the end of the war he wrote an article “to put them on record” before they vanished in the changed conditions of freedom. 

    Above all, Allen recorded the songs. He arrived in the Sea Islands late enough that he was already primed to appreciate slave music. A few months before his trip, he had read an article called “Under the Palmetto,” by the Unitarian minister Henry George Spaulding, which included some song lyrics and melodies that Spaulding had encountered during his own trip to Port Royal, and he had started to sing some of the songs at home on his own. He first heard the singing for himself on the Sunday after his arrival, standing outside a packed Praise House in the Quarters. All he heard at first was a standard hymn, “Old Hundred,” but it was the style of singing that truly fascinated him, with the song “maintained throughout by one voice or another, but curiously varied at every note, so as to form an intricate intertwining of harmonious sounds.” He added, “no description I have read conveys any notion of it.” He started to copy down all the lyrics and tunes he could.

    Allen went back to Massachusetts in the summer of 1864, then returned to South Carolina at the end of the war to serve as assistant superintendent of schools in Charleston. That summer, he started to write dispatches on conditions in the South for a new magazine called The Nation, which Lucy McKim’s father, James, had just cofounded as a venue for antislavery writers to discuss the problems facing the country after the war. The McKims probably had a hand in ensuring that the magazine’s first literary editor would be Wendell Phillips Garrison, Lucy’s fiancé; once he had this steady job, he and Lucy were married on December 6, 1865, the same day the Thirteenth Amendment was ratified.

    Lucy had been living in the North since 1862, but she remained marked by her brief stay in the Sea Islands. Sometime in the first year of her marriage, she seems to have suggested to Wendell the idea of publishing a book-length collection of slave songs. Serious planning was under way by December 1866, and they began to reach out to contributors in early 1867. They knew of Allen from his articles for The Nation and quickly got him on board; Allen then brought on his younger cousin Charles Ware, who had collected more than four dozen songs in the Sea Islands during his time there.

    The core team for the collection was now in place: Lucy had come up with the idea, Ware was contributing many of the songs, and Allen took over much of the editing in the spring of 1867, when Lucy gave birth to her first child. Allen was teaching in New Jersey at the time, by chance at the same abolitionist academy that Lucy had attended a decade before, and he often traveled to New York to work on the book with Lucy and Wendell. He drafted the introduction in May, before he headed west to start work at the University of Wisconsin. The editors also got in touch with Higginson, who sent them his own collection of lyrics (which he was preparing for publication in the Atlantic) as well as a host of observations and insights. The book was ready by late summer. 

    II

    Published in November 1867, Slave Songs of the United States was the first book on slave music in America as well as the first book on African American culture and perhaps even the first book-length collection of American folksongs of any kind. Among its 136 songs were beloved tunes such as “Michael Row the Boat Ashore” and “The Good Old Way”:

    As I went down in de valley to pray,

    Studying about dat good old way,

    When you shall wear de starry crown,

    Good Lord, show me de way.

    In his meticulous introduction, Allen laid out four reasons (besides sheer beauty) why such songs were worth saving and studying. The first was that they revealed something about slave life. The content of the spirituals illustrated “the feelings, opinions and habits of the slaves,” Allen wrote, as well as their customs and their attitudes toward religion — though Allen himself did not probe very far in that direction. The songs could even provide evidence about the patterns of slave labor, he noted, since the tempo of the songs always depended on the activity they accompanied: what kind of work, what pace of work, and so forth.

    The songs also raised the question of what ingredients had gone into the making of slave culture in the South. This was Allen’s second reason for paying special attention to the tunes and the lyrics of freed slaves. The central question — which later recurred again and again in studies of slave culture — was how much consisted of so-called survivals from Africa, how much was adopted or adapted from white Americans, and how much was the spontaneous product of slave life itself. Culture is always the product of both inheritance and experience, but in this case the question was fraught with considerable ideological weight that often influenced people’s answers. Before the war, slavery’s defenders had argued that slavery operated as a means of education for a barbarous black race, gradually replacing African superstition with American civilization. In a more negative light, some antislavery authors saw the Middle Passage and the process of enslavement as splintering experiences that stripped Africans of their heritage, making them “a mass of broken fragments, thrown to and fro,” as one black writer put it before the war. 

    Now there was new evidence — and a new perspective. To James Miller McKim, for example, some songs and dances that he saw in the Sea Islands seemed like “a remnant of African worship.” Higginson suggested, based on his conversations with some old Sea Islanders, that the phrase “mighty Myo” in the song “My Army Cross Over” might be an African term meaning “the river of death.” “In the Cameroon dialect,” he noted, “‘Mawa’ signifies ‘to die.’” For his part, Allen — making use of his classical training — speculated soon after his arrival in the Sea Islands that one song might be “of African origin, with Christianity engrafted upon it just as it was upon the ancient Roman ritual.” He stuck to a similar interpretation in his introduction to Slave Songs a few years later, though with a somewhat heavier emphasis on white American influences. “The chief part of the negro music is civilized in its character,” he decided — “partly composed under the influence of association with the whites, partly actually imitated from their music.” Yet he added that there always remained “a distinct tinge of their native Africa” somewhere in the mix.

    There had clearly been some sort of cultural exchange between whites and blacks under the slave system. One interesting aspect of that exchange was that it had not operated evenly across the South. This was the third reason Allen gave for collecting slave songs: to see how and where slave culture shifted in form. Close attention to variations in different versions of the songs showed that slave culture actually consisted of several related regional cultures that shared broad characteristics but differed in the details — differences that the editors chose to highlight by grouping the songs they had collected by place of origin: southeastern seaboard, northern seaboard (North Carolina to Delaware), inland, and Gulf Coast. “The songs from Virginia are the most wild and strange,” Allen noted. He and the others even noticed variations from one plantation to the next, and specified the exact source whenever possible.

    Finally, the fourth reason to collect and to study the songs was that they were rapidly disappearing under the new conditions of freedom. Already in 1867, Allen said, it was becoming hard to find unaltered slave songs. Some black Americans regarded emancipation as a new start to their story, with slavery as a shameful prehistory that they did not wish to revisit, and some freedpeople, especially upwardly mobile ones, associated the songs so strongly with enslavement that they had no desire to sing them anymore. The music of slavery seemed not in keeping with “the sense of dignity that has come with freedom,” as Allen put it. 

    But much broader forces of cultural change, such as education, urbanization, and modernization, were also at work. Abolitionist teachers across the South were among the culprits in this cultural loss, since they sometimes considered the old songs barbarous and often introduced new songs in their classrooms as part of their effort to Christianize and civilize. “Even the ‘spirituals’ are going out of use on the plantations,” Allen lamented, “superseded by the new style of religious music” that conformed to white standards.

    This suggests one reason why Slave Songs of the United States ended up as a false start in the formal study of slave culture: at the time, the prevailing view of culture was closer to a ladder than to a buffet. Slave culture, no matter how interesting or affecting, was still seen as something primitive to be overcome on the path to civilization. From the perspective of the twenty-first century, Slave Songs was a landmark book, the only attempt of its kind to preserve the songs with as few modifications as possible. Yet Allen, Ware, and Lucy McKim Garrison’s careful work of collection and interpretation was not seen at the time as a contribution to American historical scholarship or the study of slavery — or to anything, really. Lucy and Wendell wrote an unsigned review in The Nation (one of the perks of being an editor) and arranged for several others, but more popular magazines such as Harper’s and the Atlantic ignored the book. After a reprint in 1871, it was promptly forgotten for more than fifty years. Even then it would take yet another fifty years for many academic scholars to begin to study the songs as Allen had studied them, as both a form of art and a part of American life. As it took shape across the twentieth century, the academic study of slave culture eventually broadened to examine an astonishingly wide range of practices and customs, but it is worth continuing to focus on the study of the songs, especially the spirituals, because they have usually been regarded as the emblematic case.

    Lucy, the driving force behind Slave Songs, soon saw her health decline. After suffering two strokes, she fell into a coma and died in May 1877, at the age of thirty-four, leaving behind her husband and three young children. A few years later Wendell Garrison sent a packet with some of the publication correspondence for Slave Songs to the Cornell University library, thinking that the letters might be of interest to some future scholar. They were deposited in the archive, in a collection about freedmen. 

    In the meantime, music programs at black colleges helped keep the slave songs alive, preserving them through practice and performance. Fisk University in Nashville led the way. Starting in 1871 at Henry Ward Beecher’s Brooklyn church, Fisk’s Jubilee Singers made a name for themselves and raised quite a bit of money for their school by performing polished renditions of old spirituals in concert halls and cathedrals across the United States and Europe. Over the next generation, students and professors at Fisk, Hampton, and Tuskegee started some efforts to collect, sing, and study African American folk music.

    But this activity lay outside the mainstream of American slavery scholarship, which took a different path as Reconstruction unfolded. At first veteran combatants continued to spar over slavery’s role in causing the war, with former Confederates such as Vice President Alexander Stephens writing tendentious Platonic dialogues saying that slavery was just a sideline in the South’s noble stand for constitutional liberty, while old abolitionists such as Henry Wilson, Grant’s vice president, spun out mammoth volumes showing the centrality of slavery to prewar American politics. Soon a new generation of “scientific” historians at a new breed of research universities attempted to demonstrate their objectivity by draining slavery of its political and moral baggage and tracing its evolution in the legal code. This paralleled developments in the Supreme Court, which narrowed what counted as slavery to a legal definition that did not include social discrimination or economic deprivation. 

    In this environment, slave culture was relegated for a while to the realm of Uncle Remus — a book that William Francis Allen took the time to review upon its publication in 1881, calling it “a contribution from a new and almost unworked field.” Over the next decade, as the egalitarian advances of Reconstruction stalled and were rolled back, the new regime of segregation found its justification, in part, in the nostalgic rehabilitation of the old plantation as a place peopled by supposedly happy slaves and their benevolent masters. Even an old abolitionist such as Thomas Wentworth Higginson was reduced to tears by Thomas Nelson Page’s sentimental plantation story “Marse Chan.”

    This spirit of nostalgia also informed new scholarship on slavery, particularly among progressive southerners who saw in the old plantation a model of both racial and industrial relations. In 1918, in American Negro Slavery, by far the most influential scholarly book on slavery written between Reconstruction and the civil rights movement, Ulrich Bonnell Phillips portrayed plantation life as essentially jolly. “On the whole,” he wrote, “the fiddle, the banjo and the bones were not seldom in requisition.” One of the few songs that he quoted, and one of the few examples of sadness that he cited, involved slaves weeping because their beloved master had died: “Down in de cawn fiel’ / Hear dat mo’nful soun’; / All de darkies am aweepin’, / Massa’s in de col’, col’ ground.” Phillips believed that the plantation had been a benevolent institution, similar to a school or a settlement house, serving to civilize a supposedly barbarous black race.

    Throughout this period, slave culture, and black culture more generally, was often regarded with condescension. The American exhibition at the World’s Fair of 1893 contained no contributions from black culture, and neither did the fair’s Concert of Folk Songs, put on by the International Folk-Lore Congress. “Excepting some selections representative of the music of our North American Indians,” declared one representative in his introductory address, “the utterances of the savage peoples were omitted, these being hardly developed to the point at which they might be called music.” Yet by the time Slave Songs of the United States was reprinted again in 1929, this was no longer the case, as the rise of a new generation of black scholars and artists collided with the new disciplines of cultural anthropology and folklore studies to make slave culture the subject of serious academic and artistic attention. 

    The first and deepest student of the slave songs in this new era was W. E. B. Du Bois, who arrived at Fisk University as a seventeen-year-old sophomore in September 1885, walking in the shadow of the grand Jubilee Hall built with money earned by the school’s famous singers. Du Bois went on to get a second bachelor’s degree at Harvard, did graduate work in historical economics and sociology in Germany, and — when the white administrators of the agency funding his education made the petty decision not to renew his grant — returned to Harvard to become, in 1895, the first black American to receive a doctorate in history. A few years later, he sharply criticized the way American scholars had studied slavery to that point:

    The slaves are generally treated as one inert changeless mass, and most studies of slavery apparently have no conception of a social evolution and development among them. The slave code of a state is given, the progress of anti-slavery sentiment, the economic results of the system and the general influence of man on master are studied, but of the slave himself, of his group life and social institutions, of remaining traces of his African tribal life, of his amusements, his conversion to Christianity, his acquiring of the English tongue—in fine of his whole reaction against his environment, of all this we hear little or nothing, and would apparently be expected to believe that the Negro arose from the dead in 1863.

    By that time, Du Bois had been hired by Atlanta University to continue and expand the school’s annual series of studies on Negro problems, which had started in 1896. With Du Bois in charge, the studies quickly moved to the leading edge of empirical sociological research in the United States, with annual conferences attracting activists and scholars including Jane Addams, Florence Kelley, and Harvard president Charles William Eliot. The topic changed each year, cycling through issues such as education, business, the church, and the family. 

    Du Bois believed it was impossible to understand the present problems and the future prospects of the Negro “without knowing his history in slavery,” so his Atlanta University Studies included some of the only analyses of slave society and culture that existed at the time. His descriptions of slave culture could sometimes take a distinctly negative tone, both because he believed in relatively strict Victorian notions of morality and because he saw black social and economic inequality in his own time as rooted in the lingering effects of slavery (then just a generation past). He even stretched his historical studies back to Africa so that he could give a full account of how customs and institutions changed across the wrenching transitions of the Middle Passage and emancipation. “There is a distinct nexus between Africa and America which, though broken and perverted, is nevertheless not to be neglected by the careful student,” he explained.

    In 1903, in the midst of this scholarly activity, Du Bois published an odd but moving collection of essays, simultaneously lyrical and empirical, called The Souls of Black Folk. At the head of all but one of the essays, he paired a few lines of European poetry with a few bars from a Negro spiritual, placing the two forms on an equal cultural level. The one exception was the final essay, where the European poetry was replaced by lines from the spiritual “I Know Moon-Rise,” the same song that Higginson had praised so highly some forty years earlier. (Du Bois seems to have known of Higginson’s article and Lucy’s arrangements, but not of Slave Songs of the United States.)

    Du Bois sketched an account of the songs’ historical evolution — one that doubled as an account of American culture itself. In the beginning, he wrote, was primitive African music, of the kind sung by his great-great-grandmother, seized by a Dutch trader some two hundred years before:

    Do ba-na co-ba ge-ne me, ge-ne me!

    Ben d’nu-li, nu-li, nu-li, nu-li, ben d’le.

    Du Bois no longer knew what it meant, but it had been passed down through his family all the same. The next stage of development was one he called “Afro-American,” with songs “of undoubted Negro origin” and “peculiarly characteristic of the slave.” Among these he listed “Nobody Knows the Trouble I’ve Seen” and “Swing Low, Sweet Chariot.” The third stage started to show the influence of white American culture. The result remained “distinctively Negro,” Du Bois noted, as in the songs “Bright Sparkles” and “I Hope My Mother Will Be There,” but “the elements are both Negro and Caucasian.” Finally, he identified a fourth step of cultural evolution in the rise of white American songs that were influenced by black melodies, as in “Swanee River” and “Old Black Joe.”

    “What are these songs, and what do they mean?” he asked. Against the popular plantation myth of the time, which deployed slave songs as evidence of happiness and contentment, he declared (as Higginson, Allen, Ware, and Lucy McKim Garrison had decades earlier) that they were in fact “sorrow songs.” “They are the music of an unhappy people, of the children of disappointment,” he wrote; “they tell of death and suffering and unvoiced longing toward a truer world.” At the same time, he saw in them a breath of hope, “a faith in the ultimate justice of things . . . that sometime, somewhere, men will judge men by their souls and not by their skins.” Yet with black political rights being rolled back and a regime of racial segregation rising all around him, he could not help but wonder, “Is such a hope justified? Do the Sorrow Songs ring true?”

    III

    In 1910, Du Bois resigned from Atlanta University. He was frustrated by his inability to attract funding for his work, especially after his public break with Booker T. Washington earlier in the decade. He moved to New York and took a job as director of publicity and research for the newly established NAACP. As he shifted from scholarship to activism, the study of slave culture proceeded along two separate paths in his wake, one anthropological, the other sociological. Both disciplines were in the process of emerging into their modern forms — anthropology from an older racialist ethnology, sociology from a haze of social Darwinist theorizing — and they took on the study of slave culture in part to prove their value in analyzing contemporary social problems.

    In anthropology, the central figure was Franz Boas, a Jew from Germany who had come to believe in the 1880s and 1890s that racial differences had environmental rather than genetic causes, and that the primary causes of racial inequality were prejudice and discrimination. He and Du Bois spoke the same language, and soon became allies. In 1906, Du Bois invited Boas, by that time on the faculty at Columbia, to come to the Atlanta University conference on “The Health and Physique of the Negro-American.” Boas discussed the cultural basis of racial behavior, and while there he also gave the school’s commencement address, in which he praised the genius of African cultures. “I was too astonished to speak,” Du Bois later recalled. He had never heard someone talk about Africa in that way.

    Boas saw the study of folklore as a way to demonstrate racial equality, since a sympathetic understanding of a people’s folklore could go a long way toward explaining their thought and behavior as a result of culture and society rather than genetics. He helped to found the American Folk-Lore Society and had a hand in directing its journal until the 1930s. From the start, the Journal of American Folk-Lore planned to devote fully one-quarter of its space to black folklore. The trouble was that most anthropologists and scholarly collectors lived in the North, while most black Americans — some ninety percent at the time — lived in the South. Boas and his colleague William Wells Newell had some success with collectors based at southern black colleges, but that sputtered after a few years, in part because Boas had many demands on his attention and in part because he was simultaneously pushing for the professionalization of the field, a cause that cut against the use of amateur collectors.

    Everything changed with World War I. The war accelerated the rise of a mass national culture, symbolized most prominently by radio and movies, which by its very nature threatened the survival of various ethnic and regional cultures — even as new technologies made it possible (though still difficult) to record the actual sounds and songs before they died out. Simultaneously, the war fostered a feeling of nationalism, including cultural nationalism, in whose warm glow America’s threatened ethnic and regional cultures were now seen as authentic and worthy of preservation. And the war undermined easy ideas about civilization and progress, opening even greater space for these non-dominant cultures to be seen in a positive light. (It is worth noting that none of this was unique to the United States, as, for example, the modernist primitivism of artists such as Gauguin and Picasso demonstrates.) 

    Perhaps most important for the study of black folklore in particular, the war increased the demand for industrial labor at the same time as it cut off the supply of immigrant workers, thus inaugurating the decades-long Great Migration of millions of black Americans from the South to northern and western cities. This had a huge number of ramifications for American society, one of which was the rise of the “race problem” as a national (rather than primarily southern) issue for the first time since Reconstruction. A range of new social scientific funding organizations suddenly became interested in pouring money into research about black life. At the same time, conveniently, the migration also solved the old problem of the distance between northern anthropologists and southern blacks. Now all Boas and his students had to do was walk down the hill from Morningside Heights to Harlem. With the help of a wealthy white anthropologist named Elsie Clews Parsons, Boas made a renewed push to collect and publish black folklore. Starting in 1917, fourteen single-topic “Negro Numbers” of the Journal of American Folk-Lore appeared over the next twenty years. 

    During these years, Boas’ ideas gained prominence among some academics as well as black intellectuals and activists, in part because Du Bois gave them space in The Crisis, the NAACP’s journal. The collection of black folklore was central to the Harlem Renaissance, serving as a symbol and source of cultural pride. In 1922, the black poet James Weldon Johnson edited an anthology of Negro poetry, which he hoped would raise the status of black Americans by demonstrating their artistry. He noted that the slave songs contained flashes of “real, primitive poetry” and could be counted among the American Negro’s handful of truly important artistic creations (along with the Uncle Remus stories, the cakewalk dance, and ragtime music). 

    Three years later Johnson edited a full volume of Negro spirituals, which he soon followed with a second volume. He celebrated efforts by black Americans to preserve and promote the spirituals instead of neglecting them as unwanted reminders of slavery. “This reawakening of the Negro to the value and beauty of the Spirituals was the beginning of an entirely new phase of race consciousness,” he wrote. It was akin to the rediscovery of the classics in late medieval European culture, which also led to a cultural and intellectual renaissance. In 1925, Alain Locke’s influential anthology The New Negro included an entire section on folklore, called “The Negro Digs Up His Past,” as well as a separate essay by Locke focused specifically on the spirituals. By the time Allen, Ware, and Lucy McKim Garrison’s Slave Songs collection was reprinted in 1929, studies of black folk music were tumbling out of American presses.

    Still, Boas’ ideas about race and culture remained marginal in American society until Nazism discredited “scientific” racism starting in the late 1930s. With genetic explanations of racial difference collapsing, what could explain the apparent inferiority of black Americans? The most influential answer came from sociology, where the study of slave culture had proceeded for several decades along a somewhat separate track — one that descended from Booker T. Washington instead of W. E. B. Du Bois. 

    In sociology, the central figure was Robert Park. A white man who grew up in Minnesota, Park took an extraordinarily circuitous route to becoming perhaps the most influential sociologist in America, a route that included studying philosophy with John Dewey as an undergraduate at the University of Michigan in the 1880s, a decade-long career as a newspaper reporter (which he regarded as his true sociological training), stints in graduate school at Harvard and in Germany (in philosophy and psychology), a few years as publicity agent for the Congo Reform Association, and finally a position doing similar work for Booker T. Washington at Tuskegee, where he started in 1905 (after Du Bois turned down the job). A firm believer in industrial education for black Americans, Washington sometimes seemed to join the chorus of plantation nostalgists in viewing slavery as a necessary precursor to his own work. In 1901, in his autobiography Up from Slavery, which became one of the central racial statements of the age, he observed that

    when we rid ourselves of prejudice, or racial feeling, and look facts in the face, we must acknowledge that, notwithstanding the cruelty and moral wrong of slavery, the ten million Negroes inhabiting this country, who themselves or whose ancestors went through the school of American slavery, are in a stronger and more hopeful condition, materially, intellectually, morally, and religiously, than is true of an equal number of black people in any other portion of the globe.

    Park absorbed much of Washington’s worldview as he worked for the next eight years as Washington’s ghostwriter, publicist, and bulldog, building up Tuskegee and tearing down its rivals, including Du Bois. 

    By 1913, Park was ready for a change, and he seized an offer from the University of Chicago’s sociology department, then one of the top two departments in the country, to start teaching a course on “The Negro in America.” Soon he published his first major article, “Racial Assimilation in Secondary Groups with Particular Reference to the Negro,” which grew out of his time at Tuskegee. “Slavery has been, historically, the usual method by which peoples have been incorporated into alien groups,” he wrote. “When a member of an alien race is adopted into the family as a servant, or as a slave, and particularly when that status is made hereditary, as it was in the case of the Negro after his importation to America, assimilation followed rapidly and as a matter of course.” Park went on to have an extraordinarily influential career at Chicago over the next two decades, making the school practically synonymous with sociology in America. Among other things, he trained a generation of important black sociologists such as Charles S. Johnson and E. Franklin Frazier, both of whom did vital work in the 1930s collecting the testimony of formerly enslaved black southerners (in Johnson’s case) and studying the structure and evolution of the black family in slavery and freedom (in Frazier’s).

    While some anthropologists, particularly Boas’ student Melville Herskovits, saw evidence of cultural diffusion in the African diaspora, Park did not. He subscribed to the more common view at the time, which was that the Middle Passage had acted as a kind of cultural holocaust. In 1919, in an essay called “The Conflict and Fusion of Cultures with Special Reference to the Negro,” Park declared that “there is every reason to believe, it seems to me, that the Negro, when he landed in the United States, left behind him almost everything but his dark complexion and his tropical temperament.” In this view, with freshly imported slaves as blank slates, slave culture could not be either an original production or an adaptation of African traditions to American conditions, but only an imperfect copy of white American culture as filtered through “the racial temperament of the Negro” and the strain of slavery. This was, indeed, the prevailing interpretation of slave songs at the time, as scholars looked at the lyrics, saw similarities to some white hymns, and assumed the line of influence must have flowed in only one direction.

    This view of slave culture may not have allowed much room for African influences or black cultural creativity, but with the collapse of scientific racism in the late 1930s and 1940s, it proved useful as a framework for combating segregation. Slavery had stripped black people of their African culture, the argument implicitly went, so now, with no cultural baggage to hold them back, the only things preventing their full assimilation into American society were prejudice and discrimination.

    For roughly a full generation, liberal scholars pushed hard on the “damage” thesis to demonstrate all the ways that slavery and segregation had left black Americans as mere husks of humanity. It was apparent, for example, in the Swedish sociologist Gunnar Myrdal’s mammoth study of American race relations, An American Dilemma, which appeared in 1944, and for which several of Park’s former students, including Franklin Frazier, served as researchers. “In practically all its divergences,” Myrdal famously wrote, “American Negro culture is not something independent of general American culture. It is a distorted development, or a pathological condition, of the general American culture.” Another member of Myrdal’s team, the psychologist Kenneth Clark, went on to do doll studies showing that segregation caused black children to associate white dolls with being pretty or nice and black dolls with being bad or ugly. 

    This work was successful. In 1954, Chief Justice Earl Warren cited Clark, Frazier, and Myrdal to help justify ending school segregation in Brown v. Board of Education. Over the next few years, the “damage” thesis made its way into two important new books on slavery as historians joined the crusade against segregation. Kenneth Stampp’s The Peculiar Institution came first, in 1956. Determined to use slave testimony to overturn Ulrich Phillips’ plantation idyll, Stampp ended up portraying slave life in a pitiful light. His chapter on slave culture, tellingly titled “Between Two Cultures,” emphasized the slaves’ lack of “cultural autonomy.” Not only had slaves “lost the bulk of their African heritage,” Stampp argued, but they “were prevented from sharing in much of the best of southern white culture,” resulting in a “culturally rootless people.” “In slavery,” he went on, “the Negro existed in a kind of cultural void. He lived in a twilight zone between two ways of life and was unable to obtain from either many of the attributes which distinguish man from beast.” Stampp allowed that slave songs and folklore were important exceptions, but the overall picture was almost irredeemably bleak.

    A few years later, Stanley Elkins took out the “almost.” Using insights about the supposedly “closed” nature of North American slavery along with psychological studies of “infantilization” among Nazi concentration camp inmates, he set out to show that “Sambo,” the lazy, childish slave of southern lore, was not a stereotype but rather a fact — the natural product of the American slave system. Elkins thought this was the only explanation for how the “native resourcefulness and vitality” of West African culture, as described by anthropologists such as Herskovits, somehow degenerated “to such a point of utter stultification in America.” Slave songs and folklore, which may have complicated that assessment, did not enter into his analysis. The fantasy lives of slaves, he noted in passing, were “limited to catfish and watermelons.” 

    Arguments about the damage inflicted on slaves and its lingering effects on black Americans crested in 1965, at the same moment that the civil rights movement achieved its major legislative goals. That spring, Daniel Patrick Moynihan, then the assistant secretary of labor, drew on Elkins’ work as he prepared a report for Lyndon Johnson’s administration about the problems facing black families and the need for a wide range of federal programs to support them. Moynihan also drafted a speech that Johnson delivered in June at Howard University, which was intended to outline the rationale for the policies that Moynihan’s report proposed. “Much of the Negro community is buried under a blanket of history and circumstance,” Johnson declared, urging federal action to undo the damage. But this did not work out as planned. Moynihan’s report was unfairly pilloried for its relentlessly negative view of black society and culture (the bleak tone was probably a tactic to persuade politicians; Moynihan was motivated by a deep sympathy for his subjects). Instead of paving the way to new programs, the report ended up serving as a convenient marker for the death of a view of slave culture defined by “damage” and the rise of new studies interested in exploring the varieties of black resistance and cultural vitality under slavery.

    IV

    By that time, two or three decades’ worth of scholarship had emphasized the damage wrought by slavery and segregation, portraying black culture as little more than an incomplete or pathological version of white American culture. A generation of black activists and intellectuals — Ralph Ellison was usually quoted as the emblematic example — was getting tired of this paternalistic attitude, of seeing mostly white scholars and politicians run their culture down and tell them what to do. Some of them began to assert (partly out of fear, partly out of faith) that there was, in fact, a distinctive black culture worth preserving and even celebrating — that, as the new civil rights legislation broke down the formal barriers of segregation that had kept black Americans apart, they could not and would not simply be assimilated as undifferentiated Americans. The experience of the civil rights movement itself bore this out, buoyed as it was by black leaders, black churches, and black songs, including spirituals.

    Meanwhile, thanks to the GI Bill, the vast expansion of American higher education after World War II, and the simultaneous decline in racial and ethnic (and gender) barriers to access and employment, there were suddenly whole new groups of people writing and teaching and studying history, and they brought with them their own experiences, questions, and concerns. Not only black scholars but also the scholarly sons and daughters of Jewish and Italian immigrants might have some interest in, and insight into, processes of oppression and acculturation. And, finally, there arose at precisely that moment a “new sensibility” (as Susan Sontag put it) among educated Americans which collapsed old cultural distinctions between high and low. Pop cultural productions such as rock music and folk music and movies, previously disdained or dismissed, became worthy of serious analysis and explication. It became possible to study pop, folk, and other non-mandarin cultural expressions in a way that was critical and historical, not just anthropological or sociological.

    The result of all this was an explosion of slave culture studies throughout the 1970s. The black scholar Sterling Stuckey forecast the shift in 1968 in an essay called “Through the Prism of Folklore”: “No study of the institutional aspects of American slavery can be complete, nor can the larger dimensions of slave personality and style be adequately explored, as long as historians continue to avoid that realm in which, as DuBois has said, ‘the soul of the black slave spoke to man.’” Over the next decade, this basic idea guided several different scholars who pursued it mostly independently and who sometimes felt as if they were discovering the source material for the first time: John Blassingame’s The Slave Community (1972), George Rawick’s From Sundown to Sunup (1972), Eugene Genovese’s Roll, Jordan, Roll (1974), Herbert Gutman’s The Black Family in Slavery and Freedom (1976), Lawrence Levine’s Black Culture and Black Consciousness (1977). Though they varied quite a bit (especially Genovese), they were all intended to demonstrate the strength of the community and culture that enslaved people forged largely on their own. (At the same time, Dena Epstein’s Sinful Tunes and Spirituals became the first book to look closely at Allen, Ware, and Lucy McKim Garrison’s collaboration on Slave Songs of the United States; among other sources, Epstein consulted the correspondence that Wendell Garrison deposited at Cornell after Lucy’s death.)

    To understand this wave of slave culture studies, Levine’s case is instructive. Born to Orthodox Jewish immigrants in Washington Heights, he attended City College of New York in the 1950s. He later said his familiarity with the cultural inheritance of the shtetl and the ghetto helped him more easily grasp the nature of black folk beliefs, but he also thought that anyone who did the necessary work to enter deeply into that cultural world could have seen the same thing. After getting his PhD at Columbia and taking a job at Berkeley in 1962, he joined the local branch of CORE. He worked on fair housing and picketed stores that did not hire black employees. In the spring of 1965, he and Kenneth Stampp served as the Berkeley history department’s representatives to the Selma-to-Montgomery march; they happened to spend the night in Tuskegee, of all places, on the way from Atlanta to Selma. Back in Berkeley, Levine had a research sabbatical the next academic year and used it to start a project on Black Protest in Twentieth-Century America — essentially a long history of the civil rights movement.

    At first Levine was reading only about the leaders, not the masses of people who made protests and movements possible. Like many other historians working on similar topics at the time, he looked at the previous generation of scholarship, saw that black people had “been rendered historically inarticulate by scholars,” as he put it, and set out to fix that problem. Dipping into anthropology, he realized that folklore and folksongs might provide a point of entry. Soon they took over his whole project as he realized that black folk culture (like all culture) was essentially a form of philosophy. “In their varied forms of oral expression,” he wrote, “they thought about who they were, what their situation was, how it could be changed, and how best to teach their children how to survive without succumbing to the forces that had them in their grasp.” The way he interpreted black culture and slave culture would undermine the views of Park, Myrdal, and Elkins — and even of Kenneth Stampp, who “was not amused by it,” Levine recalled, when his young colleague showed him a draft.

    Levine recognized that culture was not a fixed thing but a process, one in which it was possible to trace African folkways interacting with the Euro-American world to forge a new African American perspective. This was most clearly apparent in the songs, which Levine worked on first, presenting a paper about them at the American Historical Association conference in 1969 (he was given a whole session to himself) and publishing it in an edited collection two years later. “It is to the spirituals that historians must look to comprehend the antebellum slaves’ world view,” he wrote, “for it was in the spirituals that slaves found a medium which resembled in many crucial ways the cosmology they had brought with them from Africa and afforded them the possibility of both adapting to and transcending their situation.” 

    Yet he threw the whole origins question overboard as ultimately irrelevant. Where the cultural forms came from was less important, he said, than what people did with them. He carefully worked out how songs were created and re-created through improvisational group sessions in which pre-existing lines were mixed together with new tunes and lyrics, and how that process balanced “individual release” with “communal solidarity,” and how slaves tweaked the Christian doctrines they had learned from their white masters and preachers to fit their own needs. Like Du Bois and Allen, Ware, and Lucy McKim Garrison before him, he believed the songs expressed deep sorrow but were relieved by an abiding hope. He concluded:

    Slave music, slave religion, slave folk beliefs — the entire sacred world of the black slaves — created the necessary space between the slaves and their owners and were the means of preventing legal slavery from becoming spiritual slavery. In addition to the world of the masters which slaves inhabited and accommodated to, as they had to, they created and maintained a world apart which they shared with each other and which remained their own domain, free of control of those who ruled the earth.

    In many ways, the culture studies of the 1970s laid the foundation for the next few decades of research about slavery, at least in the sense that slave culture became indispensable to any attempt to analyze the institution. Instead of establishing only that slave culture existed, however, the main questions shifted to how that culture changed across time and space as well as how it developed in a complicated dance with the surrounding culture of slaveowners and other whites, with mutual influences in both directions. The essentialist assumptions that lurked in the background of some of the major studies of the 1970s, which occasionally implied that there was a distinctively “black” family or culture and treated origins in slavery as the measure of authenticity, faded in those years of relative optimism about race relations. More recently, as that optimism has waned in the face of ongoing discrimination and injustice, essentialism has seen a resurgence among authors who posit unique racial inheritances and forms of knowledge, eternally separate but equal. Meanwhile, slave culture, at least in the form of songs, has receded a bit into the background as scholars, black and white, concern themselves with issues that are either overwhelmingly macro in scale (imperialism, capitalism) or extremely micro (reconstructing the experiences of a single enslaved person from traces in the archive). 

    One major exception is a big recent book by David Hackett Fischer called African Founders: How Enslaved People Expanded American Ideals. Calling his study an “inquiry into what happened when Africans and Europeans came to North America, and the growth of race slavery collided with expansive ideas of freedom and liberty and rule of law,” Fischer goes region by region to trace how specific clusters of Africans (and their cultures) interacted with specific clusters of Europeans (and their cultures) to make something new. In New England, for example, Akan-speaking groups brought an ethical philosophy of doing good and doing well, which dovetailed with New England Puritanism to create what Fischer calls a “new syncretist ethic” focused on expanding rights through petitions for freedom, schooling, and the right to vote; in the Chesapeake, people such as Frederick Douglass and Harriet Tubman built on a regional tradition of leadership exemplified by George Washington and Thomas Jefferson to push for freedom for all.

    Some of the conclusions can feel a bit pat, but on the whole Fischer’s book is effective, thanks to the wealth of detail that he includes along the way to show exactly how specific peoples and cultures mixed. At the very end he borrows from Du Bois, who, more than a century earlier, concluded his own essay on “The Sorrow Songs” with a list of “three gifts” that Africans brought to America “and mingled them with yours”: gifts of story and song, of sweat and brawn, and of the Spirit. Fischer adapts and expands the list for his purposes, but it includes a gift of language, a gift of music, and a gift of spirit and soul — gifts that culminate, for him, in the spirituals. He quotes several of them, including “Nobody knows the trouble I’ve had / Nobody knows but Jesus” and “I want to climb up Jacob’s ladder, / But I can’t climb it till I make my peace with the Lord.” He also cites William Francis Allen, Charles Ware, and Lucy McKim Garrison’s Slave Songs of the United States. “In a large literature,” he writes, their book, now more than one hundred fifty years old, remains “the most important work” on the subject.

    What lessons does the long history of the study of slave culture hold for us today, especially for those who are not professional historians of slavery? 

    First, the history of the study of slavery in America is in many ways the intellectual history of America. It is not only recently that we have reckoned with slavery. Not at all. We have been reckoning with it from the beginning. Slavery first became a major problem in American life around the Revolution, and Americans have continued to wrestle with the nature of slavery and freedom ever since — not to mention all the other issues connected with slavery, such as race and labor, agriculture and industry, culture and conquest, opportunity and inequality. Not only has thought about slavery reflected the changing contours of American intellectual life, but in many cases it has shaped those contours, as founding figures in various scholarly disciplines — Boas in anthropology and Park in sociology, but also Herbert Baxter Adams in history, Francis Lieber in political science, and plenty of others — saw slavery and its legacies as central problems that their field would have to puzzle out in order to prove its worth.

    The story of slave studies suggests also that older scholarship is valuable. Despite their avowed interest in the past, historians tend to dwell on recent work when it comes to our own field. This may be because we believe in history as a progressive science, or because we want to be a part of the conversation, or simply because we (like most humans) are attracted to things that are shiny and new. We often think of older scholarship as incomplete or incompetent or biased — all of which is true, to be sure, but no more true of scholars in 1822 or 1922 than in 2022. It is precisely because those older scholars had a different view of the world and of history that they can help cure us of our habit of following the herd in writing about all the faddish varieties of historical capitalism or whatever other topic is now academic catechism. Moreover, although historians often think of near-contemporary accounts as blind to their full situation, those accounts can be penetrating and revealing because their sense of the stakes, and of the constellation of relevant questions, is second nature, in the same way that near-contemporary translations often capture something about a book that later translators miss. 

    We can also intuit from the study of slave culture something about the way culture works. All culture — slave culture, certainly, but also abolitionist culture, academic culture, American culture — is syncretic, variegated, improvisational, combining influences in ways both obvious and subtle as it wends its way through history. As many early writers recognized, slave songs were a complex combination of African inheritance and American experience, and slaves shaped their owners just as surely as owners shaped their slaves. Whether as an act of description or prescription, it is always false to say that any culture is closed or isolated or pure. (And it can lead to hideous politics.)

    The most stirring lesson of slave studies, certainly, is that no group or individual lacks some kind of cultural autonomy. Victims are always more than their victimization; people are defined more by what they do than by what is done to them. Inner resistance is possible against even the most obscene oppression. Our contemporary obsession with trauma sometimes threatens to turn us into a catalogue of everything that has been done to us and to our ancestors, just as abolitionists sometimes made slaves into nothing more than a catalogue of the brutalities that they had suffered. But the skill for coping knows no racial or historical boundaries. The true subject of slave studies is human resilience. Slave songs were one of the ways of coping, as Higginson and his abolitionist friends learned in the Sea Islands, but they were also something more. They were evidence of undefeated spirits. That they continue to resonate with us today is a testament to the power of the people who first sang them in slavery — and, perhaps, to the universal need we all feel, at one time or another, for the good Lord to show us the way. 

    The Holocaustum of Edith Stein

    Edith Stein, a soulful modern thinker, was murdered in Auschwitz in August 1942. Born to a Jewish family in 1891, she was baptized into the Catholic faith on New Year’s Day 1922. In October 1933, she began the process of becoming a Carmelite nun, taking the name Teresa Benedicta of the Cross. Pope John Paul II beatified Sister Teresa Benedicta of the Cross on May 1, 1987, and eleven years later, on October 11, 1998, he canonized her. One year later he declared Saint Teresa Benedicta a co-patroness of Europe.

    Despite her conversion, or perhaps she would have said by way of her conversion, Stein continued to see herself as a Jew. On Easter in 1933, with a deep foreboding of the tragedies to come, Stein, as she described it, “spoke with the Savior to tell him that I realized it was his Cross that was now being laid upon the Jewish people, that the few who understood this had the responsibility of carrying it in the name of all, and that I myself was willing to do this, if he would only show me how. I left the service with the inner conviction that I had been heard, but uncertain as ever as to what ‘carrying the Cross’ would mean for me.” In his homily on the occasion of her canonization, John Paul II described Stein and her beatification as opening “a new encounter with her God of Abraham, Isaac and Jacob, the Father of our Lord Jesus Christ.” He further prayed that “her witness [would] constantly strengthen the bridge of mutual understanding between Jews and Christians.” 

    In the thirty-five years since he uttered these words, John Paul’s vision has not come to pass. After the Nazi genocide, in the wake of catastrophic Jewish suffering, the Church’s canonization of an apostate Jew who was murdered in Auschwitz struck many Jews as pouring salt into the wounds of historical Jewish persecution. Jewish feelings had already been inflamed in 1984, when Carmelite nuns, with the support of the Vatican, established a convent at Auschwitz in Stein’s honor. The Polish Church described the convent as “the sacred sign of love, peace, and reconciliation which will testify to the victorious power of Jesus.” International outrage and a ferocious controversy ensued; the Vatican agreed five years later to move the convent, though the relocation did not happen until 1993, after further protests.

    For many Catholics, Stein remains not just an exemplar of Christian life and death, but also a symbol of the hybridity of human identity. When declaring Stein a co-patroness of Europe, John Paul II expanded his earlier characterization of her as a “bridge between her Jewish roots and her commitment to Christ” to “a banner of respect, tolerance and acceptance which invites all men and women to understand and appreciate each other, transcending their ethnic, cultural and religious differences in order to form a truly fraternal society,” which was symbolized by the European Union. In more academic contexts, Stein is often depicted as epitomizing the fluidity of identity as well as the human capacity for self-creation. In both religious and scholarly circles, many have pointed to Stein’s philosophical work on the problem of empathy, completed under the tutelage of Edmund Husserl, the founder of phenomenology, as an example of life imitating art. Stein’s ostensible hybridity is taken as evidence of her arguments about empathy, and vice versa.

    But the thought, the life, and the afterlife of this extraordinary woman are more complex than these triumphalist stories might suggest. She was without question a decent, honorable, brilliant, and intellectually honest person who, by all accounts, exemplified kindness and care for others. Stein was indeed ready and willing to be sacrificed along with her people. But precisely in the context of anticipating their annihilation, Stein also criticized Jews for over-valuing life in general, and their own lives in particular. She saw herself as a prophet, writing in 1930 of “the urgency of my own Holocaustum,” and she believed that the fate of the Jewish people at the hands of the Nazis was, as she put it, “atonement for their disbelief.” Stein exhibited a fascinating combination of supreme confidence, some might say arrogance, and humility. In a meditation that she wrote in the last year of her life, she cast herself as a new Queen Esther, who had returned to history to save her people for a second time. But unlike the Esther of the Bible, who saved Jewish lives, Stein returned to the world to usher Jews to “the final battle” that would at last bring them to Christ and thereby bring Christ’s return. The new Esther and the old Esther had different notions of Jewish salvation.

    So too, ironically, if not tragically, the divergent Catholic and Jewish interpretations of Stein’s life and death point not to an empathetic loosening of rigid identities, but to a failure of empathy on all sides, and perhaps even to the reality of persistently fixed identities. Far from bridging Jewish and Christian identities in the service of Jewish-Christian dialogue, Edith Stein’s real contribution to Jewish-Christian dialogue is the challenge for each side to recognize the deep, perhaps insurmountable, chasm that lies between Judaism and Christianity, a chasm of which Stein seemed at times unaware while at other times all too aware. 

    These larger questions of the nature of personal identity and the specificity of religious identity make thinking about Stein’s life and philosophy more urgent in our time. It is simply not possible to rest with a singular judgment of Stein. She reminds us that moral exemplarity does not exist without its own ambiguities, and that one can only claim otherwise by discounting the fragility of being human.

    It is important to begin by acknowledging Stein’s courageous solidarity with the plight of Jews in the darkest hour of their suffering. This is especially true when recalling others of Jewish descent who at this cataclysmic historical juncture not only denied their Jewishness but took the Nazi genocide of European Jewry as an opportunity to denounce Judaism and a persecuted people. Here the contrast between Edith Stein and Simone Weil could not be starker. Weil, also a brilliant thinker, lost her teaching post in 1940 because of a statute issued by the Vichy government denying rights to those of Jewish descent. Her response was to write a groveling letter to the minister of education stating that “mine is the Christian, French, Greek tradition. The Hebraic tradition is alien to me, and no Statute can make it otherwise.” Although she is often described by her admirers as a “saint” because of her self-mortifying concern for the suffering of others, Weil did not hesitate at this time to declare stingingly that “Israel” is “repulsive” and the “Great Beast of religion,” adding that “[a] pharisee is someone who is virtuous out of obedience to the Great Beast.” Seven years earlier, Stein had also lost her teaching job after the Nazis seized power in Germany in 1933. Her response was to write an autobiography, Life in a Jewish Family, whose purpose was to counter the Nazis’ “horrendous caricature” of the Jews and to depict for the German nation the “goodness of heart, understanding, [and] warm empathy” of Jewish individuals and families that she had known growing up in a Jewish family. In April 1933, she wrote a letter to Pope Pius XI begging for a response to the Nazis from the Catholic Church.
As she pointed out, “Everything that happened and continues to happen on a daily basis originates with a government that calls itself ‘Christian.’… Isn’t the effort to destroy Jewish blood an abuse of the holiest humanity of our Savior, of the most blessed Virgin and the apostles?” Stein noted in her diary in 1938 that she knew her letter had been received by the pope, but she never received a reply.

    It is precisely because of Stein’s human decency that Jewish outrage at Stein’s canonization has felt so painful for all involved. In its narrowest form, Jewish and Catholic discord over Stein’s legacy hinges on whether it was appropriate or even honest for the Church to declare Stein a Christian martyr. Stein was initially considered for beatification as a “confessor,” that is, on the basis of how she lived an exemplary Christian life and not on the grounds of how she died. Church doctrine requires verification of two miracles attributed to a “confessor”; but no miracles were in evidence. For this reason, the case for Stein’s beatification changed course by proclaiming Stein not a Christian wonder-worker but a Christian martyr, which clearly implies that she died in Auschwitz because of her Christian faith. As an historical matter, this is plainly false. Following a Nazi order for the deportation of all of Holland’s Jews, Stein and her sister Rosa, who had been inspired by Edith to convert to Catholicism, were taken by the SS on August 2, 1942, from the Dutch Carmelite Monastery in Echt. By the following week, according to an official document provided by the Red Cross of the Netherlands, Edith and Rosa had been murdered in Auschwitz “for reasons of race, and specifically because of Jewish descent.”

    The motivations for the Vatican’s alacrity in beatifying Stein as a Christian martyr — instead of waiting for confirmation of miracles associated with her, as often is the practice in cases for beatification — remain a topic of heated speculation. Other questions have been raised about the miracle that ultimately qualified Stein for canonization after beatification. In 1987, a two-and-a-half-year-old American child, Benedicta McCarthy, who had been named for Stein, swallowed what amounted to sixteen lethal doses of Tylenol and went into a coma. Her parents prayed to Stein for little Benedicta’s recovery. When she did recover, her doctor, Ronald Kleinman of Massachusetts General Hospital, while noting that children did often recover from such overdoses, commented: “I’m saying it was miraculous. I’m Jewish. I don’t believe per se in miracles, but I can say I didn’t expect her to recover.” Referring to the fact that Kleinman was Jewish, Reverend Kieran Kavanaugh, who was involved in investigating this miracle, remarked that “Dr. Kleinman’s willingness to give testimony and witness, to me that was a miracle itself.”

    Even if one believes — correctly, I think — that in beatifying and then canonizing Stein the Vatican was not seeking directly to promote Jewish conversion, it is difficult not to conclude that the rush to beatify her was, at least in part, an attempt to burnish the image of a Catholicism terribly tarnished by the Holocaust and by the Church’s neutrality, if not complicity, in the Nazi genocide. In 1990, for his book Making Saints: How the Catholic Church Determines Who Becomes a Saint, Who Doesn’t, and Why, Kenneth Woodward interviewed Father Ambrose Eszer, the German Dominican who pushed for Stein’s beatification as a martyr for her faith. Eszer insisted that “Today, many Jewish writers don’t admit that the Catholics did anything for the Jews. But I know that in the case of Edith Stein she was killed because the Catholic Church did something to the Jews.” A decade later, in Papal Sin, Garry Wills elaborated that Eszer was referring to what came to be the Vatican’s argument that Stein was a Christian martyr and not a Jewish one. When, in 1942, Catholic bishops joined Protestant ministers in opposing the deportation of Dutch Jews, they emphasized that they were especially concerned about the deportation of baptized Jews. The Nazis agreed not to include baptized Jews so long as the bishops and ministers would not protest the deportation of all others of Jewish descent. While most appeared ready to accept this deal, the archbishop of Utrecht urged his parishes to continue to criticize Nazi deportations. Apparently this led the Nazis to revoke the special dispensation for baptized Jews, and it was in this context that Stein and her sister were sent to Auschwitz. Eszer, along with the Vatican, believed that this provided causal evidence that Stein was martyred for her Christian faith. In light of historical reality, this reasoning is a stretch.

    While Eszer’s, and ultimately the Vatican’s, characterization of Stein’s Christian faith as the historical cause of her murder may seem dubious at best, the truth is that its tone-deafness is a common feature of most Catholic — and Jewish — responses to Stein’s beatification and canonization. Consider this statement by the Jewish historian Arthur Hertzberg about Stein’s death: “As she inhaled the Zyklon B in the gas chamber, did Edith Stein really think she was dying as a sacrifice for the Church?” Hertzberg’s remark suggests that he did not grasp the genuineness of Edith Stein’s theological conviction that she could die as a Jew as a sacrifice for the Church. For Stein, this was no contradiction, as she insisted to her Jesuit confessor Father Hirschmann shortly before her death: “You don’t know what it means to me to be a daughter of the chosen people — to belong to Christ, not only spiritually but according to the flesh.” Indeed, in 1939, three years before she died, Stein included the following in a handwritten will: “I ask the Lord to accept my life and death…for the expiation of the unbelief of the Jewish people and so that the Lord may be welcomed by his own people and his kingdom come in majesty.” 

    Although she made these statements before she, or anyone else, knew of the depths of the genocidal horrors to come, Stein was making what is for many people, and especially Jews, a deeply disturbing and very old Christian claim that Jewish unbelief in Christ is responsible for Jewish suffering. Stein was a supersessionist, meaning that she believed, as most Christians historically have believed, that the Church replaced, or superseded, the synagogue, just as the New Testament replaced the Old. Yet Hertzberg’s anger is stirred not by the intricacies of Jewish and Christian theological disputation, but rather by an all-too-typical Jewish obtuseness to the very possibility of Christian faith. The willful obliviousness of most Jewish responses to Stein’s canonization stems not from the psychological pain that some may feel about the Church glorifying an apostate Jew who was murdered in Auschwitz, which is certainly understandable, but from the refusal to conceive of the possibility that Stein could have genuinely believed herself to be a Jew who would die for Christ. Jews, moreover, allow for the possibility of conversion to Judaism, and they regard the transformation as so complete that one is legally forbidden to mention a convert’s prior life to her. The integrity of Edith Stein’s conversion must be respected. She was no longer “ours.” 

    Such obtuseness is matched only by the Church’s tone-deafness to Jews and Judaism, which stems not from what is surely an internal Catholic decision about whether Stein should be counted as a Catholic saint, but rather from the assumption that canonizing Stein would be good news to Jews and improve relations between Jews and Catholics. Debates about whether Stein died as a Christian martyr or a Jewish martyr, or merely as an apostate Jew who had converted to Christianity, are symptoms of a larger problem, which concerns the possibility of understanding or even experiencing perspectives other than our own. This was precisely the philosophical question — the question of empathy — that consumed Edith Stein in her academic work prior to her conversion to Catholicism in 1922. 

    Born to an observant Jewish family on Yom Kippur in 1891 in Breslau, Stein was the youngest of seven children. When Edith was two years old, her father died suddenly, leaving her mother to run the family lumber company, which with her at the helm became a thriving business. By all accounts (including her own), Stein was an extremely bright and hard-working student who, along with her sister Erna, became one of the first women to be admitted to the university in Breslau. In her second year studying psychology, Stein encountered Edmund Husserl’s Logical Investigations, which had appeared in 1900 and 1901 and quickly became one of the most important works in modern philosophy. Stein chose to spend the summer reading Husserl and, in 1913, set out for Göttingen to study with him. With the outbreak of the First World War, she returned to Breslau and joined the Red Cross, serving in 1915 as a nurse in an Austrian hospital for infectious diseases. In the meantime, Husserl, whose son was killed in Flanders during the war, had received a professorship in Freiburg. After finishing her work for the Red Cross, Stein joined him there, completed her dissertation on The Problem of Empathy in its Historical Development and in Phenomenological Consideration (translated into English as On the Problem of Empathy), and received her doctorate summa cum laude in 1916. Stein stayed on as Husserl’s private assistant. 

    What can we know with certainty? Is knowledge outside our own experience possible? Whether we can know or experience perspectives other than our own is an old philosophical problem, often associated with skepticism about the existence of other minds or even about the existence of a world external to ourselves. With his famous “I think therefore I am,” Descartes gave modern expression to this problem when, in search of certain knowledge, he systematically doubted his senses, his ability to differentiate between reality and dreams, and even the external world, the imagining of which, he argued, could be the product of some evil genius. But what Descartes could not doubt was his own doubting — that is, his thinking. Descartes also believed that he found a way to reason himself away from skepticism by arguing that infinity, a mathematical concept, is one that neither he nor anyone else could have come up with had God not implanted this idea in his mind. Infinity leads the way to an argument for the existence of God, which leads to an affirmation of both the external world and the possibility of certain knowledge. 

    While Descartes is thought of as a rationalist because he grounds knowledge in concepts of the mind, the opposing early modern school of philosophy known as empiricism, which insisted that the mind, in Locke’s words, is a “blank slate” and that knowledge comes from our senses, also grappled with the difficulty of knowing anything or anyone outside of our own minds. The empiricist critique culminated in Hume’s sharp distinction between the information which derives from our senses and that which derives from human imagination, which imposes order and continuity not only onto the external world but also onto a stable self. Hume came to doubt even Descartes’ thinking “I.” The self, Hume skeptically concluded, is not a stable entity, but only “a succession of parts, connected together by resemblance, contiguity, or causation.” 

    Neither the rationalist nor the empiricist approach seemed able to refute the other, which led Kant to worry deeply about the very possibility of knowledge at all. Kant sought to defeat this skepticism by arguing, in his famous statement, that “thoughts without contents are empty; intuitions without concepts are blind,” meaning that the empiricists and rationalists are both right: we need sense experience (“contents”) and we need mental concepts, such as causality, because human experience and human knowledge simply are not conceivable without both. This brings us back to Descartes. Kant argued that he had solved Descartes’ quest for certainty about the external world (for which Descartes had required God) by showing that any conception of an “I” requires time. It is because I can track my experiences of myself over time that I am able to consider myself an “I” in the first place. Time also confirms the reality of objects external to the mind, which are necessarily experienced and conceived spatially. Time and space, then, are what Kant calls the pure forms of sensible intuition, and human experience and cognition are simply impossible without them.

    Kant derives the pure forms of sensible intuition on the basis of what he termed a transcendental argument. This is a kind of logical argument about what is required for human experience to be possible in the first place. The first question is not “What do we know?” but, more fundamentally, “What is it possible for us to know?” This Kantian innovation is important for understanding Edith Stein’s work on empathy and beyond, for three reasons. First, there is the price of Kant’s “transcendental idealism,” which allows us to rest easy knowing that there is an external world and that knowledge of the external world is possible, but comes at the cost of knowing what Kant calls a “thing in itself.” While Kant thought he had proven that there are objects in the world outside of our minds, he also concluded that we can never know these objects as they intrinsically are, because our knowledge of them is always filtered through the mind’s sensible forms and categories. All post-Kantian philosophy would grapple with this problem in one way or another. Second, even if he succeeded in establishing objects outside of the individual’s mind, Kant had little interest in the problem of other minds, what is often referred to now as “intersubjectivity.” The problem of intersubjectivity would also become a cornerstone of post-Kantian philosophy. And third, Husserl’s development of his phenomenological method, which is what attracted Stein to him in the first place, offered a framework for responding to exactly those two problems. 

    Husserl was born in 1859 to an assimilated Jewish family in Moravia. He converted to Lutheranism in 1886. Jewish conversions to Christianity, and especially to Protestantism as opposed to Catholicism, the dominant religion of the Austrian empire, were rapidly increasing at this time, with some scholars estimating that between 1868 and 1900 Jewish conversion to Christianity increased by as much as eighty percent. Many converted because baptism remained a requirement for most professions, businesses, and governmental employment. But Husserl linked his conversion to Protestantism to a conversion to philosophy: in a letter of 1917, he described the influences “which drove me from mathematics to philosophy, as my vocation may lie in overpowering religious experiences and complete transformations. Indeed, the powerful effect of the New Testament on a twenty-year-old gave rise to an impetus to discover the way to God and to a true life through a rigorous philosophical inquiry.” 

    Not that Husserl’s philosophy was religious; not at all. Husserl came to equate the true life of rigorous philosophical inquiry with what he called phenomenology, a method for analyzing our experience of ourselves and our observations of the world rooted in the concept of “intentionality,” which suggests that an individual’s consciousness is not wholly autonomous but always reaching (or intending) towards an object of consciousness: “the I is not thinkable without a not-I to which it intentionally relates.” It is the intensity of the intentional relationship between consciousness and its object, the purity of the focus, that gives the phenomenological attitude an aspect of secular revelation. 

    Husserl coupled his conception of intentionality with a reevaluation of Hume’s account of experience so as to move beyond the problems left in the wake of post-Kantian idealism. The problem with Hume’s account of experience, argued Husserl, was that he, like the British empiricists before him, approached experience as the perception of discrete sensations bundled together by the imagination. But, continued Husserl, we do not perceive objects in the world by way of discrete sensual impressions. We perceive them as part of a prior conceptual whole. Perception, argued Husserl, includes processes and events as well as static objects and their relations to each other. If this is correct, we need not worry about Kant’s “thing in itself,” for experience simply is the givenness of the world. 

    At the same time, phenomenology could build upon Kant’s attempt to provide an account of the underlying conceptual structures of science through transcendental argumentation, which Husserl also projected back onto Descartes’ method of doubt. Husserl called this process epoché, an ancient Greek philosophical term for the “suspension of judgment.” Epoché is a meditation on the essential structures of consciousness — a kind of careful distillation of the experience of consciousness. It begins with everyday experience and then attempts to bracket all aspects of the everyday in order to get to the essential structure of the “transcendental ego,” which lies hidden beneath the mundane details of life. Here, too, Husserl likens phenomenology to a religious conversion. Epoché effects a personal transformation in which the most ordinary and fundamental assumptions about personal identity are suspended: “I am not an ego who still has his you, his we…. All of mankind, and the whole distinction and ordering of the personal pronouns, has become a phenomenon within my epoché.” Just as a religious convert reevaluates the mundane world through the prism of the divine, phenomenology reconceives conventional assumptions about personal identity through the prism of the transcendental ego.

    The inclusion of “all mankind” within the operation of a single consciousness — the socialization of consciousness — raises a tension that pervades all of Husserl’s phenomenology. On the one hand, epoché, the procedure of isolating the essential features of consciousness, shows that consciousness always reaches (or intends) toward the consciousness of others. In this sense, the self is always bound up with other selves. On the other hand, Husserl insists in some of his later work that the transcendental ego is self-contained and purely subjective and even, following Leibniz, “monadic.” This tension lines up with another one that continues to plague (or inspire) Husserl’s interpreters: is Husserl a phenomenological realist who ultimately grounds the self in a real world of genuine and embodied others? Or is he a transcendental idealist, since everything seems to come down to the mind of the self-contained, self-conscious subject? This is where Edith Stein and the problem of empathy come in.

    Husserl had addressed empathy — Einfühlung, or “feeling into” — and characterized it not as a feeling per se, but rather as a “transcendent perception” of a foreign corporeality similar to but also different from one’s own. Einfühlung differs from Mitfühlung, usually translated as “sympathy,” or “feeling with,” in that the latter preserves a distinction between self and other, while the former suggests a degree of constitutive commonality — we might call it an overlap of being — that makes possible an entry into the experience of another in which the other’s experience becomes my own. Husserl described empathy as the perception of another that co-originates with a perception of oneself, but he did not clarify much beyond this. Stein’s dissertation was an attempt to provide a fuller phenomenological account of empathy. Although she presents her work as merely the ramifications of Husserl’s investigations, Stein moves beyond him in further stressing the necessity of other people for self-knowledge. On the subject of empathy, she was a deep and original thinker. 

    Empathy, according to Stein, provides not just knowledge of another but also knowledge of the self. In making this argument, Stein, following Husserl, disrupts neat distinctions between interiority and exteriority, between self and other. Such divisions stand, for example, at the heart of Mill’s conception of sympathy as an analogical relation. Following Locke, Mill argues that it is because of my own inner experiences that I am able to understand the experiences of others: “First, they have bodies like me, which I know in my own case, to be the antecedent condition of feelings; and because, secondly, they exhibit the acts, and outward signs, which in my own case I know by experience to be caused by feelings.” When I see someone blush, I can recognize this as an outward sign of their inner state of shame or embarrassment because I have had that same internal experience. Sympathy, in such an account, is a generalization from, a scaling up of, individual experience. 

    Stein offers two correctives to Mill’s account. First, she contends that his characterization of empathy actually makes it impossible, since it begins with a conception of myself as “imprisoned within the boundaries of my individuality.” Second, she argues that phenomenological analysis corrects this problem by recognizing the constitutive role of bodies in perception and knowledge: I experience sensations in my body, but I also can observe these sensations from a third-person perspective. This means that there is no distinction between inward shame and the external act of blushing. They are one and the same and are both anchored in what Stein calls “the continuity of being.” Empathy helps us “obtain the world’s second and third appearance, which are independent of my perception.” 

    Appreciating the link that Stein makes between memory and empathy makes her argument a bit clearer. Stein distinguishes between the act of experiencing and the content of experience. Both memory and empathy are second-hand experiences, in that both are the experience of something that is not happening to me in the moment in which it is experienced. When I remember something, the content of a previous experience is experienced not as something happening to me in the present, but as an experience of the past. When I empathize, I also experience a “content” (such as embarrassment or shame), but it is displaced: it is not happening to me in this moment. In this way, memory and empathy may be likened to the experience of oneself as another. 

    To some extent, it may seem that Stein is still making an argument from analogy. Just as a memory requires that the act and content of experience were one and the same for me in the past, so too empathy seems to necessitate a prior integration of the act and content of an experience in my experience of myself. This might suggest that I can only empathize with someone if I have had a similar experience of my own. But this is actually where Stein’s theory of empathy gets most interesting. Stein contends that empathy for another does not begin and end with my own experiences. She knows that we certainly can and do misunderstand the experiences of others. Yet for Stein, the possibility of error in empathy speaks to empathy’s strength, not its weakness. It is helpful to quote her at some length:

    To consider ourselves in inner perception, i.e., to consider our psychic ‘I’ and its attributes, means to see ourselves as we see another and as he sees us… Inasmuch as I now interpret it [the other’s psychic life] as ‘like mine,’ I come to consider myself as an object like it…. This is how I get the ‘image’ the other has of me…. the reiterated empathic acts in which I comprehend my experience can prove to be in conflict with the primordial experience so that this empathized “interpretation” is exposed as a deception. And, in principle, it is possible for all the interpretations of myself with which I become acquainted to be wrong…. Inner perception contains within it the possibility of deception. Empathy further offers itself to us as a corrective for such deceptions along with further corroboratory or contradictory perceptual acts. It is possible for another to “judge me more accurately” than I judge myself and give me clarity about myself.

    At its finest, empathy can be a form of self-correction by way of another’s perception of me. In this way Stein arguably saves empathy from accusations that feeling with another is ultimately only self-projection. Indeed, she stresses that far from affirming our perceptions of ourselves, empathy can be an instrument of self-criticism. 

    As is apparent even from this brief explication, Stein’s consideration of empathy is technical but deep in its focus on discrete affective experiences. There is also something moving, and expressive of her temperament, in her choosing empathy as a subject for close phenomenological investigation. And there is a great irony here: Stein’s sophisticated account of empathy foreshadows empathy’s failure in the competing Catholic and Jewish arguments about her legacy. Stein concludes her study by arguing that empathy opens the door to “the understanding of spiritual persons.” There she writes: “Whenever we come into contact with realms of value that we cannot enter, we become aware of our own deficient value and unworthiness.” But in celebrating Stein’s hybrid Jewish and Christian identity as a marker of Jewish and Christian reconciliation, Christians have not become aware of their “own deficient value and unworthiness”; instead they have looked at Jews and seen only themselves. And in vehemently denying Stein’s hybrid identity and opposing the Church’s canonization of Stein, Jews have also not become aware of their “own deficient value and unworthiness,” but have looked only within and, not surprisingly, seen only themselves. 

    Most strikingly, Stein’s analysis of empathy also allows us to recognize how the inability to see another can simultaneously be a failure to see ourselves. What is perhaps most notable about the lack of empathy in Jewish and Christian responses to Stein’s canonization is their shared inability to recognize how their respective reactions conflict with their own distinct commitments. From an internal Jewish theological perspective, Stein, despite her conversion, was, as she believed, still a Jew, albeit, from a traditional Jewish point of view, a bad one. That Stein was, from the internal perspective of the Jewish tradition, both Jewish and Christian is the harrowing challenge that Stein, and Stein’s alleged synthetic Jewish-Christian identity, put to Jews. And just as the Jewish response to Stein’s life and death is problematic from an internal Jewish point of view, so too the Catholic understanding of Stein’s life and death is problematic from an internal Christian theological perspective, if that perspective is committed to Jewish-Christian dialogue, as it certainly was for John Paul II, who devoted himself more than any other pope to the tortured question of Jewish-Christian relations after the Nazi genocide. But if we understand dialogue as a serious exchange between two positions, then the Catholic view of Stein as a Jewish-Christian synthesis essentially silences any distinct Jewish perspective, not just in terms of Stein’s self-understanding but in terms of Judaism’s distinct point of view as such. That the Church, despite what at times appears to be a valiant struggle to the contrary, cannot overcome its supersessionism — and thereby, arguably, its anti-Judaism — in its continual denial that Judaism has an integrity of its own is the harrowing challenge that Stein, and arguments about Stein’s synthetic Jewish-Christian identity, put to Christians. 
She remains a test of both faiths, and of the possibility of the moral and psychological self-overcoming that she studied in her inquiry into empathy. 

    It is important to stress that Stein herself, Sister Teresa Benedicta, fully embraced Christian supersessionism. Her understanding of her Jewishness is predicated on the distinct language of the New Testament. Her statement to her Jesuit confessor Father Hirschmann — “you don’t know what it means to me to be a daughter of the chosen people — to belong to Christ, not only spiritually but according to the flesh” — reflects a particularly Pauline distinction between spirit and flesh. Stein seems to have Galatians 5 in mind:

    Walk by the Spirit, and do not gratify the desires of the flesh. For the desires of the flesh are against the Spirit, and the desires of the Spirit are against the flesh;… the fruit of the Spirit is love, joy, peace, patience, kindness, goodness, faithfulness, gentleness, self-control; against such there is no law. And those who belong to Christ Jesus have crucified the flesh with its passions and desires.

    Stein’s allusion to Galatians to describe herself is intimately tied to her view that, in keeping with most (but not all) interpretations of Paul, the problem lies with Jewish clinging to the flesh and denial of the spirit. When she reported that she “spoke with the Savior” on Easter in 1933 and “realized it was his Cross that was now being laid upon the Jewish people,” her willingness to carry the cross on behalf of the Jewish people was owed to her view that the Jewish people needed to (finally) recognize that their “desires of the flesh are against the Spirit.” Stein believed that Jewish suffering was a direct result of the Jewish refusal to believe in Jesus — that, theologically understood, the Holocaust was punitive. This was further articulated in her response to Kristallnacht in November 1938: “That is the cross which falls upon my people. Oh, if only they could see the light! That is the fulfillment of the curse which my people have called upon themselves!”

    In her autobiography, Stein again affirms the Pauline distinction between the flesh and the spirit and its corollary belief in the spiritual inferiority of the Jews, who abide only by the flesh. That is, of course, the most ancient Christian canard about the Jews. Referring to the “rather frequent occurrence of suicide among Jews” after Hitler’s rise, Stein coldly remarks: “A Jew is able to endure severe hardship and untiring labor coupled with extreme privations for years on end as long as he sees a goal ahead. Deprive him of this goal and you destroy his vigor; life then appears meaningless, and so he can readily decide to throw it away. The true believer, of course, is deterred from such a course by his submission to the will of God.” Stein also at times traffics in other anti-Jewish stereotypes. About her friend Eduard Metis, for example, who “had one attribute which set him apart from all my other companions: he was an orthodox and observant Jew,” she remarked that she found his “Talmudic sophistry” “repugnant,” and described him in the following way: 

    He had a delicacy of feeling that might almost be termed maidenly. He was tall and slim; his face, rather thin, was usually slightly flushed; outwardly he gave no indication of being ill, but he suffered a great deal from migraines and many days was unable to work at all. Since I was always in excellent health during my university years, I pitied him for being less robust. 

    These pronouncements and others suggest that Stein did not see herself as a hybrid after all. Her worldview was Christian through and through; and like many Christians before her and after her, she simply wished that Jesus’ own people would see the light. 

    Stein is clear that, from the time she was a teenager, Judaism was not a living option for her. She was increasingly attracted to Christianity during her university years, but it was only after she read Teresa of Avila’s autobiography in the summer of 1921 that she decided to convert to Catholicism. As Stein reported, “I picked at random and took out a large volume. It bore the title The Life of Teresa of Avila, written by herself. I began to read, was at once captivated, and did not stop until I reached the end. As I closed the book, I said, ‘That is the truth.’” Stein could not have known it, but historical scholarship has since established that the sixteenth-century Spanish saint was of Jewish descent: her father and grandfather were forced to convert to Catholicism during the Inquisition. And although she did not publicly acknowledge it for obvious political reasons, Teresa was apparently aware of her lineage. It seems safe to assume that Stein would have been heartened to hear this. 

    Teresa’s description of the different stages of grace that are experienced as an increasingly deeper interiority likely resonated with Stein’s earlier attraction to Husserl’s phenomenology and its practice of epoché. Just as bracketing is an attempt to weed out the mundane exteriorities of lived experience in order to get to the transcendental ego’s essential structure, prayer for Stein, following Teresa, is a movement from the activity of the world to the passive reception of God’s grace, which “is love, and love is goodness giving itself away.” While phenomenology is hardly a mystical doctrine, it does proceed by keeping the world away from consciousness so as to isolate from all its distractions an essence of experience and therefore an apprehension of truth. And in keeping with her preference for spirit over flesh, Stein writes: “the work of salvation takes place in obscurity and stillness” and “the soul no longer sees or hears anything, the body no longer feels pain when injured, and in some cases becomes rigid like someone dead. But the soul lives an intensified life as if it were outside its body.” This is somewhat reminiscent of the phenomenological structure of empathy. 

    Yet despite whatever formal parallels there may be between phenomenology and Teresa’s account of prayer, Stein’s conversion to Catholicism was not just a movement away from the Judaism of her birth, but also a movement away from Husserl’s philosophy, which she came to equate with “the obscure faith of the intellect.” In her contribution to a Festschrift on the occasion of Husserl’s seventieth birthday in 1929, Stein imagined a conversation between Edmund Husserl and Thomas Aquinas. Adopting Thomas’ point of view, she contrasts what she calls “Catholic philosophy” to phenomenology. Whereas Husserl’s phenomenology is egoistic, idealist, and insists on a divorce between reason and faith, Catholic philosophy is theocentric, realist, and ultimately rejects a distinction between reason and faith. These points taken together may explain why Husserl, in Catholic Austria, was attracted to and converted to Protestantism, while Stein, in Protestant Germany, was attracted to and converted to Catholicism. Broadly speaking, Husserl’s and Stein’s differences, as she describes them, map onto Protestant and Catholic differences more generally. Although Husserl likened phenomenology to a religious conversion, he, following Luther, posited an absolute distinction between philosophy, which is fundamentally rational, and faith, which is fundamentally non-rational, if not irrational. Philosophy for Husserl is an approach to “knowing that does not know any revelation or that does not recognize it as an already given fact…[it] is a-theistic.” Stein, in contrast, following Thomas, believed that faith rightfully informs philosophy just as philosophy clarifies faith. This is because God’s reality infuses all of creation, and not just the inner lives of the faithful. Stein learned this as well from Teresa’s autobiography.

    Stein saw parallels between her life and Teresa’s. In her autobiography, Stein describes her childhood self as introverted, bookish, and detached from life with others, in terms similar to Teresa’s description of her own early years. Despite the alienation from other people that both Teresa and Stein felt as children, Teresa focuses on her grandmother’s pious presence much as Stein focuses on her own mother. Most notably perhaps, Stein shared Teresa’s struggle to reconcile her love of her family with her love of God. In a hagiographical essay on Teresa, written in 1934, Stein comments that “after the interior battle came a difficult outer one. In spite of all his piety, Don Alonso [Teresa’s father] does not want to be separated from his favorite daughter. All her pleas, and the advocacy of her uncle and siblings, are in vain.” In her memoir Stein writes movingly of her deep love for her family and especially her mother. Painful as it was for Stein’s mother to accept her conversion, they continued to have a close relationship. But Stein knew that her decision to enter a monastery would be incomprehensible to her mother. And indeed it was. Stein entered the Carmelite monastery in Cologne in 1933. She remained there until the last day of 1938, when, for the sake of the safety of her sisters in Cologne, she found refuge in the monastery in Echt. During her first year in the monastery, Stein’s mother did not acknowledge her letters, though she did ultimately resume limited communication with Edith. Her mother died in Breslau in 1936, and was thereby spared the fate of her daughters and other family members. 

    The relationship between Stein and her mother (as well as her family more broadly) is a testament to the kind of empathy that Stein described in her dissertation, and which Christians and Jews in arguing about her death seem unable to achieve. Once again, in Stein’s words: “Whenever we come into contact with realms of value that we cannot enter, we become aware of our own deficient value and unworthiness.” Perhaps Stein learned about the possibility of this kind of empathy from her life in a Jewish family. 

    After her conversion, however, Stein came to reassess her account of empathy. Whereas her dissertation was premised on her view that human experience alone provided both the fact and the possibility of empathy, Stein now believed that God’s intervention was necessary. As she put it in her most mature work of Christian philosophy, Finite and Eternal Being, which appeared in 1936, “many sources of error are…hidden from us so long as God does not, through a genuine interior shock — through a call in the interior — take the bandages, which cover the interior of each human being in a special way, from our eyes.” It is hard to read this reference to God’s removal of bandages from our eyes without being reminded of the canonical anti-Jewish image of the blindfolded and broken synagogue, and of Paul’s words in Romans 11:7: “Israel has not obtained that which it seeks; but the elect have obtained it, and the rest were blinded.”

    The ambiguities of Stein’s moral exemplarity are captured in her extremely Christian belief that God’s love begins and ends with the annihilation of Jewish particularity. To be sure, this may seem a surprising conclusion, given how the Church has presented Stein and how the Church has attempted to understand itself after the Nazi genocide. But Stein’s Jewishness, as she affirmed again and again, was in the service of the fulfillment of the Christian message, which for her meant sacrificial atonement for Jewish unbelief. (There were other German Jewish intellectuals, most famously Franz Rosenzweig, who believed that Judaism was a stage on the way to Christianity, though Rosenzweig eventually revised his spiritual itinerary and offered instead a kind of inverted supersessionism in which Christianity is always trying to catch up to Judaism.) She wrote of “the urgency of my own Holocaustum” before the Nazi genocide, but the term that she uses — which eventually became the name for the genocide itself — neatly captures her theological vision, as well as the starkly divergent Christian and Jewish perspectives on both Stein and the catastrophe. The term comes from the Septuagint’s translation of the Hebrew ‘olah, a burnt offering, a sacrifice that is wholly consumed by fire. The word is used in Leviticus and in Samuel with reference to Temple sacrifices, but it is also used in Genesis 22 when God commands Abraham to sacrifice his son Isaac, referred to as Abraham’s “only son,” as a burnt offering. From a Christian perspective, Genesis 22 is a foretelling of God’s sacrifice of his only son Jesus. Stein is on strong Christian theological grounds, then, in understanding herself and the Jewish people as a holocaustum. But for most, though not all, Jews, and perhaps also for many non-Christians, the horrors of the Nazi genocide make the ascription of any theological meaning to it, any redemptive purpose, perverse and grotesque. 
It is for this reason that Jews have increasingly referred to the Nazi genocide as the Shoah, a Biblical Hebrew word for catastrophe with no theological connotations attached to it. 

    Edith Stein was willing to pay the ultimate price for her convictions. That in itself is courageous and rare. She started as a Jew and ended as a Christian, and Jews, who themselves believe in the possibility of conversion, should let her go and respect the authenticity of her spiritual experience. Christians, meanwhile, must teach themselves to revere her while looking askance at the anti-Jewish dogmas that she endorsed. As a philosopher, she has a lesson to teach about the human quality that our society most desperately needs now. And yet Edith Stein’s most enduring legacy may be not her doctrine of empathy, but the hard questions about the value and meaning of human life that her life and her thought broached. The questions that she left us are more valuable than the answers.

     

    The New Statue

                                     Morning Song

    Love set you going like a fat gold watch.

    The midwife slapped your footsoles, and your bald cry

    Took its place among the elements.

     

    Our voices echo, magnifying your arrival. New statue.

    In a drafty museum, your nakedness

    Shadows our safety. We stand round blankly as walls.

     

    I’m no more your mother

    Than the cloud that distills a mirror to reflect its own slow

    Effacement at the wind’s hand.

     

    All night your moth-breath

    Flickers among the flat pink roses. I wake to listen.

    A far sea moves in my ear.

     

    One cry, and I stumble from bed, cow-heavy and floral

    In my Victorian nightgown.

    Your mouth opens clean as a cat’s. The window square

     

    Whitens and swallows its dull stars. And now you try

    Your handful of notes;

    The clear vowels rise like balloons.                                                       

    Sylvia Plath

    19 February 1961

             When my son was born, I was shocked to realize that among all the poems I knew, hardly any were about a baby or about becoming a mother. For a long time I had been accustomed to find, on almost any occasion of substance, a line of verse rising unbidden to consciousness, unerringly telling me what I was feeling. But the joyous line that had risen spontaneously and immediately at childbirth —”For unto us a child is born, unto us a son is given” — was followed by no others, and an unaccustomed silence lay heavy on my mind with the absence of any resonance between my life and a poem commenting on it.

             One of the poems that I did know (remembered from childhood because my mother had quoted it) opened with a putative dialogue between a mother and her newborn baby:

     

    Where did you come from, baby dear?

    Out of the everywhere into the here.

     

    I eventually read the poem (by George MacDonald, the Victorian novelist), and while I recognized the wit in the graphic decline of the enormous invisible “everywhere” into the diminished visible “here,” as a whole the fantasy was too sentimental for me:

    Feet, whence did you come, you darling things?

    From the same box as the cherubs’ wings.

     

    I flinched at that as I did at Mother’s Day cards.

             When, as an adult, I read Blake’s Songs of Experience, I at last found (in “Infant Sorrow”) a newborn baby speaking credibly of its own birth-agony. Outraged by its forced eruption from warm amniotic comfort into an unfamiliar and chilling world, and rebelling against both its restrictive swaddling clothes and its father’s constraining arms, the helpless baby screams cries unintelligible to the horrified parents, who wonder what demonic force is obscured behind the cloud of their struggling infant’s flesh. The exhausted baby, in its first intellectual moment, thinks it best to retreat into a silent sulk:

    My mother groaned! my father wept.

    Into the dangerous world I leapt:

    Helpless, naked, piping loud;

    Like a fiend hid in a cloud.

     

    Struggling in my father’s hands:

    Striving against my swaddling bands:

    Bound and weary I thought best

    To sulk upon my mother’s breast.

    A poet — imagining the words a terrified newborn might shriek if it had language — exposes the pieties of the usual “baby poem.” A fierce empathy with the baby’s sufferings at birth prompted Blake’s glimpse here into the disillusioned state he called “Experience,” while his earlier “Infant Joy” (from the Songs of Innocence) screened out the real baby, entering instead into the new mother’s projection (onto her actually silent baby) of her own self-absorbed joy. The mother’s fantasy that her infant (the Latin infans means “unable to speak”) begins life by complaining of its lack of a name prompts, with exquisite reciprocity, her own mirroring response: “What shall I call thee?” The baby declares that its name is “Joy,” and the mother, completing the circuit of dialogue, utters a blessing: “Sweet joy befall thee!” As the dialogue opens, the baby speaks first:

    I have no name

    I am but two days old.—

    What shall I call thee?

    I happy am

    Joy is my name,—

    Sweet joy befall thee!

    The whole second stanza belongs to the mother, as she ecstatically reinforces (with “pretty” and “sweet”) the exclusive symbiotic delight of shared harmonic naming-and-echoing. The baby smiles while the joyful mother sings, and her repetition of the sixth line is no longer a wish but a concurrent fact, confirmed by the song, the smile, and the closing period:

    Pretty joy!

    Sweet joy but two days old,

    Sweet joy I call thee;

    Thou dost smile, 

    I sing the while

    Sweet joy befall thee.

    The ecstatic narcissism of this (imagined) dialogue is surreally critiqued by Blake in “Infant Sorrow.” The “experienced” mother — disabused of her naive girlish image of a loving dialogue with her child — is groaning in her birth-pains, the father is weeping in alarm, and the baby is furious.

    These were to me real poems, confronting both the “innocent” virginal fantasy of purely joyful motherhood and the dark trauma of experience — both equally human, both requiring acknowledgment, both known to any sheltered girl who has become a mother. 

                           

              And so, when I first saw Sylvia Plath’s “Morning Song,” her narrative of how a clueless young wife gradually becomes able to love her infant, I felt astonished relief. A modern poet had at last told the story of her gradual initiation into motherhood. As “Morning Song” opens, a couple stand awkwardly around their newborn baby, conceived in love but now an unfamiliar stranger to its parents. Petrified by anxiety into immobile “statues,” the couple fear what they may have done in admitting a “new statue” into their uneasy “museum.” The house is now merely the curator of its own past, a museum of former selves, unable to conceive of a future with this unfathomable inhabitant. The naked creature intrudes into the scene by making an unfamiliar animal sound, while the parents, in joint unease, echo and magnify, with their adult voices, the infant’s inarticulate cry. 

             Plath’s wonderfully unexpected third stanza expresses, in its peculiarly slow and evolving syntax, the mother’s gradual perception-by-negatives of her new state: “I’m no more your mother than. …” The husband and the collective “we” vanish permanently from the scene, leaving the wife, a single “I,” to clarify her relation to the child she addresses. Unable as yet to conceive of that relation in human terms, she resorts instead, in the pivot of the poem, to the vague climatic terms of cloud and wind. She progresses haltingly to acknowledge that in giving birth she has signed the warrant for her own eventual death, her literal “effacement.” The sentence (which at first almost defeats understanding) offers each halting realization slowly, each segment suppressing the realms of animal and vegetable to become purely mineral, each segment answering an unpredictable question that itself stems from the words just uttered and issues in an unpredictable answer generating yet another question:

    “I’m no more your mother than”—

      than what? 

    “than the cloud that”—

    that does what? 

    “distills”—

    distills what? 

    “a mirror to”—

    to do what? 

    “to reflect”—

    what? 

    “its own slow”—

    slow what? 

    “effacement at”—

    at what? 

    “at the wind’s”—

    the wind’s what? 

    “hand.”

    As the new mother stumbles along the corridors of this intellectual labyrinth, every expected “natural” foresight in pregnancy of what motherhood has to offer (love, curiosity, nursing, “baby-talk”) is subtracted; the self-effacement becomes increasingly inorganic, un-mammalian. The syntax of this tercet, so peculiar and arresting, displays Plath’s talent for saying something — “I don’t know where this experience is leading me” — without making the statement explicit. It mimics, in its pacing, the experience itself. A comparable thought-process precedes each of Plath’s powerful words in this late phase: “What is it in this phenomenon that makes me call it ‘drafty’? What makes it a ‘museum’?”   

    The second half of “Morning Song” takes place some weeks later. The baby has been put to bed, and the maternal ritual of the first sleep-deprived months has begun. The new mother, awakened in the middle of the night by the child’s demanding cry (so aggressively different from its soft breathing in sleep), hastens to transfer the baby and its noise into another room. There, while she nurses the child in solitude, the day slowly dawns at the windowpane. Plath, by her title inserting the time of day into the poem, is transforming a known genre. The traditional “morning song” — an aubade, from the French aube, dawn — shows two lovers in bed, regretting the arrival of the sun (as in Donne’s “The Sun Rising”). The baby’s open-mouthed hunger-cry modulates over time, as it nurses, into musical cooings of satisfaction, repeated like the notes of a melody. In her startled recognition that the small foreign animal in her arms is now emitting not merely sounds but syllables (vowels and consonants linked into the first phonemes, “ma-ma”), the elated young mother comes to feel that in its possession of even rudimentary language, her baby is another human being like herself. Her spirits rise with the baby’s rising notes, and the atmosphere becomes one of festivity, with imaginary birthday balloons.

    And that is the plot, a relatively bare one (sketched almost a year after Plath’s baby was born), of the speaker’s gradual transformation from wife into mother. As the poem opens, the wife (with her husband in attendance) is in her bed at home as the baby is born; she feels for the first time the clash between her former idealizing simile of conception and pregnancy (that love had set in motion a “fat gold watch” inaugurating a golden time) — and the jarrings of childbirth, the midwife’s slap and a naked cry. The cry is shockingly perceived as repellently “bald,” unadorned, featureless, intimidating. The speaker and her husband fuse to a single “we” at the birth-moment against the baby “you.” But in the second half of the poem, the wife leaves the marital bed for the nursing of the child; she becomes (and remains) a single “I,” and the husband does not reappear. The marital duo has ceded (at least temporarily) to the maternal one.

    Plath was a terribly hard-working poet from her teen years on (as evidenced by her technically fully articulated, if still mostly formulaic, juvenilia). Very early, she had formulated a fully conventional idea of a happy life: she would be a gifted and sexual wife to an exceptional and universally admired husband, and would give birth to many babies (always imagined as babies rather than as children). After her marriage, as soon as she was attempting pregnancy, she raced to find independent imaginative forms for that envisaged life, many of them invented to express in both themes and styles a new, rich, and relatively untreated poetic enterprise: motherhood. 

    A full year before she had her first child, Plath wrote a single-stanza nine-line (for nine-month) poem entitled “Metaphors,” comparing (too archly) the pregnant body to a playful set of equivalents. Each of the nine lines of “Metaphors” has nine syllables, presenting ill-assorted and jesting definitions of the swelling body and its ultimate direction. The pregnant first-person speaker is “an elephant, a ponderous house,” and, ridiculously, “A melon strolling on two tendrils.” The last metaphors initiate pregnancy’s dangerous momentum: “I’ve eaten a bag of green apples, / Boarded the train there’s no getting off.” Too self-conscious and effortful in its casting around for whimsical metaphors, the poem nonetheless is anticipating more work ahead to illuminate motherhood. Less than two months before her child was born, Plath rewrote “Metaphors” into “You’re,” doubling its size into two stanzas of nine nine-syllabled lines addressed to her fetus (but with the same disorienting overplus of description). One moment the fetus is “gilled like a fish,” at another it becomes “my little loaf,” at yet another, it reverts to “our traveled prawn,” before ultimately becoming “A clean slate.” “Metaphors” and “You’re,” with their incoherence of imagery and artificiality of tone, cannot become interesting poems. A better poem, “Mushrooms,” written halfway through pregnancy, did not attempt an individuated embryo, gilled or baked or traveled: instead, it imagines an undifferentiated chorus of half-formed fetuses masquerading metaphorically as speaking mushrooms. They push up irresistibly through loam to air, uttering in tercets (often an “incomplete” form by comparison to couplets or quatrains), their compact two-beat threats of eventual victory:

    Overnight, very

    Whitely, discreetly,

    Very quietly,

     

    Our toes, our noses

    Take hold on the loam,

    Acquire the air.

    Like fetuses, they require no external feeding: “We // Diet on water, / On crumbs of shadow.” Although they are initially “meek,” they become “nudgers and shovers / In spite of ourselves,” propelled by genetic force:

    We shall by morning

    Inherit the earth. 

    Our foot’s in the door.

    “Mushrooms” convinces on its own terms, not presuming on the “human” personality or individuality of the fetus, but aware that it is unstoppable in its arrival.

             It will not be surprising to any reader of Plath that her youthful imagination, long before it was alerted to pregnancy, had specialized in disastrous outcomes (see her doomsday poems in the Juvenilia). Now, with a focus not global but personal, the dooms become biological. The anxiety felt during pregnancy by any mother is displaced, in “Stillborn,” onto the writing of poems, as Plath, a few months into motherhood, finds yet another seam opening in possible biological sources of poetry. But the elegy for the stillborn embodies none of the grief a mother would feel when her expected child does not survive. When unsuccessful poems become fetuses in formaldehyde, Plath’s brittleness offers a derisive grotesquerie:

    These poems do not live: it’s a sad diagnosis.

      . . .

    O I cannot understand what happened to them!

    They are proper in shape and number and every part

    They sit so nicely in the pickling fluid!

             Plath never ceased to explore motherhood, in many poems evoking maternal joy even in tragic contexts. Before I return to that joy, it must be conceded that tragedy had the ultimate victory, and so I glance ahead here to her final visual tableau of maternity, “Edge.” It opens with a posthumous tomb-sculpture of a mother and two children, but its second tableau softens to recount the slow hemorrhaging of still-living garden flowers. It is austere in its conviction of the conceptual finality of death, but surprisingly lavish in its gradual farewell to the young bodies of the children. As always in her best poems, Plath’s investigation of a topic has both intellectual weight and emotional resonance. In “Edge,” the objective, intellectually accurate, immobility of marble “motherhood” co-exists with the lamenting heart’s mimicry of the gradual dissolution of the body. The chill of the sepulchral group (conveyed in the third person: “The woman is dead”) coexists with the weeping sounds of mourning: “odours bleed / From the sweet, deep throats of the night flower.” In a bold move, Plath diminishes her multiple dying flowers to a single one, but the single flower, remembering its past companions, bleeds from multiple throats. Motherhood both is (in sculpture) and is not (in life) eternal. 

             As “Edge” closes, the spectator looks upward and says, of the indifferent moon, that in the long view of history, “She is used to this sort of thing.” Of course she is; history is one long bloodbath. But Plath being Plath, fact and intellect, compelling as they are, are not allotted the last word of the grave-garden. As the spectator looks up from earth, the moon, humanized, becomes visible as a skeletal hood of bone. But Plath insists on making the moon audible as well, and creates a cold flat music for her witch-cloak: “Her blacks crackle and drag.” Plath could not envisage her own death except by making the tomb-sculpture include her children; unless she had her two children with her, she would not be the person she is, but some past self long relinquished. Motherhood is preserved, even after death. “Edge” is an irreproachable poem, but because it memorializes her deathbed self, it belongs thematically with Plath’s meditations on death rather than with her poems on living motherhood.

    So I turn back here to my central topic — how motherhood allowed Plath to invent a poetics of motherhood — situations, elements, analogies, feelings — and to write sane and joyous poems about and to her children. Although she was never a religious believer, her interest in imaginative emblems of motherhood inevitably led her into the territory of the Christian Nativity myth. The birth of Christ to Mary offered Plath temptations to sharp modern contrasts, of which the most amusing is Plath’s playful lyric in which the Magi come to the wrong address. Yeats’ magisterial poem “The Magi” might have daunted any successor from appropriating that title, but Plath was bold enough to call her poem, too, “The Magi.” Yeats’ solemn portrayal of “unsatisfied” Magi compelled to return from Calvary to “The uncontrollable mystery on the bestial floor” is comically repudiated in Plath’s re-imagining of the encounter of a baby girl with such would-be male sponsors.

             The scholarly Magi, guided by a moving star, journeyed to greet the Second Person — the Son — of the Holy Trinity of Father, Son, and Holy Spirit. Plath’s irony and humorous skepticism mock the idea of wise old men as the best company for a newborn baby. Instead of the biblical gold, frankincense, and myrrh, modern male sponsors in our rational era would bring as gifts to the cradle the Platonic Triad of abstractions: “The Good,” “The True,” and “The Beautiful.” Such philosophical sponsors are a two-dimensional sort of “papery godfolk” who have followed the wrong star to the wrong crib. Because they are looking for a rationalist God, a “lamp-headed Plato,” Plath waves them away airily: “What girl ever flourished in such company?” She does not suggest what “company” her baby girl might thrive in. She certainly would need better patrons than the “disquieting Muses” of Plath’s own christening, or the star-followers of the Christian story, or the rationalist world’s bearers of the Platonic triad. No invented modern benefactors can fill the gaping absence, and Plath’s closing flippancy, though memorable, cannot conceal the lack of suitable protective elders for the modern female baby. 

             It took a miscarriage and almost a full year of living with her first child to enable Plath to write “Morning Song” for her second. It has many virtues: it does not fall into the sterile unlived jokes of “Metaphors” nor into the uneasy repetitions in “You’re”; it doesn’t impose the Gothic gloom of bottled babies in “Stillborn”; it restrains itself from repeating either reverent biblical mythology or the frustration of Yeats’ Magi. Its discipline continues the purity of “Mushrooms,” in which, by pluralizing her fetus and reducing it in category from mammal to vegetable, Plath could create an apprehension of threatening organic growth without imposing on her invisible fetus a humanity not yet perceivable. “Morning Song” has the inevitability of birth-momentum — what will happen next once a child is born — but lets the child remain ungendered and unnamed, only a step — one could say — from its fetus-existence.

             “Morning Song” hints immediately at biological momentum: the “new statue” utters a “bald cry” — unlovely, unsettling, and insistent — and the first sign of the mother’s response to her child is unnervingly intellectual, non-human, as abstract as wind and cloud. That relation, not yet humanized, has its own ongoing momentum, as distillation effects effacement. Surely this is the most detached portrait of motherhood in literature. Without the strangeness of its first three stanzas, “Morning Song” could not have gained the measured steps into love by which it progresses, never putting a foot wrong in its steady pace.

             Just as ontogeny recapitulates phylogeny in “You’re” (in which the fetus has gills as it goes about becoming human), so motherhood in “Morning Song” begins its metaphors for the baby in the strange realm of a remote insect-species as the child produces its “moth-breath.” (Alliteration at the end of a word rather than the beginning — “moth”/“breath” — although almost invisible, is one sound-connection that often gives Plath’s poetry an unusual texture.) At the next level of being, the new statue has now advanced in species from insect to animal, uttering an urgent cry from a mouth like a cat’s; the cry produces a companion animal, a “cow-heavy” lactating mother. The mother’s swift removal of the child and herself into another room takes a further step — the grounding of the “I” of the mother in a physiological function, nursing that is impossible to the male. The collective “we” of the couple never returns after the double establishment of the female “I” — first in the abstract terms of “I’m no more your mother” and secondly in the down-to-earth self-description by the young woman: “One cry, and I stumble from bed, cow-heavy.” The inalterable momentum of the cosmos (the cloud, the effacing wind) creates the rising dawn that “swallows” the night stars, past selves, as they dull after the irrevocable change of motherhood. The penultimate moment of the poem — in which the child, choosing its “notes,” becomes a human creature of intentional melody — arrives prefaced by the traditional “Et iam” remembered from the Latin poets, the “And now” that lifts an ongoing temporal curve into the present. Plath may be remembering Keats’ closing of his autumn ode: 

                                                 And now

    The redbreast whistles from a garden-croft,

    And gathering swallows twitter in the skies.

    The completeness implied in the conventional end-point — “And now” — wraps mother and baby and reader in the ecstatic moment in which the baby becomes human, the mother perceives the first sign of communication, and the reader feels the lift of the mother’s imagined balloons.

             It was the impeccable constancy of pace that moved me when I first read “Morning Song”: everything now on its way, the sequence of phases confident, the ending happy — not triumphant like the victory of those mushroom-invaders, but mutual, intelligible, reassuring. Later, as I saw further into the poem, its shadows troubled me: the permanent vanishing of the husband; the disappearance of the marital “we” in favor of the maternal “I”; the severe apprehension of personal effacement by the very decision to give birth; the permanence of universal natural “elements” of inorganic and organic existence (cloud, wind, birth, cry, nourishment, mind, death) within the swift transience of a single life from birth to the sepulcher; the conceptual incompatibility of abstractions (cloud, mirror, wind) and individuals (mother and infant).       

    My first emotional and structural responses to “Morning Song” did not yet admit me into Plath’s subtlety of sound, her claim on our subliminal response to the phonetic transfusion of the poem even as we gather the plot, the architecture, the pacing, and climax (or climaxes) as we traverse it. I hadn’t at first seen that the “moth-breath” was so styled because it reproduced part of the word “mother,” or that “mirror” echoed “mother” in rhythm and length as well as in sound. “Midwife,” “mother,” and “mirror,” all trochees (with stressed / unstressed rhythms), fit together as a birth-trinity; the cosmic wind’s invisible “hand” condenses itself into the audible “handful” of notes. Even the changes of agent in the closing lines make a container for the reader’s sense of closing events. We look in different directions — north, south, east, west — as the human gives way to the non-human: the organic dyad of mother and baby disappears into the inorganic dyad of window and dawn; the organic living body of the child produces the inorganic “notes” and “vowels.” Only a very flexible mind can hold in a single instant a whitening window swallowing stars, a baby’s proffering of melodic notes, and a mother’s vision of phonemic sound-balloons defined by their vowels. The “thickness” of such coalescences gives weight and solidity to Plath’s conclusion, bestows on the “new statue” an earned animation, and prompts in the new mother an awakening of love.

             Plath’s meditations on motherhood continue to deepen, enabling the reach and success of “Parliament Hill Fields,” a poem written when Plath suffered a miscarriage less than a year after her daughter Frieda’s birth. For the poem, Plath invents a yes-no structure replicating the existence/nonexistence of the fetus as it bleeds out, and replicating the movement of thought as well, as it returns repeatedly to a trauma. She interrupts her verse-narrative in this way with sad but stoic addresses to the never-to-be-known fetus:

    Your absence is inconspicuous;

    Nobody can tell what I lack. 

    . . . 

    I suppose it’s pointless to think of you at all.

    Already your doll grip lets go.

    . . . .

    Your cry fades like the cry of a gnat.

    I lose sight of you on your blind journey.

     

    The day empties its images

     Like a cup or a room.

    The day discards its hopes as the womb discards its burden. For consolation, Plath reminds herself that she has a living daughter at home, and summons up, as she walks, the glow-in-the-dark picture on the nursery wall. One by one, in her imagination, the objects in the picture begin to reveal their colors, but as Plath tries to install haloed angel-presences, each image collapses into transparent falsity; each “Blue shrub behind the glass // Exhales an indigo nimbus, / A sort of cellophane balloon.” She returns home apprehensively: “The old dregs, the old difficulties take me to wife.” 

             It was only a week after that depressed requiem for a miscarriage that Plath turned to the past and wrote “Morning Song” for comfort, but its closing joy took on weight, I believe, through her continuing sorrow over the lost pregnancy. The instability of her mood is such that two days after writing “Morning Song” (February 19), Plath’s mind, meditating on motherhood, raises two unnerving specters: she composes “Barren Woman” (February 21), reflecting her fear of infertility, and its parallel “Heavy Women” (February 26), about disillusion after childbirth. The pregnant women “Smiling to themselves” and “beautifully smug” await birth in an ominous landscape where “the axle of winter / Grinds round, bearing down with the straw, / The star, the wise gray men.” “Bearing down,” the women will experience grinding tragedy as an inevitable consequence of giving birth. 

             Plath had thought that her life would offer, as sources of joy, the rich aesthetic stimuli of conception, pregnancy, and motherhood, and after her miscarriage her imagination clung to those stimuli even as suicide (presided over by a funereal yew-tree and a moon “with the O-gape of complete despair”) rose into competition with them.

     

              Although it is true that Plath’s suicidal depression and the violent poems that it produced won her fame, the tragic evidence of her mental illness has so dominated anthologies that her efforts to record and express joy tend to recede out of sight. Except for “Morning Song,” the selections from Plath’s poetry in The Norton Anthology of Poetry introduce readers solely to the grim Plath, each poem bearing its portrait of ruin: 

    “I crawl like an ant in mourning” (“The Colossus”);

    “The tulips should be behind bars, like dangerous animals” (“Tulips”); 

    “I have suffered the atrocity of sunsets” (“Elm”); 

    “If I’ve killed one man, I’ve killed two–” (“Daddy”); 

    “The dew that flies / Suicidal” (“Ariel”);

    “I rise with my red hair / And I eat men like air” (“Lady Lazarus”).

    But as Plath, with her acute ear, completed and disciplined experience into form, the work of analysis permitted her an impersonal joy in creation that instills an inescapable vitality even in her tragic poems. Even when Plath cannot maintain her joy in motherhood, even when a menacing darkness encroaches upon the child, the initial hope and joy appear and reappear in Plath’s lines.

    “Nick and the Candlestick” begins in a sunless underground cave, lit by a single candle. We see Plath transform the hellish cave into a place beautiful enough to house her beloved new child, Nicholas. The vowel-sound of the word “love,” touch by touch (“Love, love,” “hung,” “rugs,” “of”), decorates the space of the poem, while alliterations (“roses,” “rugs,” “[Victo-]riana”) link the images:

    Love, love,

    I have hung our cave with roses,

    With soft rugs–

     

    The last of Victoriana.

    Excoriating “the mercuric / Atoms that cripple,” Plath confirms to her son the cosmic new era produced by his birth:

    You are the one

    Solid the spaces lean on, envious.

    You are the baby in the barn.

    Still, the secularized parallel with the birth of Christ weakens, rather than strengthens, the poem. There is no better illustration of the inspiration that Plath found when she turned her gaze to motherhood (as well as of her anticipatory guilt as she planned her death) than “Child,” where even her hyperboles turn tranquil:

    Your clear eye is the one absolutely beautiful thing.

    I want to fill it with color and ducks,

    The zoo of the new

     

    Whose names you meditate–

    April snowdrop, Indian pipe,

    Little

     

    Stalk without wrinkle,

    Pool in which images

    Should be grand and classical

     

    Not this troublous

    Wringing of hands, this dark

    Ceiling without a star.

    Plath could include the joy of “Child” at the same time as she was foretelling her death in “Edge.” The longer she lived, the more inextricable the alternate truths became.

             There were, and are, many difficulties in inventing poems transmitting the labile emotions surrounding the birth of children. As Plath’s efforts suggest, it is hardest of all to attach poems to pregnancy when it is uneventful: while the cells are merely multiplying, no intellectual cause wakes the imagination. When Plath treats only the physiological events, the poems (“Metaphors,” for example) are unfruitful; as soon as there is an emotional event (a miscarriage, say) the poems reach fullness and credibility. Since so much of the biology of fertility is routine, biology alone cannot provide subjects for poems. Lived responses to motherhood — because there is so little access to them in the poetry of the past, and because biology itself seemed fatally governed by that indifferent moon of the impersonal universe — have not been easy to galvanize into poetry. But Plath had courage: even when life seemed meaningless, she actively sought out new genres of childbirth and motherhood (a miscarriage poem, a Thalidomide poem, a posthumous poem).

    What medicine had to offer Plath — electroshock, ineffective medications, “talk-therapy” — was too little. (A few years after Plath’s death, Robert Lowell began treatment with lithium, which, for all its drawbacks, enabled him to live.) Plath, who studied with Lowell, admitted her debt to his autobiographical Life Studies, but their styles distinguished them — his, ever more forcefully adjectival and “plotted,” hers, more surreal and fanciful and excited (she had a weakness for exclamation marks, which she vigorously deleted in revision). “To add to the stock of available reality” — R.P. Blackmur’s definition of the purpose of art — requires the day-by-day courage to devise an individual style, which issues from a strongly individual sensibility inseparable from a desire to play with language. (Pound defined poetry as “the dance of the intellect among words.”) In Plath’s Unabridged Journals, published in 2000, thirty-seven years after her death, we follow, with pity and confusion, the incessant nightmares and tormenting emotions of an incurable patient hurled hither and yon by episodes of insanity. But from those chaotic journal pages, Plath somehow, by a perfectionism of instinct and intellect, drew her painstaking and commanding map of madness — madness disciplined, made strikingly euphonious, rhythmic, plotted, and controlled.

    In the Journals, during periods of illness, she is baffled, hysterical, malicious, vengeful, paranoid, frightened, self-loathing, and lonely beyond belief; and on the other hand, at her less anguished moments she is boastful, idealistic, hard-working, vain, self-congratulatory, self-sacrificing, resolute, and — in her best and sanest moods — tender, kind, and loving. She had an eye that noticed the smallest details of life, and her poems of motherhood are full of them: who else would say of motherhood (as she does in “Thalidomide”): “All night I carpenter // A space for the thing I am given, / A love // Of two wet eyes and a screech.” In her poems of motherhood, her highly colored lines strive constantly for truth, irony, comedy, and wit before they are bleached out by death.

    The paucity of convincing poems about motherhood remains evident. Few people are educated to the level needed to write original verse. (Most of the great English lyrics came from writers who knew several languages, usually including Latin, were beset by imagination, had a keen sense of poetic genres, were delighted by etymology, had read hundreds of fine poems and knew many of those by heart, and possessed an instinctive ear for cadence. The autodidacts — Clare and Blake in England, Whitman and Dickinson in America — taught themselves by reading intensely, not least the Bible and Shakespeare, and often by loving another art: see Whitman on “the trained soprano” and Dickinson on “The fascinating chill that music leaves.”) Few women writers become mothers; and too few mothers have the time, energy, money, and talent to write works of genius. Successful women writers have been, for the most part, single, protected, and rich. Too late for the eras of patronage and too early for reliable birth control, many talented women gave up creative hopes.

    Plath did have a patron — a wealthy woman novelist named Olive Higgins Prouty — and she grew up in a house full of books, the child of two teachers. But the familial heritage of depression proved a lethal one. Plath’s father Otto refused for four years to be examined or treated for an illness that he insisted was lung cancer but that was actually treatable diabetes; Sylvia, by her own account, felt that she died when her father did, and first attempted suicide (almost successfully) at twenty, then gassed herself at thirty; and her son Nicholas — “the baby in the barn” — after years of depression, hanged himself at forty-seven. The darkness finally defeated, too early, the gifts even of this first adequate and observant poet of motherhood.

     

    The Metaphysician-in-Chief

    On February 21, 1990, Václav Havel spoke to a joint session of the United States Congress as the newly elected president of a free Czechoslovakia. Just a few months earlier, he had been detained (the last in a long line of arrests) by the StB, his country’s infamous secret police; he said he didn’t know whether he would be going to jail “for two days or two years.” But a mere three weeks later, as the satellites of the Soviet Union began to topple, overwhelming demonstrations throughout the country forced the Communist Party to agree to the first genuine elections since the Soviet-backed coup in 1948. Nearly two months to the day after his last arrest, Havel emerged as the only viable candidate for the presidency, and was rhapsodically elected on December 29, 1989, with a level of consensus unseen since Washington.

    As a playwright, a philosopher, an essayist, and one of the eminent adversaries of his country’s regime, Havel initially regarded his election as “an absurd joke.” (Philip Roth privately described it as Josef K. making it to the Castle.) And Havel was not the only writer swept into office: Michael Žantovský (known for his translations of James Baldwin, Norman Mailer, and Joseph Heller) was elected to the Senate; Eda Kriseová (a journalist and short story writer) became one of Havel’s key advisors; and Jaroslav Kořán (who helped introduce the works of Henry Miller and Charles Bukowski into the Czech language) was elected the mayor of Prague. Never before or since had so many literary intellectuals found themselves in the house of power. Indeed, at the time of Havel’s address in 1990, the reality of Czech politics seemed to be almost the stuff of satire, befitting the sense of irony that had proved such an effective weapon against totalitarianism for nearly half a century.

    This showed itself in the language that Havel used in his speech to Congress, in which he spoke in terms that were utterly foreign to American politics and its everlasting anti-intellectualism. He stood before the assembled politicians and declared that “consciousness precedes being” — a line that weirdly received enthusiastic applause. It was classic Havel: the Goethe of government, the Metaphysician-in-Chief, addressing not just the state of the nation, but the state of the soul. I am reminded of the title character in Saul Bellow’s Humboldt’s Gift, who riffs on the candidacy of Adlai Stevenson (an image of Aristotle’s “great-souled man”), hoping that he will bring about an administration in which poets will be consulted for foreign policy and cabinet members will cite Joyce in their deliberations.

    Havel was frequently referred to as a “Philosopher King,” though almost never with the acknowledgement that this is, in fact, a smirking term, an oxymoron that denotes one of the oldest problems in Western thought: the inherent conflict between truth and power. Plato concocted a utopian and essentially satirical (which is not to say unserious) exercise to try to deal with this problem. The conclusion of this exercise was that philosophers were ultimately unfit for power, because the pursuit of truth — which is all too easily corrupted — is imperiled by politics, with its deceptions and duplicities. The joke that closes The Republic is that we should sooner expect kings to become philosophers than philosophers to become kings. And this “joke” was precisely the moral and philosophical challenge that confronted Havel the moment he went from being a dissident whose mantra was that one should always strive to “live in truth” to being the first democratically elected president of his country in over four decades. 

    Before 1989, Havel had lived his life in opposition, in what Hegel called “the labor of the negative.” But when this labor suddenly finds itself in the positive — becoming constructive rather than deconstructive — what is the dissident to do? Havel’s trajectory broaches the most fundamental questions about morality and power. Is morality enough for governance? Is compromise, which is the heart of democratic politics, an ethical defeat? How much realism can the moral man stomach? Do high ends require equally high means? Were Václav Havel’s ideas, which he forged as a rebel, compatible with the responsibilities of power? Did they prepare him adequately to rule?

    In many ways, Havel’s ascension was a real-life dramatization of a tension that had always existed in his thought: the uneasy relationship between an acute sense of irony (what Czesław Miłosz called “the glory of slaves”) and an often impossible moral seriousness. Generally, this is a tension that afflicts the oppressed: the acknowledgement of irony is one of the key resources of the disempowered, whereas the empowered, while fully outfitted with irony, are routinely disabled from acknowledging it. (Irony is incompatible with the solemnity, not to mention the majesty, of office.) Acutely conscious of this, Havel spent much of his early years as president attempting to qualify his new position. In Summer Meditations, written in 1991 on the eve of his reelection, he insisted that a person of his experience is not only suited to politics but destined for it, so long as he maintains good instincts and good taste: “The sine qua non of a politician is not the ability to lie; he need only be sensitive and know when, what, to whom, and how to say what he has to say.” But this must be contrasted with what Havel had written only a few years earlier, in 1985, as a dispossessed citizen in “The Anatomy of Reticence”:

    A dissident runs the risk of becoming ridiculous only when he transgresses the limits of his natural existence and enters into the hypothetical realm of real power… Here he accepts the perspective of real power without having any genuine power… he leaves the world of service to truth and attempts to smuggle his truth into the world of service to power without being able or even willing to serve it himself.

    Plato could not have said it better. And yet this reluctance to seek power turns out to be one of the better measures of a person’s fitness for it — as ambition itself is suspect, and a capacity for self-doubt is surely preferable to none at all. (In more recent decades, of course, and certainly in America, ambition has lost the odium that was attached to it, and perhaps for the better.) The question that Havel’s purist position raises, ultimately, is whether the intellectual’s role in society is necessarily an adversarial one, whether the intellectual can acquire power and exercise it without betraying his vocation or the truth. 

    From his biography we know that Havel had never demonstrated any visible ambition for power, and that he was acutely aware of its toxic effects on the psyche. He was also, by all accounts, undestined for leadership, being not especially gifted with charisma, oratorical power, or bureaucratic savvy. Nor is there much evidence to suggest that he was ever motivated by a sense of vanity — that he wished to be seen as a martyr, or as a symbol of his country’s triumph over totalitarianism. Havel could have easily remained in an adversarial role for the rest of his life, scribbling implacably and leaving plays and essays for posterity. But like so many of his contemporaries, he was conscripted into a political life. This is a boomerang of totalitarianism’s own making: in seeking to abolish private life, it makes an apolitical life impossible. 

    Václav Havel was born in Prague in 1936, which means that he was two years old at the time of the Munich Agreement and the Nazi annexation of the Sudetenland; and five when Reinhard Heydrich was assassinated in the streets of Prague by a secret Czechoslovak paratroop unit; and eleven at the time of the communist coup (“Victorious February”), which saw his family’s property confiscated and transferred to the state. The Havels were a landed family, known for their real estate and their presence in the film industry, and provided a prominent image of prosperity in a new era of national independence: a cultured and conscious noblesse, modeled on the example of Tomáš Garrigue Masaryk, the founding father of the First Czechoslovak Republic, which lasted from 1918 to 1938, and one of the central influences on Havel’s political philosophy. Masaryk, himself a Philosopher President, studied philosophy under Franz Brentano and taught at Charles University in Prague before becoming a member of the Austro-Hungarian parliament in Vienna. Masaryk was an Enlightenment humanist, devoted to the principles of liberal democracy and the right to self-determination. As a politician, Havel frequently located himself in this tradition and the republican values that he believed ran deep in the national psyche.

    Branded a “bourgeois element” by the communist authorities, Havel was barred by his class background from attending an academic gymnasium or studying the humanities. He almost certainly received a good education at home, but he was by all accounts an autodidact. After dropping out of a technical university in his early twenties, he began working at the Theatre on the Balustrade, one of the few creative avenues open to him in the early 1960s, when the political culture in Czechoslovakia began to thaw. In 1963, his first (and still his finest) play, The Garden Party, was performed. The play is a satire on the insanities — and the inanities — of communist bureaucracy, and on the absurdities of its language: its characters find themselves trapped, unknowingly, in cyclical deliberations that allow them to speak only in party-line platitudes.

    Crucially, The Garden Party exploits the intrinsic connection between linguistic manipulation and political manipulation, mocking the socialist langue de bois and showing how the “ritualization of language” leads to the ritualization of thought. Like much of Havel’s work, it dramatizes the menace of banality, and reminds us that one of the many tyrannies of totalitarianism is the tyranny of cliché. We also see — in this regard Havel is the heir to Kafka and Hašek — the existential dread that comes from being trapped within a system that can be undermined only by replicating its features, thus reinforcing (and making oneself complicit in) one’s own alienation. It was this trap that Havel would attempt to break out of in his own life, in what he described as a search for an “absolute and universal system of coordinates.”

    Czech politics in the 1960s turned out to be strangely literary. The same year in which The Garden Party premiered, the Czechoslovak Writers Union organized a symposium to discuss Kafka’s work for the first time since it had been banned as “decadent anti-realism” during the Stalinist period. The symposium is now widely cited as the opening shot of the Prague Spring. Since it was met with no retaliation, the Writers Union convened again every year, and in 1967 it decided to air its grievances with the regime, an event that assisted in the ascension to power of Alexander Dubček, whose democratic reforms set out to create “socialism with a human face.” (Before too long Soviet troops put an end to that fantasy.) Havel was not in the forefront of the Prague Spring. His activities in 1968 were mostly rearguard, and he was already displaying a non-partisan, anti-ideological approach, in contrast to his contemporaries, who were largely on the side of reform communism. He nonetheless penned an eloquent essay titled “On the Theme of Opposition,” in which he argued for creating a second political party instead of hoping for transformation within the communist bureaucracy. 

    Havel’s real entry into political life came in the 1970s, the era odiously known as “normalization.” In 1977, he and several other Czech intellectuals — many of whom would go on to found the Committee to Defend the Unjustly Prosecuted, known as VONS — momentously published “Charter 77.” Written in response to the trial of the (truly) underground psychedelic rock band The Plastic People of the Universe, it was less a declaration than a reminder of human rights, which the Czechoslovak Communist Party had agreed to uphold at the Helsinki Accords in 1975. The Charter pointed out that any existence of human rights in Czechoslovakia, or a “freedom from fear,” was “purely illusory.” The extent to which these rights existed was “regrettably, on paper alone,” and the regular harassment of citizens — not for openly opposing party ideology but merely for holding opinions that differed from it — constituted “a virtual apartheid.” Havel would be imprisoned for nearly four years for his role in helping to draft the Charter, which would circulate as samizdat until the revolution, by then accumulating thousands of signatures.

    Another of the Charter’s signatories was the philosopher Jan Patočka, who had studied with Husserl and Heidegger and had written extensively on Plato, Aristotle, and Masaryk. Patočka’s existential phenomenology, which heavily influenced Havel, was based on the very Hegelian notion that history is the process of overcoming alienation, and that “authenticity” comes from engaging in “truth-revealing” activities, which liberate the mind from the technological and political currents of the “everyday” world. Importantly, the phenomenological standpoint intuited a connection between the utilitarian and the totalitarian, in regarding an unspeculative, mechanized life as essentially tyrannous, and its emphasis on personal responsibility constituted a direct threat to state ideology, as it did to the whole “system” of anonymous, depersonalized power.

    Patočka, who was nearly seventy, was subjected to many lengthy police interrogations for his role in the Charter. These sessions were so stressful that he was eventually admitted to the hospital for chest pains, where he died within a week, in March 1977, of a cerebral hemorrhage. But not before firing off two extraordinary essays. One of these was “The Obligation to Resist Injustice,” which appealed to “a higher authority, binding on individuals in virtue of their conscience…” and “something that in its very essence is not technological, something that is not merely instrumental: we need a morality that is not merely tactical and situational but absolute.” “Absolute” is a very strong word. One notices, even at a glance, how this way of thinking is visibly imprinted on Havel’s philosophy. Some of these ideas would be modified — the notion of “authenticity” would be relabeled as “living in truth” — but the language that Havel used, even in his address to Congress years later, is traceable to Patočka’s thought.

    Though Patočka was a vocal defender of Charter 77, his philosophical approach was essentially Platonic, in that truth, or Truth, was something to be pursued hermetically, outside of the norms of the political order — a necessity, in this case, of living under an oppressive regime. This was the lesson that Plato learned from the execution of Socrates, which made it clear that philosophy could not be practiced in the polis. Where Havel broke from Patočka (and Plato) was in his belief that Truth could be, and must be, discovered socially. He would develop these ideas in his long and imperishable essay “The Power of the Powerless,” written in 1978, between the publication of the Charter and his imprisonment the following year. The essay diagnoses the state of one’s consciousness under what Havel preemptively termed the “post-totalitarian” age — the last days of a walking-dead ideology, in which the system is held together not by the integrity of its ideas but by a hollow devotion to procedure, and — crucially — by a mass falsification of opinion.

    In Havel’s account, life in a post-totalitarian society was characterized by an elaborate fiction, in which people became complicit in their own oppression by their willingness to “live within the lie.” Havel uses the example of an ordinary shopkeeper who places a sign in his window that says “Workers of the world, unite!,” not because he believes it but because he knows that he is expected to do it: it is a signal of his auto-obedience. He also knows that his neighbor, who might inform on him lest she be suspected, does not believe it either; nor does the policeman who questions him, nor the bureaucrat who penalizes him by reducing his pay. They are all simply protecting themselves. Thus, everyone becomes an instrument of obedience, a link in a chain of mendacity and self-oppression, knowingly contributing to a “world of appearances trying to pass for reality.” Any refusal to prop up this world, therefore, is an act of “living in truth.” Such a refusal might consist in writing a novel, putting on a play, or simply changing the punctuation on the shopkeeper’s sign: “Workers of the world, unite?” What qualifies for being a “dissident” (a word Havel always used in scare quotes) is nothing more than recognizing and pointing out the fraudulence of the world of appearances. 

    Under such conditions, civil disobedience becomes a somewhat quixotic affair, with the dissident solipsistically battling forces he is powerless to stop and knowing that he will likely be punished for it. Jiří Němec, a Catholic philosopher and fellow member of VONS, once remarked that Havel had “always written as though censorship did not exist.” This “as though” is an essential test of one’s character, and something that no regime can hope to claim or conquer. Aleksandr Solzhenitsyn, another great exemplar of this politically unreachable personal autonomy, insisted that it is a way of preserving a sense of “innocence.” Havel, like Solzhenitsyn, was not just talking the talk: if the Charter was an act of “living in truth,” based on a moral obligation to resist injustice, then it was an act that he knew would likely result in his imprisonment.

    But at what point does such an act become a display of one’s purity? This criticism came from Milan Kundera, who accused Havel of moral exhibitionism (a charge that Havel’s detractors would level against him all his life) for the role that he and Němec had played in drafting a petition for the release of political prisoners after the Warsaw Pact invasion in 1968 — a petition that Kundera refused to sign. “Such action,” Kundera argued coldly, “has only a twofold aim: (1) to unmask the world in all its irreparable amorality, and (2) to display its author in all his pure morality.” Kundera would eventually dramatize this in The Unbearable Lightness of Being, in a scene where a newspaper editor ambushes Tomas with a petition for the amnesty of political prisoners, which Tomas considers “possibly noble but certainly, and totally, useless.” The more worldly Kundera, who had already emigrated to France by the time Charter 77 was published, would in time be proven wrong. As a symbol of solidarity, the Charter’s significance was undeniable. Still, it resulted in Havel’s removal from public life for many years — years that might have been more productive had he managed to avoid imprisonment.

    When Havel was released from prison in 1984, he wrote “Politics and Conscience,” a continuation of his argument against “automatism” and mechanized life. He claimed that the values on which all societies are based are self-evident to us — values that are in our world before we recognize them, or even speak of them, and that these values are tangible in phenomena such as aesthetic revulsion. Havel uses an example from his childhood, walking to school every day under the smog of a polluting factory, which “soiled the heavens.” The existence of the factory is not just harmful to the environment, or to human health, he wrote — it is metaphysically offensive; it violates our “pre-speculative” sense that there is a “hidden source of all the rules, customs, commandments, prohibitions and norms that hold within it.”

    Havel’s critique of the spiritually empty machinery of everyday life went both ways, recognizing that post-totalitarian automatism had an analogue in the West, in the hideously lit spaces of shopping malls and TV screens. He argued that modern consumer societies and techno-capitalist democracies were ultimately unfit as an alternative to communism, because they too released the citizen from a sense of “personal responsibility” and were based on “man-made absolute[s], devoid of mystery, free of the whims of subjectivity.” This was not long after Solzhenitsyn’s infamous lecture at Harvard in 1978, an acerbic sermon in which he denounced the West for making man the measure of all things and domesticating the human soul in favor of the “cult” of individualism. This bi-directional criticism, this “third way,” was not uncommon among dissidents, nor were the appeals to a deeper humanism. Havel was not at all cynical, nor did he stoop to an embittered and reactionary chauvinism, like Solzhenitsyn. Still, his warnings about the culture of nihilism that characterizes “Western totalitarianism” — strong words, coming from him — and his insistence that we must rekindle a sense of “higher responsibility” because “man has abolished the absolute horizon of his relations”: these chastisements cut eerily close to home, and carried clerical overtones.

    Havel’s vision of the polis was essentially Aristotelian: a state grows out of human relations just as a seed grows into a tree. Modern civilization, however far it appears to stray from the natural world, is nonetheless founded on a “pre-objective” conscience about our relationship to one another. (Throughout his work, Havel variously referred to this as “anti-political” politics, “non-political” politics, and “genuine politics.”) The basis of a society is to serve Truth; it should be predicated on the pursuit of a meaningful life, with humans as ends in themselves. Like Patočka, Havel frequently argued that we are bound to “something higher,” though that exact something — Nature? God? Conscience? — is left rather vaporous.

    This was the intellectual and “spiritual” (a word that Havel used a lot) opportunity presented by the collapse of communism, which looked like the moment when an “existential revolution” was finally ready to launch, the moment when capital-H History had breached a new epoch, amid George H.W. Bush’s declarations of a “new world order” and Francis Fukuyama’s celebrated reflections on “the end of history” (a triumphalism that now seems impossibly quaint, or worse). But 1989 turned out not to be the end of history, of course. It turned out to be the resumption of history, after many decades of interruption by History and its vast social experiments that ended in failure and disgrace. Havel rightly understood that this resumption could not be smooth — that the end of communism signaled not just an ideological failure, but an existential one, which would leave deep wounds in the next century if its consequences were not properly identified and treated. Totalitarianism had attempted to denature — or re-nature — human beings, and succeeded only in deforming them. Since the system infiltrated every aspect of existence, attempting to abolish not only private life but also the privacy of the mind, it followed that whatever came in its wake would require some renovation of our sensibilities.

    Havel recognized that communism’s sudden evacuation from the world left a vacuum in people’s hearts, making them vulnerable to surrogate ideologies such as nationalism and religious fanaticism, as well as political extremism and the “primitive cult” of consumerism. In addition to political revolution, a metaphysical revolution was also needed, precisely because totalitarianism sought to eliminate metaphysical life. How this revolution would be achieved he left characteristically vague. Havel claimed that a new “self-understanding” was needed, that “we must discover a new relationship to our neighbors, and to the universe and its metaphysical order, which is the source of the moral order.” This existential revolution was not limited to post-communist polities. With globalism in view, Havel argued that one of the great challenges of the twenty-first century would be to overcome a “purely national perception of the world.” The only way to transcend this would be to cultivate a “worldwide, pluralistic metaculture” based on a recognition of our common humanity.

    But from which body would this new metaculture emerge and be upheld? The UN? A new North Atlantic Treaty Organization? A Metaphysical Council? Institutions — especially rules-based ones — which arise mainly out of the need to restrain truancy are ill-equipped for such a task. After all, it is not the role of the state to be a smithy of the soul. Its spiritual responsibility is, at most, to create the conditions of freedom in which this work can take place. Jefferson understood this, which is why the Declaration of Independence does not offer happiness, but only a door open for its pursuit. The conditions for this pursuit — human rights and civil liberties — are only the beginning, and securing them is not a guarantee against the nastier aspects of human nature. The question, which Havel addressed only indirectly (it is the question that lies at the heart of the Republic), is whether fit people are needed to build good institutions or fit institutions can build good people.

    Havel was especially perceptive about what Karl Popper called “holistic social engineering” — the willingness to use human beings in an experimental manner, something that Marxist utopian materialism tried for the better part of a century to hammer out. And here it is worth remembering that the Bolshevik coup, which set this experiment in motion, was brought about by an ascension of intellectuals to power — Lenin and Trotsky were both brilliant intellectuals — who made philosophy the business of the state, believing that it was the job of an educated vanguard to organize life for the rest of society. What is more, these men justified their actions by their belief, their certainty, that they were doing good. The corruption of the good is often the worst corruption of all. (Corruptio optimi pessima, though the application of that adage to Lenin and his comrades flatters them.) Conscious of this, Havel was careful not to suggest that an “existential revolution” would be the business of institutions conforming to ideologies and the ideologists who propound them. Where Havel broke with the tradition of social engineering from Plato to Lenin was in his profoundly anti-ideological approach. The revolution would be bottom-up and intellectually libertarian, not a top-down imposition of state-sanctioned philosophy. It was less a call for a “new man” than an appeal to auto-enlightenment.

    This drew a poignant criticism from Joseph Brodsky, who knew a thing or two about prison and life in a totalitarian state. He agreed that communism had been a human failure as much as a political failure, but he pointed out that this was less an aberration than another example of humanity’s repeated mishandling of the attempt to establish justice. The “metaphysical order” on which new institutions are to be built should thus begin with self-examination and the recognition of our tendency to bungle justice. (A look at how revolutions devour their own and the ways in which the oppressed come to resemble their oppressors is evidence enough.) Among other things, this means that calls for a “perfect society” or a “new man” are to be distrusted on principle, as they are so often the intellectual roots of horrendous social experiments.

    Like Jefferson, Havel seemed to believe that institutions could accomplish only so much, and that most of the important work involved building spiritually fit people before we could expect true justice. He was inclined to see our institutions as potential manifestations of our essential goodness — but here we must remind ourselves, against Havel, that the true legacy of the Enlightenment, a time of accelerated transformation in human consciousness, was that the new constitutions, which allowed people the freedom to pursue Truth, were based on the recognition not of our essential goodness but of our propensity for error. We cannot assume that people will become more virtuous; we must assume the opposite. (“If men were angels, no government would be necessary.”) A state of civility, however we achieve it, should be a constant reminder of how easy it has been for us at times to abandon it. If this is optimism, it is not quite idealism. And what Havel’s picture of the world was lacking was a certain degree of moral realism, which is not the same as cynicism even if it can degenerate into it.

    Havel’s sense of morality must have derived partly from having lived most of his life under a brutal regime. In such circumstances, it is tempting to perceive oneself as good and true simply by being in opposition to power. The recognition of oneself as virtuous is in many ways a luxury of the oppressed (who do not have many luxuries), one that those in power cannot always afford or justify. Thus, Havel’s famous civility — his great strength of character — became his main political weakness. Though he found himself a revolutionary, Havel did not have the rigid, unreflective mind that revolutionaries often need to repress empathy and irony. This is the “old disease” described by Rubashov in Arthur Koestler’s Darkness at Noon — the tendency to “think through the minds of others.” For better or worse, Havel was usually too willing to think through the minds of others, of his opponents and his enemies, and he always seemed ready to give them the benefit of the doubt. This was a reflection, certainly, of his own self-doubt. But it left him ill-equipped for some of the harder responsibilities of politics and governance. 

    The challenge, at home and abroad, was how to build new institutions that could overcome the legacies of totalitarianism, both within the mind and in everyday life. Temperamentally a dissident, Havel was not well prepared to build these institutions, and his idealistic notions of Truth could not find a place in the new circumstances of reform power. He lacked the practical knowledge to construct institutions that would restrain the greed, corruption, and opportunism that followed the collapse of communism in his own country (and elsewhere), in which many former apparatchiks simply changed the color of their ties and went into business for themselves, accumulating pelf in an engorged era of mafia capitalism. One of those characters was Andrej Babiš, a former StB stooge and the country’s second-wealthiest man, who served as the scandal-plagued prime minister of the Czech Republic from 2017 to 2021.

    Havel was also unable to restrain the rise of nationalism in his own backyard, when the Slovaks separated from the Czechs on New Year’s Day, 1993. The country literally fell apart. He had supported the continuation of the union, but he did not oppose the decision to separate, recognizing the Slovaks’ right to self-determination, which had been an important part of Czechoslovakia’s independence from the Austrian empire. He argued incessantly for a new global identity, but he could not dissuade the Slovaks from the cult of their own particularism and from secession from the pluralistic federation to which they had belonged since 1918. Havel was one of many impotent leaders who had to watch as the world failed to respond to genocides in Rwanda and the former Yugoslavia, for which he and many others urged international action to no avail. (Eventually the Americans and NATO took belated action in Bosnia, though not owing to Havel-like strictures of conscience.) His noble calls for an existential revolution and a worldwide metaculture came to seem absurd, and blind to the realities of an increasingly Hobbesian and particularistic world. 

    It should be noted in fairness that the Czech president is a different kind of executive, not ensconced in immense power like the president of the United States, but rather draped in symbolic cloth. A largely ceremonial figure with the power to sign and occasionally to veto laws, Havel had duties different from those of other politicians. It was his job, in a certain sense, to raise questions of selfhood, nationhood, conscience, goodness, civility, destiny, determination, the nature of Truth, and the nature of power itself. Still, his philosophy, which he developed as a dissident, showed itself to be inadequate once he came to occupy a position of power. It also revealed the irony of “the power of the powerless,” because it demonstrated that the powerful, too, are often powerless — to acknowledge and to act on the truth.

    Havel taught that we must trust our conscience. Who will deny it? Yet how does this teaching account for others who do not trust their conscience, or who do not even know what conscience is? How are we to relate politically to people who display no conscience on matters such as Rwanda, or who are content with having power guided only by interests or prejudices? Can you persuade them to have more of a conscience? Do you encourage more self-examination? Surely they will be deaf to such exhortations. (Consider the question of China and human rights.) When Havel became a politician, he experienced the very thing that he had loathed, which is that the calculus of impersonal bureaucratic power domesticates the conscience and restricts its deployment to convenience and expedience. The repeated failure of Western leaders to react to pressing threats and emergencies — the annexation of the Sudetenland, say, a miscarriage of justice that Czechs love to remind the world of — is proof enough of this discouraging understanding of the actual career of conscience in power.

    Now, more than eighty years later, with the Russian invasion of Ukraine, we have had to watch yet again as another aggressive, expansionist regime threatens the sovereignty of one of its neighbors and the stability of Europe as a whole, and once more suffer the shame of having done so little to prevent it. Havel supported NATO expansion in Central and Eastern Europe, which would suggest that he had no illusions about Russia. But this lack of illusion was set in a larger intellectual framework that was rife with illusion. Very few people anticipated that a crabbed and revanchist Russia could once again come under the rule of one man, rebuild a “pact” in its sphere of influence, and strike out against the West. But now that all this has transpired, the view that conscience can suffice as a guiding force in world affairs without the robust assistance of power, and that a “rules-based international order” can act as a deterrent for those who clearly have no conscience, is unrealistic to the point of naïveté. Perhaps naïveté is the occupational hazard of conscience. In any event, the campaign for an increased role for conscience is certainly necessary; but it is dangerous to believe that it is sufficient.

    Havel’s “letters” to the Czechoslovak leaders Alexander Dubček (in 1969) and Gustáv Husák (in 1975) have the voice of a kind of court philosopher, one trying to enlighten his despots about the unwisdom of torturing their subjects. (This was a compromise that intellectuals made for centuries.) But when the intellectual then finds himself in power, does it become his job to turn over the task of enlightenment to society? The question we might ask ourselves is, how much should we ask of our elected officials? Do we want them to be gurus, spiritual guides, telling us in their televised addresses that we need to rekindle our relationship to a metaphysical order? Are we willing to tolerate this kind of talk from tax collectors? Or do we expect them only to perform the tasks of government effectively, and to leave questions of the soul to us, whatever our philosophical qualifications?

    Many of the questions that Havel posed in the wake of communism’s collapse anticipated what we now recognize as the great challenges of our century: the move from a bipolar to a multipolar world of international relations; the tension between national identity and global identity, particularly in the confrontation with global problems such as environmental degradation; the recrudescence of tribalism, nationalism, fascism, xenophobia, incivility, and a well-heeled techno-utopianism that threatens to make automatons of us all. Addressing these challenges is surely a duty shared by both writers and elected officials. They are both agents in the discourse, but in their own ways. No one (I hope) would dispute that intellectuals have a leadership role in society, and that their responsibilities bend toward an enlightened citizenry. (The definition of this enlightenment can be debated, but there are cruel and violent worldviews that it obviously excludes.) Yet the greatest contribution that intellectuals have made to their societies has generally been made outside the halls of power, and sometimes in opposition to power. When Václav Havel went from dissident to president, he tried to make history and found himself at its mercy. His story has an element of late tragedy. We are all, it is true, at history’s mercy, including the non-intellectuals who occupy the presidencies and the palaces. But if justice is possible, it will require qualities of will and determination, and a comfort with power, that are not the strengths of saints.

    Glass of Milk

    Was a swell commandment: drink up, sleep.  She’d

    relinquished the vampy black and absconded to her toddler

    color (muddy sunset) as we, one from each grief stage,

    commissioned to flock her, petal’d her pale strapless, 

    pressed the appliqué along her spine with dancer’s glue,

    all funds sunk into that silk, hence the wan hors d’oeuvres,

    sheepish flasks, White Album on a loop.

    Eventual brood snug in her ova, she straightened, candle-

    brave before that noonday deadline.  Startled nipples

    got plastered.  A dose of almonds

    so no swooning during forevers.  Preview

    of losing it, Skyping with her guru

    to parse the voice of God thunderstruck into the nib of 

    a midcentury housewife.  Waking rapturous, un-entombed,

    to commune with birdsong and him in the mystical

    five am.  An Oona holding hydrangeas, she was.  Soon 

    to vanish into a strobing, off-kilter rainstorm,  

    the frothy whitecaps of a harbor’s embrace and resistance.  

    There stands her hometown man.  A future of borax-

    bleached nappies, the Paxil.  She turned us a keen look

    sailing down satin.  Absolute abandonment, can’t come with,

    the fox’s grin plunging unabashed into snow drift.

    Ouroboros

    Frigid in vibrating daylight, with no distinction

    between indoors and out.  Ailene on the gurney

    asked her children, Am I dying? and received

    a coward’s answer.  How she eyed the ward, panicked,

    more alive than ever.  Once a lounging

     

    teenager, biting the brush end of her braid,

    the lattice more alive than ever with carnations.  

    Braised rabbit hunted that morning, not sleeping, no

    indeed, beneath silver.  Relieved of instinct.    

    Retold in a tempo to correct the grievous echo.