Cycladic Idyll

    Lower your eyes.
    When beauty
    invades your life with such force,
    it can destroy you.
    The two ants hurrying along
    next to the soles of your feet
    are burying their summer dreams
    deep in the ground.
    The load they are carrying
    will not crush them.
    They have measured their strength accurately.

    Your shadow melts into the shade of the tree.
    Black on black.
    Guilt that must remain in the dark,
    so as to go on defining you.
    But the glare of those fragments
    may still hold you.
    You have no need of adjectives,
    of devious subterfuge.
    Every question is a desire.
    Every answer (you know by now) is a loss.
    Stay where you are standing.
    In a while it will overtake you.
    The clouds do not ask where,
    they just continue on their way.

    The Beehive

    The ambition that burned in the breasts and the brushes of the immigrant artists at La Ruche was not enough to warm them on winter nights. Hunger is what lured them to Paris and hunger is what kept them there, a zealous hunger that fortified them against the physical hunger which incessantly rumbled in the bellies of the painters, the poets, and the musicians, many of them Jews, who clawed their way from their respective shtetls to the City of Light. La Ruche, which means The Beehive, was — and still is — a colony of humble artists’ studios in the fifteenth arrondissement in Paris, near Montparnasse. La Ruche was among the many artist complexes in Montparnasse that together sheltered the École de Paris, which is what the French art critics christened the swarms of immigrants suddenly overflowing their academies and their galleries in the early decades of the twentieth century. The title distinguished the foreign contaminants from the École Française, which, according to street gossip as well as literary magazines, the emigres were polluting.

    In a monograph on the painter André Dunoyer de Segonzac which appeared in Le Carnet de la semaine in 1925, the art critic Louis Vauxcelles (himself a French Jew as well as a textbook arriviste, who was petrified that his unsavory Semitic brethren would upset his position in the art establishment) proclaimed that “a barbarian horde has rushed upon Montparnasse, descending [on the art galleries of] rue La Boétie from the cafes of the fourteenth arrondissement… these are people from ‘elsewhere’ who ignore and in their hearts look down on what Renoir has called the gentleness of the École Française — that is, our race’s virtue of tact.” (Note that anxious our.) The art critic Fritz Vanderpyl was a good deal nastier. In an article in 1924 entitled “Is There Such a Thing as Jewish Painting?” which appeared in the Mercure de France, he gnashed his teeth:

    In the absence of any trace of Jewish art in the Louvre, we are nevertheless witnessing a swarming of Jewish painters in the post-war salons. The Lévys are legion, Maxime Lévy, Irene and Flore Lévy, Simon Lévy, Alkan Lévy, Isidore Lévy, Claude Lévy, etc… not to mention the Lévys who prefer to exhibit under pseudonyms, a move that would be quite in line with the ways of modern Jews, and without mentioning the Weills, the Zadoks, whose names one comes across on every page of the salon catalogues.

    A year later the magazine L’Art vivant asked notable figures in the Parisian art world which ten living artists should be included in the permanent collection of a new museum of French modern art. The prominent Polish Jewish painter Moïse Kisling replied with commendable venom: “Simone Lévy, Leopold Lévy, Rudolph Lévy, Maxime Lévy, Irène Lévy, Flore Lévy, Isidore Lévy, Claude Lévy, Benoit Lévy, et Moise Kisling.” The Jewish painters of Paris had pride.

    Scandalous and indecorous and unconventional as its members emphatically were, the School of Paris was still another of the many limbs of the French art world, and it shared vital organs with its more traditional counterparts. Like all artists in France at the time, these young rebels depended for their careers on showing in salons. Academic pompiers such as William-Adolphe Bouguereau, the distinguished but kitschy painter who was president of the École des Beaux-Arts de Paris, maintained a tyrannical power over the standards of French art, imposing them most publicly by excluding from the official annual Salon of the Académie des Beaux-Arts any works that did not conform to the accepted canons. This stranglehold was momentously loosened in 1883. After having been rejected by the Salon three years earlier, the Impressionists and Post-Impressionists organized a Salon des Refusés (the second in twenty years), after which the Société des Artistes Indépendants was founded. Cezanne and van Gogh were among the participants in the first Salon des Independants. Since this new salon had no jury, it lacked wider prestige. As an alternative, the competitive Salon d’Automne was held for the first time in 1903, in the newly built Petit-Palais, which had been constructed for the World’s Fair of 1900. Two years later Matisse, Derain, and their cohort famously exhibited paintings in hot violent colors, which prompted Vauxcelles, consistently averse to disruption, to call these artists Fauves or “wild beasts.”

    The painters at the Beehive, and more generally the School of Paris, plotted their progress into the establishment through their acceptance into the new salons. They were not really a “school,” at least insofar as they shared no artistic theories or crusades. All the styles of their time, all the modernisms and all the traditionalisms, were represented under that octagonal roof. But they had other traits in common. While a certain degree of assimilation was inevitable for these newcomers from distant lands, their otherness never left them. Otherness was among the stimulating discomforts that all the members of the “school” shared. Many thousands of hours in Parisian cafes never drowned the aftertaste of their origins. This did not defeat them, though the history books do not honor their tenacity.

    A handful from the School of Paris would carve their initials into the annals of art, but most of their names were uttered for the last time many decades ago. They made up a large part of the throng of Montparnasse in its golden years, of its tremendous artistic commotion, but they were erased and forgotten. Say a prayer for these minor artists, who consecrated themselves to art against circumstance; these men and women whose genius sputtered more often than it glowed, and who did their best, which was often very good but rarely good enough; and say a prayer, too, for the open hands and hearts that welcomed and nurtured them, and could always spare a loaf or a smile for these destitute stewards of beauty.

    “You left either famous or dead,” Chagall said about La Ruche, into which he moved the fall of 1911. An overstatement, but barely. At the hive Chagall met Archipenko, Modigliani (a proud Sephardic Jew whose mother perpetuated the improbable origin story that her family was directly descended from Spinoza), Soutine, Kikoine, Kremegne, and Lipschitz, among the few whose names are familiar. The bohemian romance did not end well, and not only because success eluded so many. In the early 1940s history found its cruel way to the Beehive, and many of its artists were carted off to the notorious deportation center at Drancy in a northeast suburb of Paris and from there to their extinction in concentration camps in the east, mainly Auschwitz. (Chagall was long gone by then: when you could afford to get out, you got out.) The Yiddish literature about these people refers to them as “our martyred artists.” But their destruction must not obscure the thrilling lives that they lived. This mélange of visionaries and dreamers made it to the nucleus of modern art in the early 1900s. They were more than extras in the high drama of the twentieth century; for a few decades they created a scrappy, infinitely exciting universe.

    It is often said that La Ruche was designed by Gustave Eiffel himself. Like much of what is whispered about the School of Paris, this is a half-truth. The full tale begins with the World’s Fair of 1900, held in Paris from April to October of that year, just over a decade after the international exhibition of 1889, for which Eiffel built his grand tower. (Dumas, Maupassant, Bouguereau, and Meissonier were among the many writers and artists who signed the protest against Eiffel’s steel monstrosity, the “ridiculous tower dominating Paris like a gigantic black factory chimney… a dishonor to the city.”) General Commissioner Alfred Picard, dubbed by the press the most important man in France, was determined that his exhibition in 1900 would exceed in brilliance and magnitude all its predecessors, and it did. Construction began eight years in advance, and included the Grand Palais, the Petit Palais, the Métro, the Gare d’Orsay, and the Pont Alexandre III. The fairground spanned 543 acres and included pavilions representing forty-seven countries, all battling to demonstrate their technological and cultural preeminence. It was visited by fifty-one million people over the course of six months.

    The world’s first moving sidewalk, the first regular passenger trolleybus line, and an electric train ran through the exhibits. The Grande Roue de Paris ferris wheel, 96 meters high, was the tallest ferris wheel in the world at its opening. The fair also saw the first escalator, which won first prize at the exposition, as well as diesel engines, electric cars, dry cell batteries, electric fire engines, a telegraphone, and the world’s first matryoshka dolls. Thus France ushered in the new century by insisting again upon its status as the epicenter of the globe.

    Reveling in the gargantuan brouhaha of the fin de siècle, the organizers and the attendees were of course blind to the radical political and social and cultural convulsions into which the world was about to be thrown. The boys whose blood would soon soak the Continent were mere toddlers. Einstein’s theory of relativity, and its ramifications for the perception and understanding of the world, was sixteen years away. Philosophers and psychologists such as Henri Bergson and William James would soon describe human experience as essentially fractured and confused and improvisatory. Freud was poised to unleash the unconscious on European culture. The transformations that resulted from these upheavals helped to unsettle the stultifying art world as well. They created the conditions in which the creative tumult at La Ruche was possible.

    When the fair was over, the city auctioned off many of the structures that had sprouted up around Paris since 1892. Most of the others were demolished. In a remarkable stroke of magnanimity, however, the renowned (and hugely successful) sculptor Alfred Boucher, a friend of Rodin’s and Camille Claudel’s mentor, did the city, and the world, a favor: he bought several items from the municipal auction, including a building-sized octagonal wine rotunda that Eiffel had designed for the exposition, statues of costumed women from the Indochina pavilion, and a grand iron gate from the palace of women, all of which were dismantled and then reassembled on the Passage Dantzig in the fifteenth arrondissement. Boucher turned the hodgepodge of structures into La Ruche, which he called the Villa de Medici.

    In fact there was little about the place reminiscent of the Medici. It consisted of bunks, studios, a gallery space, and large rooms into which artists could troop for weekly free life-drawing classes. The eight-sided building did indeed resemble a beehive. (Most of the people inside it certainly worked like bees.) Rent was cheap, and Boucher never made a fuss when it was late. Some of the tenants took unfair advantage — one artist managed to live on credit for twelve years, without producing a single painting or sculpture. A forty-minute walk from the storied cafes La Rotonde and Le Dôme, and an hour by foot from the École des Beaux-Arts, La Ruche was well situated in the burgeoning bohemia of Montparnasse, which soon replaced Montmartre as the hotbed of the avant-garde. (The neighborhood sealed its new status when Picasso moved from Montmartre to Montparnasse in 1912.) The Beehive opened officially in 1902, and the Secretary of State for Fine Arts (a French position if ever there was one) attended the dedication ceremony, at which an orchestra played La Marseillaise.

    The inside of La Ruche was circular, with a skylight so weak it brightened only the top floor, which was therefore the most expensive. All the rest were cast in gloomy shadow unrelieved by the grimy windows that made up a wall of each studio. The sweaty stink of unwashed bodies — nobody mentions indoor bathrooms and the filth was infamous — mixed with the pungent scent of oil paints that filled the whole building. The misery was heaviest on the ground floor, where rats, roaches, screeching cats, and mangy dogs took shelter with the poorest tenants. The mattresses atop the iron beds were infested with bedbugs; the corridors and the staircases were dusty and stained. All were in a constant state of disorder, so much so that the whole place resembled the backstage of an abandoned theater; dirty busts, statues, vases, moldy fruit, and dead flowers cluttered the hallways, the detritus of innumerable still-life paintings. Every surface was splattered with paint. In the garden surrounding the building there were lime trees, chestnut trees, lilac bushes, and an enormous cherry tree that kept guard over the building like a potbellied gargoyle. From the neighboring slaughterhouses, where Chagall and others painted the condemned cows, bellowing cattle and grunting pigs were audible in the dorms.

    In other words, if you were a penurious but determined painter, it was home. During the day sculptors and painters who did not live there would use the studio space. They had very little to do with the inhabitants who made up its ecosystem. Next to the main building, down a flight of dank mud-encrusted steps, there was a double cottage where Boucher kept his own studio and workshop. That was where he worked when he was not wandering the halls and peeking into the studios of the little colony that he had founded, checking on his bees. Boucher liked to brag that he had influenced Rodin, but his grateful if slightly condescending tenants wondered aloud whether the influence did not run in the opposite direction.

    Word spread quickly amongst the immigrants with vivid imaginations and empty pockets, and La Ruche expanded considerably in the 1910s. Marc Chagall, Chaim Soutine, Michel Kikoine, Pinchus Kremegne, Fernand Léger, Alexander Archipenko, Henri Laurens, Paul-Albert Girard, and René Thomsen were among the tenants who benefited from Boucher’s sanctuary. They were followed by many others of various nationalities, ideals, and dispositions, and Boucher assiduously bought up little huts, shacks, and hovels around the main edifice, into which his colony overflowed. There were nearly a hundred and forty workshops by the end. Writers such as the poet Blaise Cendrars (who wrote a poem about the place) and the art critic Maximilien Gauthier often visited. Rumors circulated that the socialist Adolph Joffé and even Lenin himself dropped by at one time. (It is certain that Lenin and Trotsky frequented Le Dôme, one of the cafés in Montparnasse that would become a haunt for the circle of German artists from La Ruche, during Lenin’s brief exile in Paris from 1909 to 1911.)

    In the eternal battle between commerce and art, the shopkeepers and the restaurant owners surrounding the hive chose the losing side. Boucher knew that, more often than not, his artists ate and drank on credit. To repay this generous folly he converted a nearby house into a makeshift, dilapidated, and exceedingly romantic theater. Between two hundred and three hundred people could squeeze inside for the performances, and the entry fee was optional: everyone paid what they could. With the help of the city he organized productions, until he had the brilliant idea to invite undiscovered actors and directors to try their hand at running the show. The gambit was a wild success: renowned stars of the stage and early film such as Charles Le Bargy, Maurice de Féraudy (the father of Jacques de Féraudy), and Édouard Alexandre de Max had their start there. Marguerite Morena, Jacques Hébertot, and the heart-throb theater actor and movie star Louis Jouvet also appeared at La Ruche. Jouvet (who at that time spelled his name Jouvey) stuck around for several years. It was at La Ruche that he met Jacques Copeau, the theater director and founder of Théâtre du Vieux-Colombier, where Jouvet would go on to earn early celebrity. The next time you see Quai des Orfèvres, remember the Beehive.

    The artists were dependent on one another for introductions to agents and buyers, and for charity (often reciprocal, since good and bad luck share a half life), and for hot tips about which restaurant owners got to work after the early morning bread had been delivered (thievery was a professional hazard). In 1914 the war deepened the mutual dependence. The art market slowed to a glacial pace, salons were postponed, art collectors’ fists squeezed shut, and those who had received an allowance from relatives were suddenly on their own. The artists’ stipends provided by the French government dwindled rapidly.

    Yet creative solutions abounded. The Russian artist Marie Vassilieff, born in 1884, was responsible for one of them. Vassilieff was a revered figure in Montparnasse. She came to Paris in 1907, at which time, as she would tell you herself, she was unbearably beautiful. Legend has it that days after her arrival Henri Rousseau spotted her on a park bench and fell immediately in love. He proposed marriage, she declined — bad breath, she explained. In 1908, when Matisse found an abandoned convent to use for a studio, he was stalked by a crowd of implacable groupies, most of whom were foreigners, including Vassilieff, and for two years the young master gave them grudging instruction. (This became known as “Matisse Academy.”) From 1910 on, she exhibited her brightly colored cubist paintings regularly at the Salon d’Automne and the Salon des Independants. Vassilieff co-founded and served as director of the Académie russe, where many of the artists of Montparnasse would go for free life-drawing classes. After she resigned owing to tensions with coworkers, she founded the Vassilieff Academy on Avenue du Maine. During the war she transformed her academy into a canteen where hungry artists could always find something to eat. A Swedish painter remembered:

    The canteen was furnished with odds and ends from the flea market, chairs, and stools of different heights and sizes, including wicker plantation chairs with high backs, and a sofa against one wall where Vassilieff slept. On the walls were paintings by Chagall and Modigliani, drawings by Picasso and Léger, and a wooden sculpture by Zadkine in the corner. Vassilieff would put different colored papers around the lights to change the mood of the place. In one corner, behind a curtain, was the kitchen where the cook Aurelie made food for forty-five people with only a two-burner gas range and one alcohol burner. For sixty-five centimes, one got soup, meat, vegetable, and salad or dessert, everything of good quality and well-prepared, coffee or tea; wine was ten centimes extra.

    Literary events, music shows, and legendary parties distracted the indigent artists from the bleak historical moment. These bashes, which bombinated with the chatter of many languages, would last until the early morning, since the police considered Vassilieff’s canteen a private club and so did not impose a curfew. Matisse, Picasso, Modigliani, Soutine, Zadkine, Cendrars, Léger, the Swedish sculptor Ninnan Santesson, the Russian Marevna, and the Chilean Manuel Ortiz de Zárate were all regulars. Vassilieff, like everyone else, had a soft spot for Modigliani, which he tested regularly by wreaking havoc while grotesquely drunk. Marevna, who wrote a lively but not always reliable memoir of life at La Ruche, recalls one evening at the canteen when Modi (which is what everyone called him) stripped naked while reciting Dante to the frantic delight of giggling American girls.

    When the war was finally over, and rationing ended and unemployment ebbed, the French capital reclaimed its glamorous cultural status. Woodrow Wilson became the first American president to visit Paris when he came for a six-month stay to assist in negotiating a new map of Europe. Ernest Hemingway, James Joyce, Josephine Baker, Ho Chi Minh, Leopold Senghor, and many other luminaries and eventual luminaries surged into the city. In Montparnasse especially, the revival was blinding. The streets buzzed and the wine flowed. Food was easier to come by. When Lucy, Aïcha, and Kiki — the famous artists’ models of Montparnasse — stripped in studios and nightclubs across the city, they were fleshy and carefree. The great cafes had come back to life.

    La Rotonde was acquired and expanded in 1911 by a man named Victor Libion, who ran it for the next nine years and was like a father to the artists of La Ruche, whom he would allow to sit all afternoon nursing the same small coffee. When they first arrived in Paris, Krémègne, Soutine, and Kikoïne, who had all studied together in Vilnius, or Vilna, as they would have called it, always sat at the same table. In fine weather, the celebrity model Aïcha would lounge on the chairs out front, and her boyfriend, the La Ruche resident Sam Granowsky, a Jewish painter known as “the cowboy” for his tall Stetson, along with the artists Mikhail Larionov, Natalia Goncharova, and Adolphe Féder (another denizen of the hive), would clean the cafe to earn extra cash.

    Even when the bombs stopped, the memory of horror tinctured the merriment and the swaying hips in and around La Ruche. The School of Paris was in some sense essentially melancholy. These artists from elsewhere had intimate knowledge of hardship. They remembered it from their childhoods, and at La Ruche the hard times persisted for almost all of them. At their most lighthearted they were never silly, even the ones who dabbled in Surrealism. The Soviet novelist and journalist Ilya Ehrenburg recalled that “we stayed at La Rotonde because we were attracted by each other. The scandals were not what appealed to us, and we were not even inspired by new and bold aesthetic theories. Quite simply… the feeling of our common distress united us.” He was speaking specifically of the Jewish artists who had come to Paris to escape the pogroms ravaging the villages from which their families sent them anguished letters. Yiddish writers such as Sholem Asch, Oyzer Varshavski, and Joseph Milbauer used to drop by La Rotonde, perhaps on their way from or to the Triangle Bookshop, a Yiddish bookstore and small publisher just a short walk away.

    A mathematician named Kiveliovitch ran The Triangle Press and Bookstore at number 6 Rue Stanislas. In a former life he had been a student of the legendary French mathematician Jacques Hadamard. To attract the La Ruche crowd, the Triangle published a series of booklets about famous Jewish artists, short monographs with black and white reproductions. Jacques Loutchansky, Adolf Féder, Leopold Gottlieb, Moïse Kisling, Pinchus Krémègne, Jacques Lipschitz, Marc Chagall, and Abraham Mintchine came regularly to leaf through the stacks in the single narrow room. Some of the artists of the Beehive and its surroundings became subjects for the monographs, which are now bibliophilic rarities.

    One day the Marxist-Zionist activist Y. Nayman sprinted into the store and breathlessly announced that the previous Sunday he had seen the Jewish sculptor Marek Szwarc kneeling in prayer at the Sacre-Coeur in Montmartre. A scandal! Szwarc had fallen “off the path,” which came as a surprise to his coreligionists. Jewishness, for most of the Jews in Montparnasse, was mainly an identity imposed upon them by anti-Semitic prejudice, but Szwarc’s Jewishness had been fuller. He practiced Judaism, and was an active member of the small observant cohort at La Ruche. For a few years in the early 1910s, he, Henri Epstein, Moissey Kogan, and other yarmulke-clad residents of the hive founded and ran Makhmadim (which means “delicacies” or “precious things” in Yiddish and Hebrew), a publication dedicated to defining Jewish art, which was funded by the influential Russian art critic Vladimir Stassov. The series had no text and featured only reproductions of drawings by Jewish artists. The issues were thematically devoted to occasions on the Jewish calendar, such as the Sabbath and the holidays. This was an attempt to give some substance to the appellation “Jewish School,” so often used by the critics of the period.  This series is now even more rare than the Triangle’s publications.

    The question of Jewish identity at the Beehive is complicated. Most of the Jews at La Ruche were not interested in developing a uniquely Jewish style of art, whatever that might mean. National identity united them, just as national identity united the Russians and the Italians and the Americans, who all moved together like schools of fish. (In the Jewish case, of course, the national identity was a sense of peoplehood, not a derivation from a nation-state.) On the landings of the staircases at the hive arguments would break out in all languages — Yiddish, Spanish, Russian, Japanese, Polish, German — about the merits of fauvism, about the trajectory and the limitations of cubism or surrealism, about Chardin, Corot, Cezanne, Rembrandt, and so on. It has been reported that often no one bothered to listen to what anyone else was saying, but every once in a while someone committed that fatal mistake and fist fights would follow. An artist needed a group with which to argue.

    One such group, a circle of German Jews, claimed the cafe Le Dôme as their perch in 1903. Le Dôme, just across the street from La Rotonde, was founded in 1898, and was over its many years haunted by artists and writers such as Kandinsky, Henry Miller (who at one point lived in the apartment below Chaim Soutine), Cartier-Bresson, Beckett, de Beauvoir, Sartre, and many others. It was there, such a long way from “the old country,” that the young Jewish painters sat and did their business. Art dealers such as Henri Bing and Alfred Flechtheim would meet them at Le Dôme for office hours. They were there so often that Apollinaire dubbed them Les Dômiers, despite the fact that bands of Scandinavian and Dutch painters also had their own corners in the same cafe. Les Dômiers were more successful in Germany than in France: in 1911 Paul Cassirer showed the group’s work in Berlin and in 1914 Flechtheim held an exhibition in Dusseldorf called Der Dome. He described the group as “foreign artists living in Paris who met in the same cafe and who loved Paris.” Pluralize “cafe” and one has as good a definition of the School of Paris as ever there was.

    They also all worshiped the same women, and regurgitated the same bits of gossip, and they stumbled into and out of the same parties, delirious and semi-conscious hours later. One of the most notorious of these bacchanals began on August 12, 1917 and wound down in the wee hours of the morning four days later. It was the marriage celebration of two artists — Renée Gros and the infamous Polish-Jewish wild man Moïse Kisling. Gros had spotted Kisling on the street the previous year, found out where he lived, and knocked on his studio door. The nuptial bash began at the restaurant Leduc, moved to La Rotonde, reverberated off the walls of several nearby brothels, and culminated in the Kislings’ tiny apartment, into which swarms of guests poured for the ensuing debauchery. Max Jacob recited poetry, mimicking esteemed poets of the day, and Modigliani wrenched the bedsheet off the bridal bed, wrapped himself in it, and recited lines from Julius Caesar as Caesar’s ghost. Renée shrieked and chased him from the room when she recognized his costume. Three days later Kisling reported that Modigliani had been discovered sprawled entirely naked on the Boulevard Montparnasse.

    His wedding ranks among the most outrageous of Kisling’s exploits, but it does not top the list. A few years earlier, on June 12, 1914, inflamed with rage over a mysterious “question of honor,” Kisling challenged the artist Leopold Gottlieb to a duel. The Mexican cubist Diego Rivera (who years later would abandon the artist Marevna and their love child, move back to Mexico, and marry Frida Kahlo) served as Gottlieb’s second. Early in the morning the small group gathered by the bicycle racetrack at the Parc des Princes. The two men fired one shot each and then switched to swords. Tempers flared and the duel lasted an hour, ending only when the large crowd that had by then accumulated forced the two men apart. Gottlieb escaped with no more than a cut on the chin, and Kisling with one on the nose, which he called “the fourth partition of Poland.” Magazines and newspapers printed the story complete with pictures that very evening.

    Nearly two decades later, when the clouds darkened and the black curtain fell, Kisling was one of the lucky ones. He volunteered for French army service, then fled to America when the French surrendered and the Nazis occupied France. Until 1946 the Kislings lived next door to Aldous Huxley in southern California. When peace was declared Kisling and his family moved back to France, where he died in his home in 1956. The Nazis failed to destroy his paintings, as they did those of so many of his friends. His works now hang in museums in France, America, Japan, Switzerland, and Israel.

    Many of his peers, however, were murdered. These are some of their stories.

    Moissey Kogan was born in Bessarabia on March 12, 1879. A precocious childhood interest in chemistry gave way to a passion for drawing and sculpture, which led him to the Academy of Fine Arts in Munich in 1903. The great art critic Julius Meier-Graefe, who was instrumental in championing the achievements of Manet, Cézanne, van Gogh, and other painters of their time, encouraged Kogan to make a pilgrimage to Paris and visit Rodin, which he did in 1905. Rodin advised the young artist to dedicate his life to sculpture. Three years later Kogan returned to Paris and settled down at La Ruche, where he joined Les Dômiers. Kogan’s work evinces the influence of Rodin and Maillol, both of whom admired him. Like Rodin’s and Maillol’s, Kogan’s bodies are full, fleshy, sensuous, and simultaneously austere and formally pure. Most of his works depict nude female figures. In every form — drawing, woodcut, textile, and, primarily, sculpture — his line is consistently delicate without sacrificing force. Terra-cotta, bronze, plaster, and wood were his preferred mediums. Kogan eventually became one of the greatest French neoclassical sculptors. His work was admitted into the illustrious Salon d’Automne for the first time in 1907, after which he served regularly on its jury. Beginning in 1909 he exhibited at all three exhibitions of the Neue Künstlervereinigung München (NKVM) in Munich, where he became close with Jawlensky and Kandinsky. In 1925 he was elected vice president of the sculpture committee of the Salon d’Automne, a great honor for an emigre artist. He kept a studio near La Ruche at the Cité Falguière (where Modigliani and Soutine both once lived) from 1926 until his death in 1943. In 2002, art historians in Germany discovered Kogan’s name on a list of deportees to Auschwitz. The official documents that would have detailed the circumstances of his death were destroyed by the Nazis during their evacuation and liquidation of the camp. It is a matter of record, however, that Kogan was on Convoy 47 from Drancy to Auschwitz. He, along with 801 others, was likely taken to the gas chambers upon arrival on February 13, 1943. Many of his works were destroyed by the Nazis in their “Degenerate Art” campaign.

    George Kars was born in Kralupy, Bohemia in 1882. When he was eighteen years old, he left home to study art with Heinrich Knirr and Franz von Stuck in Munich. He traveled to Madrid in 1905, where he met Juan Gris and was deeply influenced by the works of Goya and Velasquez. In 1908 he settled in Montmartre. He spent the First World War on the Galician front and in Russian captivity. At the end of the war he returned to Paris, where he renewed friendships with many residents of La Ruche, including Chagall. Kars had the refined dexterity of an academic painter, but his works are spiced with the styles that dominated Paris in his day — styles which he managed to synthesize seamlessly. Goya’s and Velasquez’s rich blacks darken still-lifes and portraits that also bear the influence of Cézanne. He was enriched by cubism but not overwhelmed by it. His portraits especially display his skills as a colorist. His most exciting works are his drawings; some look so energetic it is as if he just put down his pen. When the Nazis occupied Paris, Kars fled first to Lyon and then to Switzerland. In 1945 he killed himself by jumping out of the fifth-floor window of his hotel, likely after hearing that many of his relatives had been murdered by the Nazis. When his widow died in 1966, his atelier was sold at auction. Many of his paintings were acquired by the French collector Pierre Levy and the Swiss collector Oscar Ghez. In 1978 Ghez donated 137 works in his collection, Kars’ among them, to the University of Haifa.

    Rudolf Lévy was born in Germany in 1875. He enrolled in carpentry school but left to study painting with the artist Heinrich von Zügel at the School of Fine Arts in Munich in 1899. Lévy moved to Paris in 1903, where he joined Les Dômiers. He studied at Matisse’s academy from 1908 to 1910, and then took over as head of the academy when Matisse left. Lévy would often return to Germany, where he befriended Alfred Flechtheim, who exhibited the Dômiers many times in his gallery. During the First World War he happened to be in Germany and was conscripted into the German army. When the war was over he returned to Paris, but traveled often to North Africa where he befriended Max Ernst and Oskar Kokoschka. In addition to painting, Lévy was a gifted writer, and wrote novels and poetry in German and French. When the Nazis came to power Lévy found himself in Germany, but moved swiftly to Majorca, and then to the United States. In 1937 he visited Naples with other German artists and remained in Italy for the next two years. He was in Florence in 1939, attempting to escape to America, when SS officers arrested him and transferred him to Milan. On April 5, 1944 he was deported to Auschwitz in Convoy 9. He was murdered five days later. Most of his paintings and writings were destroyed by the Nazis.

    Roman Kramsztyk was born in Warsaw in 1885. He studied painting in Cracow for a year in 1903, where he befriended several artists including Henryk Kuna and Leopold Gottlieb. Several years later these men would together form the Society of Polish Artists, known as Rytm. Kramsztyk studied at the School of Fine Arts in Munich before moving to Paris where, in 1911, his work was accepted at the Salon d’Automne. He lived in Paris for four years at the start of the first war, after which he would spend the rest of his life traveling between Paris and Poland, where he became quite famous. His work was entered in the painting event at the art competition in the Summer Olympics in 1928. Kramsztyk was visiting family in Warsaw when the Germans invaded Poland in 1939. His fate was sealed. In October of the following year, when the Warsaw Ghetto was established, Kramsztyk, along with all other Jewish residents of the city, was imprisoned within its walls. There he assiduously documented the ugliness in a sketchbook. These sketches of the ghetto are the most haunting and lasting of all his works. In one drawing, gasping children with hollow cheeks cling to a father with dead eyes; they are delicately, achingly rendered. In another the skeletal head of a young boy staring hopelessly into space is conveyed with Dürer-like grace. In that hell, while doing his grim duty to document the extermination of his own people, frenzied colors and contorted perspectives, all the Parisian innovations, were of no use to Kramsztyk. He drew what he saw. Sometime between August 6, 1942 and August 10, 1942, during the liquidation of the ghetto, he was shot and killed by a Ukrainian SS officer.

    Adolphe Féder was born on July 16, 1886 in Berlin. He became involved in the Bund Labor Movement in 1905, as a result of which he was forced to flee Berlin for Geneva, where he remained briefly before moving to Paris in 1908. There Féder became one of the most active members of La Ruche. He studied at the Académie Julian and then with Matisse at his academy. In the 1920s he did illustrations for Le Monde and La Presse, and for books by Rimbaud and Joseph Kessel. When the Second World War broke out, he remained in France, and joined the underground in Paris. He and his wife Sima were betrayed, and they were arrested on June 10, 1942. The two of them were interned for four months in a military prison on the rue du Cherche-Midi in Paris, after which Féder was transferred to Drancy. There he managed to produce many oil-pastel drawings and watercolors of life in the internment camp. Féder’s landscapes and still-lifes that predate his internment at Drancy show Cezanne’s influence, though Féder preferred hotter and more luscious colors. But the heat disappears in his works from the internment camp. Perhaps this was due to a lack of supplies, though there was in fact a place to buy paints inside Drancy. Féder was not an exceptional draftsman but an illustrator, yet his rudimentary skill somehow makes his drawings from 1942 and 1943 impossibly moving. His Drancy works differ in medium, color, subject, and location, but each person depicted has the same crushed expression. There is no light in their eyes, nor is there hope, or anger, or even sadness. These are, without exception, portraits of despair. Féder was deported to Auschwitz, where he was killed on December 13, 1943. Sima Féder survived the war and donated a number of his drawings to Beit Lohamei Hagetaot, or the Ghetto Fighters Museum, in Israel.

    There are many more such biographies from La Ruche. In 1942 and 1943, the École de Paris was decimated. At the Beehive, life, like art, went on, as it did in the rest of the cold world.

    Christianism

    Under new management, Your Majesty:
    Thine.

    John Berryman

    I

    “And the king went up to the house of the Lord, and all the men of Judah and all the inhabitants of Jerusalem with him, and the priests, and the prophets, and all the people, both small and great; and he read in their ears all the words of the book of the covenant which was found in the house of the Lord. And the king stood by a pillar, and made a covenant before the Lord, to walk after the Lord, and to keep his commandments and his testimonies and his statutes with all their heart and all their soul, to perform the words of this covenant that were written in this book. And all the people stood to the covenant.” A great awakening took place in the kingdom of Judah in the seventh century BCE, or so the king intended it to be. Josiah was the sixteenth king of the kingdom of Judah, the rump state, including Jerusalem, that remained in the wake of the secession of the ten tribes after the death of Solomon. He ruled for thirty-one years, from 640 to 609. Three centuries earlier, not long after the disintegration of the Davidic kingdom, his birth had been foretold by a strange unnamed prophet, who predicted (“O altar! O altar!”) that Josiah would be a great reformer. The Bible records — there are two accounts, in 2 Kings and 2 Chronicles — that he came to the throne at the tender age of eight, and eight years later, “when he was still a lad,” the young monarch began to “seek after the God of his forefather David.” It appears that there followed four years of intense spiritual work, because it is reported that Josiah began the religious reform of his kingdom in the twelfth year of his reign.

    The Josian reformation, his rappel à l’ordre, proceeded in stages. It is a dramatic tale. It began with a ferocious campaign against idolatry, which involved the physical destruction of pagan statues and altars not only in his realm but also beyond — a “purification” of the entire land of Israel. (The recent weakening of the Assyrian power to the north emboldened the Judean king to extend his campaign beyond his borders.) He also uprooted Israelite places of worship, with the objective of what historians like to call the centralization of the cult — the re-establishment of Jerusalem, and more specifically the Temple, as the only legitimate site of Jewish priestly rites and Jewish sacrifice. In his eighteenth year on the throne, in accordance with his plan, Josiah began a massive renovation of the Temple. It was during this project that lightning struck. As often happens on construction sites, an antiquity was found — in this instance, an old scroll. It was the book of Deuteronomy, which was Moses’ valedictory summation of the Biblical commandments and his ethical testament to his people. When the scroll was read to the king, he rent his garments and cried out in anguish at how much had been forgotten. He then summoned the population of Judah, high and low, to the Temple and read the ancient scroll to them, and announced a new covenant, a grand restoration, which was then marked by a spectacular Passover celebration at the Temple. Judging by the scriptural accounts, which are all the evidence that we have for these events, it was an electrifying moment. Zeal was in the air.

    The shocking element of this tale is that Deuteronomy, fully a fifth of the divine revelation at Sinai, the climax of the Torah, was unknown in Israel. How much more of the tradition had been lost — or more accurately, shunned and neglected and indifferently consigned to oblivion? Idolatry, and the cruelty of some of its practices, was widespread. Josiah was himself preceded and succeeded by idolatrous kings. When one reads the Hebrew verses carefully, it becomes clear that the emotion that overwhelmed Josiah when he heard Moses’ farewell address for the first time was not so much guilt as panic. For if Deuteronomy was unknown to the Jews of the time, then so were many of the fundamentals of the religion, which meant that a colossal delinquency, a terrible fall, a vast collective iniquity, had taken place. The king’s first feeling was fear. “Great is the wrath of the Lord that is poured out upon us, because our fathers have not kept the word of the Lord, to do after all that is written in this book.” This explains the vigor, and the violence, of his correction. When Josiah reflected that God is just, he trembled for his country.

    The interpretation of idolatry is one of the largest themes in the history of religion. At stake in its proper definition is the distinction between true and false faith — assuming, of course, that the veracity of belief is still a matter of consequence to believers, which is increasingly no longer the case. The term itself is pejorative: an idol, however it is construed, is ipso facto false. I was raised to recoil from the term, and to admire the many smashings of the many idols that recur throughout the ancient history of my religion. The smashers were my childhood heroes, Josiah included. It was not until I studied the history of art that I began to grasp the ugliness of iconoclasm, the brutality of it, its cost to culture. I remember the dissonance that I experienced two decades ago on the day that the Taliban blew up the monumental Buddhas of Bamyan, because the government had declared the statues to be idols. But this was what our righteous Jewish kings did, and the Lord was pleased! (I had a similarly disquieting experience when I first watched a video on Youtube of a public stoning by the Taliban and thought back to the punishment of seqilah, or stoning, mandated by Jewish law in capital crimes.) Regarded politically, the definition of an idol is: another person’s object of worship. Idolatry is your religion, not mine.

    There is nothing, of course, that could mitigate the practice of child sacrifice, but its moral offense is obviously bigger and more universal than the sin of following strange gods. So let us — anachronistically, to be sure, but we often interpret Scripture in the light of ideas that were developed long after it — pause to think kindly for a moment about ancient idolaters. They were not all savage killers. They were ordinary men and women, living vulnerably in the world, in families and communities, with needs and fears and sufferings, and they took their troubles — erroneously but sincerely — to sacralized carvings of wood and stone, and to religious authorities who — erroneously but sincerely — they believed had the power to help them. Folk religion (which the monotheisms have certainly not been spared) is one of the primary human expressions. Its coarseness represents the best that the theological imaginations of many people can accomplish: religion is not solely, or mainly, the province of intellectuals, much as it sometimes pains me to say so. I am the unlikely owner of an ancient Hittite idol from Canaan in the third millennium BCE, about three inches tall, finely made of clay — a domestic idol consisting of a single flat body with two heads, one male, one female, presumably designed as an amulet of happy conjugality. On the same shelf, to its right, as a challenge to this icon of domesticity, sits a small clay mask of Dionysus, from Paestum in the second century CE. When I look at them, I see illusion, beauty, difference, and humanity. Pity the faith that cannot withstand the sight of them.

    All this, as I say, is an anachronistic way of looking — but not completely. As Hume observed, the multiplicity of the gods in polytheistic religions inculcated a climate of tolerance, whereas the exclusiveness of the monotheisms had precisely the opposite social and political effect. The ancient world was violent, but not owing to holy wars. State power availed itself of many divinities and many cults. By contrast, the human costs of the mono in monotheism have been incalculable. (It was not until modernity that we learned of atheism’s equally hideous costs. Evil has a home everywhere.) If it is appropriate to speak of religious pluralism in the ancient world, then it is also appropriate to describe Josiah’s (and Asa’s and Hezekiah’s) extirpation of the idols as a war against pluralism. Or more pointedly, as a war against what democrats and liberals believe. Our models are not to be found in Kings and Chronicles.

    Yet the post-liberals of our day find them there. A few years ago a group of Catholic post-liberals founded a website, which has expanded into books and podcasts, called The Josias. Josias is Latin for Josiah. (The Hebrew original, Yoshiyahu, most likely means “healed by God.”) The first editor of The Josias, Edmund Waldstein, is a Cistercian monk in Austria who — judging by his own contributions to his journal — is a sophisticated theologian, as are some of the other contributors. I have now read a good deal of The Josias and I can report that, except when it surrenders to a genuinely foul invective about what it abhors — abhorrence is one of its main activities — its writings have all the rigor, and all the charm, of dogmatics. In its way it reminds me of orthodox Marxist discourse, in which fine points of doctrine are scrupulously examined without any interest in the scrupulous examination of their philosophical foundations. The difference between theology and philosophy is that philosophy inspects the foundations, whereas theology merely builds on them. How serious can thinking be when its own premises are protected from it?

    As in all doctrinaire writing, the writings of these post-liberals, of all post-liberals, have a settled and self-congratulatory tone, and express the mutual admiration of a quasi-conspiratorial fraternity. (Are there any women among them?) They are the club of the just. The motto of The Josias is non declinavit ad dextram sive ad sinistram, “to incline neither right nor left.” This may sound like an invigorating assertion of intellectual independence, until one recalls that it is also the title of the definitive historical study of the rise of fascist ideology in France. “Neither right nor left” was the motto of a crack-up, of a philosophical desperation. The purpose of The Josias, its founding editor has written, is “to become a ‘working manual’ of Catholic political thought.” But not all Catholic political thought. It is the organ of a particular school, known as integralism. Here is Father Waldstein’s explication of the concept: “Catholic Integralism is a tradition of thought that, rejecting the liberal separation of politics from concern with the end of human life, holds that political rule must order man to his final goal. Since, however, man has both a temporal and an eternal end, integralism holds that there are two powers that rule him: a temporal power and a spiritual power. And since man’s temporal end is subordinated to his eternal end, the temporal power must be subordinated to the spiritual power.” Or in the less reflective words of an American integralist, “the state should recognize Catholicism as true and unite with the Church as body to her soul.”

    Premises, premises. The Catholic post-liberals are animated by a crushing sense that we, America and the West, have fallen. The feeling of fallenness is not theirs alone: it is one of the few things that unites this disunited country, though we differ in our preferred heights. For the integralists, whose very name suggests that the rest of us are disintegrated, what we have lost is the magnificent unity of church and state. That is the fissure that infuriates them, that they wish ruthlessly to repair. They are wounded holists; yet another bunch of moderns with a burning hunger for the whole. They detest “the personalization of religion,” as if there are no religious collectivities and religious institutions and religious movements in our liberal polity, as if social domination and political control are necessary conditions of spiritual fulfillment. It is important to understand who were the authors of the abomination that the American integralists wish to repeal. Whereas some of them can live with aspects of Karl Marx — neither right nor left, remember — it is finally James Madison whom they cannot abide. He, after all, was the diabolical author of the separation, and Jefferson, and Mason, and the other founding fathers of the American dispensation. (And Roger Williams, the founding grandfather, whose banishment from the highly integrated Massachusetts Bay Colony marked the inauguration of the separation.) Integralism as an ideology originated in late nineteenth-century Europe, particularly in France, in the Action Francaise of Charles Maurras (the American integralists remind even the editor of First Things of Maurras, and also of the Catholic phalangists of Franco’s Spain); but now Maurras has been pitted against Madison. What a villain Madison was!

    I call these Christians Christianists, in the way that we call certain Muslims Islamists. Christianism is not the same as Christianity, just as Islamism is not the same as Islam. (There are Jewish parallels in Israel.) Christianism is a current of contemporary Christianity, of the political Christianity of our time, a time in which religions everywhere have been debased by their rampant politicization. The Christianists, who swan around with the somewhat comical air of an avant-garde, are in one respect completely typical of their day: they are another group in our society that judges governments and regimes and political orders by how good they are for them. This selfishness, which is a common feature of identity, is as tiresome in its religious versions as it is in its secular ones; it is an early form of contempt, and extremely deleterious to the social unity that the Christianists fervently profess to desire.

    I am not a Catholic. I am an ardently Pelagian Jew. I would prefer not to intervene in the disputations of a church that is not my own. The problem is that these are also the disputations of a country, and a civilization, that is my own. The ideas and the programs and the fantasies of the Christianists bear upon the lives of citizens who are not Christians, who answer to other principles. I will give an example. In an ambiguous essay on immigration that treads warily between liberalism and populism, Father Waldstein remarks: “After the horrors of the World Wars of the twentieth century, a new ideal of global solidarity founded in a secular, liberal conception of human rights came to the fore. This aridly rationalistic liberalism, however, cannot provide true universal solidarity, which can only be found in the Social Kingship of Christ.” Never mind, for now, that the “arid” secular liberal conception of human rights, going all the way back to Kant’s sublime idea of a universal right to hospitality, has been infinitely more effective in aiding and sheltering immigrants, in taking in the poor and the weak, than the ethno-nationalist regimes that the post-liberals celebrate, which scorn them outright. The fact that the vast majority of the refugees in Europe are not Christians has dissuaded these governments from acting on the Social Kingship of Christ. Would they help me off the dinghy and on to dry land? When I read Waldstein’s words, I think: those words are not for me. They are, by implication, against me. They cast me outside the circle of universal solidarity, an insulting ban, because for me the notion of the Social Kingship of Christ is nonsense. He is not my king. Such a ground cannot compel my assent. They must give me a better reason to repudiate liberal immigration policy, especially as I hold that we must give sanctuary to more of the miserable. The Christianists can do anything they wish with their church, but they cannot do anything they wish with their country. They must respond to the objections and the anxieties of their non-Christian and non-integralist brothers and sisters. When I see them palling around with Viktor Orban and extolling Nigel Farage as “the defining mind of our era,” their business becomes my business.

    II

    In these times it is common to hear that everything is broken. As an expression of anxiety, the slogan must be accepted. As an analysis of what ails us, it is plainly wrong. Everything is not broken. Some things are, some things are not. The last thing we need in this crisis is to surrender our sense of the particulars and the possibilities. The belief in brokenness, however, has enjoyed a long career in the history of religious and political thought. In those worldviews in which brokenness is the most salient characteristic of the cosmos and the person, the existence of the many elements, the multiplicity of the parts, the clutter of the pieces, is regarded as a problem, a catastrophe, a punishment for transgression, a fate from which we must be redeemed. There is an overwhelming presumption in favor of the one over the many. Every separation is a wound, a crack, an exile, a tear in the fabric, a setback to be overcome. Once there was unity; now it is no more; may it come again soon, amen. Among the adherents of such totalistic views of life, it never occurs to anyone that perhaps multiplicity is the natural condition of the world — that it is not the problem but the solution. They do not acknowledge that the primary fact about anything is individuation: it is this and not that, it is itself and not another thing. In religious language: my soul is mine alone. Individuation is not a writ of loneliness, though loneliness may be one of its results; it is a writ of specificity, of potential. It is how we begin and how we end; what we are before we belong, while we belong, and after we belong. In a life of joinings and partings, it remains constant and irreducible. Attempts to deny it or to flee it are usually disastrous. The scanting of individuation, the attempts to amalgamate the individual soul out of its distinctiveness and to dissolve it into an imaginary whole, all the communitarian ideologies of integration, have often brought misery into the world. Surely there are recesses of the soul that public affairs ought not to reach — or is that the “privatization of religion”? So many innocent people have been hurt by other peoples’ feelings, and theories, of loneliness.

    Integralism is not a post-liberal innovation, of course. Its modern origins, as noted, were in the Action Française of the hateful Charles Maurras, whose motto was “politique d’abord!” But there were also non-reactionary versions of the integralist enterprise, most notably the “integral humanism” of Jacques Maritain, who began as a supporter of Maurras but in 1926 revised his views and advocated a Christian democracy that promoted the creative forces of the person — “the holy freedom of the creature” — acting in history. Maritain’s integralism called for a Christendom that would “correspond to the period into which we are entering,” and in a calm tone of constructive realism he renounced the dream of a new unification of altar and throne, of the sacred and the secular. He was explicit about the pluralistic nature of his integralism: “Civil society is made up not only of individuals, but also of particular societies formed by them, and a pluralist polity allows these particular societies the greatest autonomy possible.” Reading Maritain’s large-hearted pages, one is struck by the meanness, the stridency, the resentment, that disfigure the pronouncements of many post-liberal integralists. Our Christianists are sometimes so unChristian.

    I was first introduced to the rich and troubling intellectual universe of twentieth-century French Catholicism, to its epic struggle with the relations of the sacred and the secular, by a small and beautiful book by Jean Danielou called Prayer as a Political Problem, which appeared in 1965. Danielou was a Jesuit and a cardinal, and a member of the Académie Française; a towering scholar who was one of the fathers of modern Patristic studies; the interlocutor of Bataille and Hyppolite and Sartre on the subject of sin; an “expert” invited by Pope John XXIII to participate in the deliberations of Vatican II. Some of the post-liberal Catholics refer occasionally to Danielou’s book. Prayer as a Political Problem is one of the primary documents of the collision of religion with modernity. It is a deep and deliberate book: “for me, the sphere of the spiritual is as rigorous a discipline as that of any of the profane sciences.” I have given Danielou’s book, as a cautionary gift, to Jewish friends wrestling with similar perplexities. This little volume is a precious statement of what I do not believe, and it is an honor to argue with it.

    Danielou begins by noting the “incongruity in the juxtaposition of a private religion and an irreligious society,” which he regards as the lamentable norm in modern Western societies. The obvious course of action is to relieve religion’s confinement to the private realm and find a way to release it into the public realm — “the extension of Christianity to an immense multitude, which is of its very essence.” But Danielou, at least at first, does not seem to harbor holistic aspirations. “How are society and religion to be joined,” he asks, “without either making religion a tool of the secular power or the secular power a tool of religion?” A splendid question! The frontiers are not trespassed. It has an air of patience and friendship. There is no program for transforming the sacred and the secular into one thing. The objective seems to be co-existence, with boundaries and a generous understanding of the realms.

    But soon there follows a less splendid question. “What will make the existence of a Christian people possible in the civilization of tomorrow?” And he continues: “Our task is to discover what those conditions are which make a Christian people.” What does Danielou mean by “a Christian people”? A people composed entirely of Christians? But there is no such people, at least none that conforms to any national borders. Though he has some skeptical things to say about the separationist arrangement, Danielou makes a resounding defense of religious liberty. Yet slowly his argument creeps disappointingly towards holism, and on strangely practical grounds. “Experience shows that it is practically impossible for any but the militant Christian to persevere in a milieu which offers him no support…Christians have need of an environment that will help them. There can be no mass Christianity outside Christendom.” And more generally: “Only a few would be able to find God in a world organized without reference to him.” In other words, to be a Christian somewhere there must be Christianity everywhere. Otherwise heroism would be required for faith.

    Danielou makes the issue more concrete with the case of prayer. “Prayer is a personal relationship with God,” he writes. “Does it not belong strictly to personal life? It is true that it does, but it is also true that the full development of this personal life is impossible unless certain conditions obtain.” He concludes that “the civilization in which we find ourselves makes prayer difficult.” I do not understand this complaint. Prayer is difficult. It is an attempted communication with occult metaphysical entities, a regular approach on transcendence. It requires that one collect oneself from one’s own dispersal. Is there anything harder to do? To want prayer to be easy, to make it banal and frictionless, is to invite decadence into your faith. And if prayer is difficult, it is not because there are people of other faiths, or of no faith at all, in the society in which you live. Communal prayer is one of the pillars of traditional Judaism, and in its strict construction it requires a quorum of ten men; and I can testify that not once in my many attempts to pray in my synagogue was the presence of the church down the street an obstacle to my concentration. There were obstacles, to be sure, and sometimes they included the other nine Jews in the room, but mostly they were inside myself. I would not have the audacity to blame my spiritual infirmities on others, and certainly not on my Christian neighbors.

    Many years ago I got into a spat with William F. Buckley on this very point: he remarked that the social and cultural diversity of New York was making the formation of Catholic identity more difficult, and I replied that my difference from him was not the reason that he might be having trouble keeping his children in church. Anyway, all communities of faith in an open society are confronted by this challenge. (As Danielou sagely observes, “engagement in temporal affairs is at one and the same time a duty and a temptation.”) Are we to conclude, then, with Danielou, that the transmission of religion is impossible in a multi-religious society? This would be perverse: it is precisely the philosophical and political framework of pluralism, of liberal indifference (“neutrality”) to the fortunes of particular confessions, that makes such transmission possible, by leaving it in the hands of the believers themselves. If they fail, however, it is largely their own fault. Of course we could offer them some assistance by constricting and even closing down our society and thereby make their dream of conformity a reality. But close it down to whom, and for whom? Which faith will dominate, and why should other faiths trust it? Should the church or the mosque or the synagogue seize the government for the sake of their children? My children, or yours? These jeremiads against the separation of church and state can only have been composed by people who are confident that they would be the winners.

    Religious traditions with historical memories of persecution should carefully ponder the moral consequences of undoing the separation. They might also consider whether Danielou’s vision of the identification of religion with its environment does not betray one of the central tasks, and privileges, of religion, which is to be counter-cultural. When the church and the state are unified, the state will no longer hear the truth from the church. Social and political criticism will be heresy. The standpoint from which power may be criticized independently and disinterestedly, in the name of values that are supported only by their own validity, or by their supernaturalism, will have vanished. The efficacy of Martin Luther King, Jr.’s agitation was owed in large measure to the force of the religious language that he hurled against the policies of government; a stranger to the state, he came to chastise and to castigate. If there must be no established religion, it is in part because religion’s role in society must be adversarial. There is an embarrassing passage in Danielou’s book in which he notes that “Christianity works alongside those [institutions] which exist, purifying them of their excesses and bringing them into conformity with the demands of the spirit” — and so “it was in this way that it acted on slavery, not condemning it as such but creating a spirit which rendered its continuance impossible.” But over here, in America, where no such alignment of religion with the institutions is required, there were Christians who denounced slavery directly and bitterly, and on Christian grounds. The abolitionists could say it straight; the critics were separated and free.

    Here is Danielou’s sentence again, except that I have altered one word: “Experience shows that it is practically impossible for any but the militant Jew to persevere in a milieu which offers him no support.” I have heard that sentiment my entire life from people who were sincerely frightened by change, by their children’s physical and spiritual mobility. For this reason, traditionalists like to stick together. The Christianists, too, can stick together. They can also secede, in the manner of “the Benedict option,” except of course that they are enjoined by the Gospels to do good in the world. Our society can become a collection of bubbles, a bubble of bubbles — not the best deployment of a multicultural society’s resources, and certainly not any sort of tribute to the various traditions that are terrified of being tested by the world, but we are headed in that direction anyway. As a historical matter, the Jews in the West did persevere in a milieu which, to put it mildly, offered them no support. That milieu was Christianity. There were no sympathetic surroundings to buck them up and ratify their exertions. And ironically enough, it was not owing mainly to social isolation that they survived, and without possessing political power created a civilization. Even though the Jews lived in walled quarters of the city, they had many kinds of relationships with their Christian neighbors, and they were exposed, sometimes by compulsion, to Christianity regularly. (Some of them could not withstand the pressure of their otherness and converted.) But it was not a bubble that protected them and their tradition. A hostile environment permits no bubbles. What protected the Jews was their faith and their will, and perhaps also the steadfastness that is one of the rewards of minority experience. There is indeed a measure of heroism, or at least an extra measure of inner resources, required of all minority faiths, of Jews in Christian surroundings and of Christians in secular surroundings.

    But here are the Christianists, whining that they do not run everything. How much compassion should we muster for the pain of the post-liberals? How hard is it, really, to be a Christian in America? I appreciate the constant dissonance that a religious individual experiences in our secular culture, not least because of its lunatic sexuality. Raising children in a digital society is a traditionalist’s nightmare; the wayward influences get in like water under the door. So resistance must be offered, certainly, and not only by devout Christians. Resisting the world is one of the signature activities of the spiritually serious. “The greatest danger for the Christian,” Danielou says, “does not come from persecution but from worldliness.” But is the burden of resistance not worth the prize of freedom? The Christianists like to mock religious freedom, or as they prefer to call it, “religious freedom,” as a counterfeit corollary of the separation. The First Amendment does not suffice for the apotheosis of their particular Christian ideal. There is something especially obnoxious about people who enjoy freedom and disparage it. Do they have no idea of what the world is like?

    Where in America is a Christian prevented from practicing Christianity? Of course there are occasional tensions between certain interpretations of Christian fidelity and the law, such as selling a wedding cake to a gay couple, and the courts may not always rule in favor of the Christian party in a dispute, but this is not exactly being thrown to the lions. The frustration of a Christian in a non-Christian world is inevitable, but it is ahistorical and self-pitying to mistake frustration for persecution. (The American Jews who insist upon the erosion of their “religious freedom” in America are just as bratty.) There are ghastly wars against Christians taking place in many countries around the world now, but the United States is not one of them. As for the infamously naked public square, I see religious words and symbols, Christian words and symbols, wherever I wander. God is plastered all over America, and so is Jesus. I can live comfortably with the ubiquity of Christianity in my country, because my belief is not damaged by the evidence of a different belief, and because the evidence of their own belief delights many of my fellow citizens, and because other streets display other messages. The public square should illustrate the public.

    And yet, all the glittering Christian iconography notwithstanding, America is not, in Danielou’s phrase, “a Christendom,” even if most of its inhabitants are Christian; and it is not the duty of Americans, even of Christian Americans, to make it one. This was the American innovation: to interpose rights and freedoms between the religious definition of the country and the religious persuasion of its majority. Insofar as Americans are a people, we are not a Christian people, because we have chosen to alienate no gods and no godless, to embark upon a kind of post-Humean experiment in securing a polytheistic tolerance for a monotheistic society. Perhaps, from the standpoint of justice, pluralism is a restoration of polytheism. No wonder it rubs certain true believers the wrong way. “Thou shalt have no other gods before Me”: the One (or the Three-in-One, but never mind) must be the only one — this is the status anxiety of God. There is no room in the cosmos for many gods; but there is room in America.

    The Christianists teach that it is the responsibility of Christians to influence the institutions of government in ways that will be favorable to the realization of their churchly goals. As Danielou writes, “Christianity ought for the sake of its own final end to influence the institutions of the earthly city.” Enter the bizarre Adrian Vermeule, the man of the integralist hour. (It was he who issued that encomium about Nigel Farage.) He has a plan of influence, which he calls “integration from within.” It is his retaliation against the separation. His plan is to infiltrate the government with post-liberals of his persuasion: “non-liberal actors strategically locate themselves within liberal institutions and work to undo the liberalism of the state from within. These actors possess a substantive comprehensive theory of the good, and seize opportunities to bring about its fulfillment through and by means of the very institutional machinery that the liberal state has providentially created.” Politique d’abord! This is a program for a long march, even for a crusade; and even for the kind of organized hostile penetration, stealthy or otherwise, that provoked Sidney Hook in 1953 to lay down a splendid rule: heresy, yes; conspiracy, no.

    “The state will have to be re-integrated from within, by the efforts of agents who occupy strategic positions in the shell of the liberal order,” Vermeule explains. “Less Benedict, more Esther, Mordecai, Joseph, and Daniel.” Those “agents,” you see, “in various ways exploit their providential ties to political incumbents with very different views in order to protect their views and the community who shares them.” Political incumbents: plainly he does not have only Nebuchadnezzar in mind. In the cases of Joseph and Esther, it is worth noting, their alleged efforts at reintegration involved deception: they were under deep cover, and we don’t take kindly to that sort of thing in America. But Vermeule should read his Scripture more closely. Esther, Mordecai, Joseph, and Daniel may have performed certain services for the Egyptian, Babylonian, and Persian states, but they were not placed in their positions by the God of the Hebrew Bible for the purpose of reforming those states, or of making them over so as to achieve a Jewish purpose. They were there to protect their family and their people from eventual hardships, from famine and discrimination and slaughter. That is all. Egypt remained Egyptian and Babylon remained Babylonian and Persia remained Persian. In any event Vermeule is not satisfied with the successes of the Jewish agents. “It is permissible to dream,” he writes, “however fitfully, that other models may one day become relevant” — Saint Cecilia, whose “martyrdom helped to spark the explosive growth of the early church,” and Saint Paul, who “preached the advent of a new order from within the very urban heart of the imperium.” Vermeule is talking about the American government. Who does he expect to persuade with this sectarian rapture? Madison never looked better.

    Vermeule has other bright ideas for his sacred subversion. By the grace of God, for example, the manipulative techniques of behavioral economics have been invented. “We have learned from behavioral economics that agents and administrative control over default rules may nudge whole populations in desirable directions.” Are you nudging with me, Jesus? Like all revolutionaries, even reactionary ones, Vermeule is opportunistic about his methods and teleological about his history: “The vast bureaucracy created by liberalism in pursuit of a mirage of depoliticized governance may, by the invisible hand of Providence, be turned to new ends, becoming the great instrument with which to restore a substantive politics of the good.” In this way, he and his gang will “find a strategic position from which to sear the liberal faith with hot irons.” An American Christian, a professor at Harvard Law School, wrote those words. I seem to recall from the history of Christian painting that hot irons were what Romans did to Christians, not what Christians did to Romans. It is not, in any event, what Americans do to Americans.

    Am I taking Vermeule’s grisly metaphor too seriously? I don’t think so. It is of a piece with the rhetorical violence of his other remarks about liberalism. For Vermeule, and for the other post-liberals, liberalism is not wrong, it is evil. It teaches depravity and tyranny. I confess that I find such an evaluation baffling; it defies what I know about history and what I perceive around me, though I have myself, from within the liberal camp, done my bit over the decades to challenge and to refute certain liberal dogmas. But here is Vermeule, warning that non-liberal or anti-liberal communities in America are in mortal danger, because they “must tremble indefinitely under the axe.” At least the axe sometimes comes with a tax exemption. And here he is, in an even greater panic: “even if the liberal state lacks the time, resources, or attention span to eliminate all competing subcommunities collectively and simultaneously, it may still be able to eliminate any competitor at will, taken individually and one by one.” This is, well, nuts. Vermeule has no reason to fear the jackboot of Nancy Pelosi in the middle of the night. But his extreme view of his position in contemporary America enables him to cast himself grandiosely. He is the lonely knight of the faith who has taken up the Cross to do battle with the Jeffersonian infidels.

    For Vermeule, liberalism is not merely a political ideology, or a political party with which he disagrees — it is nothing less than a religion, “a fighting evangelistic faith,” “a world religion” with “a soteriology, an eschatology, a clergy (or ‘clerisy’), and sacraments.” If he says so. Calling liberalism a religion no doubt makes the war against it feel more holy. Yet liberalism differs strikingly from religion, and even more strikingly from Catholic religion, in at least one fundamental way: it has a different principle of authority, intellectually and institutionally. This difference was established, prejudicially, by Cardinal Newman in an appendix to his autobiography — a text with which every honest liberal must make himself familiar. Liberalism, he propounded, is “the exercise of thought upon matters, in which, from the constitution of the mind, thought cannot be brought to any successful issue, and therefore is out of place… Liberalism is the mistake of subjecting to human judgment those revealed doctrines which are in their nature beyond and independent of it, and of claiming to determine on intrinsic grounds the truth and value of propositions which rest for their reception simply on the external authority of the Divine Word.” Simply!

    Premises, premises. I have always envied people who find too much reason in the world. My view of what hobbles the world is different. Whatever the limits of reason, we are a long way from reaching them. When rationalists seem to be acting imperialistically, they can be challenged rationally, on their own grounds, and a rational argument for humility or restraint can be made; but no argument can be made with anybody who dissociates reason from truth, who repudiates “intrinsic grounds,” who demands of authority that it be “external.” The integralist enemies of reason are Rortyans with chalices. I recall gratefully how I came by my own enthusiasm for reason: when I was a boy in yeshiva, we wondered, as we studied Genesis 1:26-27, which aspect of the human being was the one that demonstrated the divine image in which he and she were said to have been created — what attribute could we possibly have in common with God? We pored over commentators and discovered an answer: the mind. I have been grateful ever since for having a religious sanction for my critical thinking about religion. But He knew what He was doing, right?

    III

    The most ambitious, and the most ludicrous, attempt by the Catholic integralists to discredit the separation of church and state is a work of history. It is called Before Church and State: A Study of the Social Order in the Sacramental Kingdom of St. Louis IX, which appeared in 2017, and its author is Andrew Willard Jones, a theologian and historian at Franciscan University in Steubenville, Ohio. Jones is a learned man, at least about the Latin sources that are useful to his purpose, which is to paint a portrait of a time before the separation, a period in history when wholeness existed and everything went lustrously together, a “differentiated” but fundamentally seamless society that was unified, top to bottom, in its social and political order, by a pervasive belief in Christianity and in the unity of altar and throne. A “sacramental kingdom,” a golden age, a world we have lost. Jones has written five hundred pages about thirteenth-century France “to establish a vision of a social order very different from the liberal,” in which “integrated players” operated in “one field of action upon which both the spiritual and temporal functioned.”

    Jones contends that one of the consequences of the separation of church and state has been the widespread conviction that we live in two discrete realms, the sacred and the secular, and that too much medieval history has been written with this dualist error in mind. He adopts his theology as his methodology:
    I argue that thirteenth-century France was built as a ‘most Christian kingdom,’ a term that the papacy frequently used in reference to it. I do not mean that the kingdom of France was a State with a Christian ideology. I mean that it was Christian, fundamentally. There was no State lurking beneath the kingdom’s religious trappings. There was no State at all, but a Christian kingdom. In this kingdom, neither the ‘secular’ nor the ‘religious’ existed. I do not mean that the religious was everywhere and that the secular had not yet emerged from under it. I mean they did not exist at all.
    The objectives of this pre-separated and numinously integrated kingdom were utterly unlike “the assumptions of modern politics”: they were the “negotium pacis et fidei — the business of the peace and the faith.” In Jones’ account, those are the motives and the intentions of the medieval figures that he describes; they act not for the sake of interests or passions, but for the peace and the faith. “Society was organized around the notion of peace, a peace that was real, and not simply another name for submission.” Jones further asserts that this is how the people of thirteenth-century France understood themselves, and that we must therefore understand them “on their own terms,” because “their language is better at capturing who they were than ours is.” This Christian idyll was exemplified in the figure of the king, Louis IX, the perfect Christian monarch, who reigned from 1226 to 1270, and was canonized in 1297.

    These methodological cautions should not be mistaken for another warning against “presentism” in the writing of history or another exercise in the history of mentalities. Jones is writing triumphalist history, sacred history, in which the hand of God is revealed in the power of His representatives on earth. There is a tradition of such “historiography” in all the faiths. (In the Jewish case, sacred history had to be dissociated from triumphalist history, for reasons I will get to in a moment.) While this may be the way Christians do history, it is not the way historians do history. That is not because historians are the blinkered products of a secular age. Look again at Jones’ claim that the distinction between sacred and secular did not exist in thirteenth century France. What can this possibly mean? It is true that Louis’ France was officially and significantly a religious society, in which earth was universally believed to be subordinated to heaven. But anybody who has studied a religious society knows that the totality is never total. There are precincts of religiously untreated reality, unhallowed spots, everywhere. Consider only the culture of thirteenth-century France, its literature and its music. (Jones has no interest in such things.) It is full of — do you not like the word “secular”? — profane humane experience lived by mentally free men and women who carry desire and alienation and humor through a world in which everything is not clear. The religious polyphony of the medieval church, for example, the stupendous literature of the masses, found regular inspiration in the irreligious songs of ordinary folk. For secularity is not primarily a social or political category. It is a description of a constitutive trait of human existence: its creatureliness. We are dust and clay, even if not only dust and clay; we are animals, even if not like other animals; we live in time, even with visions of eternity; we decay. No “sacramental” interpretation or arrangement of our lives can nullify our intrinsic earthiness. It will never be fully incorporated or completely dissipated. “The secular” was not born on July 14, 1789, and neither was “the individual.” The magical kingdom for which Jones yearns never existed.

    For many of Louis’ subjects, moreover, there was nothing magical about it. For them, it was a realm of oppression and massacre. Jones’ treatment of the political history of Louis’ France is outrageous. He extenuates the Inquisition, which was established during Louis’ reign by Pope Gregory IX, and pokes fun at its “black legend,” which he dismisses as merely another instance of “the mental furniture of the enlightened mind.” He seeks to show that there was no significant difference between the inquisitors and the enquêteurs, or the itinerant magistrates whom the king dispatched throughout the land to settle local disputes, and that both the religious inquisitors and the civil judges were “dimensions of the same project,” which was “the business of the peace and the faith.” Jones’ apologetics continue: “The ecclesiastical and the secular [oops!] ‘inquisitions’ [were they not inquisitions?] were integral [abracadabra!] institutions within a complex social order that was rooted in a sacramental understanding of the cosmos that did not allow for the divorce of the spiritual from the temporal.” Similarly, in the Albigensian Crusade, another glory of thirteenth-century France, a genocidal campaign that exterminated Cathar belief by exterminating Cathar believers, “the business was directed against what was understood as a heretical and violent society in the south.” After all, “a heretic shattered the peace.” Well, yes. That is his or her role. Is it presentism — or worse, liberalism — to suggest that such an acquiescent and exculpatory tone will not do? There is not a trace of horror in Jones’ accounts of the crimes of his church. He is so busy making church and state disappear into each other that he is dead to the consequences of Constantinism for non-Christians. One way of describing a liberal democracy is as an order in which heresy is just another opinion. If this is what writing Christian history from the inside looks like, I invite the Christian historian to step outside.

    The Jews are mentioned six times in Jones’ history of the illiberal paradise in medieval France. They are all glancing references in which anti-Jewish ordinances are cited in passing. The subaltern status of Jews in the kingdom, and the atrocious ways in which it was enforced, are not themes over which Jones cares to linger. Here is what you will not learn from Before Church and State. The reign of Louis IX was a series of monarchically supervised catastrophes for the Jewish community of France. “The Jews, odious to God and men,” wrote one of the king’s biographers, “he detested so much that he was unable to look upon them.” It was Louis, according to his seneschal Jean de Joinville, the author of the most renowned biography, who declared that “no one who is not a very learned clerk should argue with [a Jew]. A layman, as soon as he hears the Christian faith maligned, should defend it only by the sword, with a good thrust in the belly as far as the sword will go.” Tant comme elle y peut entrer: the king was monstrous.

    Louis organized a devastating attack on the economic basis of Jewish life in his kingdom, which was moneylending. The throne announced that it would no longer enforce the collection of Jewish debts, and it reduced and even cancelled debts that Christians owed to Jews. Jews were forced to repay loans from Christians. New controls were imposed on Jewish loans and Jewish goods were confiscated. This was not only a campaign against usury, an economic policy, it was also a campaign against Jews, an ethnic policy — as, for example, in this royal edict: “The Jews must desist from usury, blasphemy, magic, and necromancy.” And alongside this royal campaign the baronial classes were permitted all manner of excess in the economic oppression of the Jews on their lands. Finally Rome was offended by this despoliation, and in a papal letter in 1233 Pope Gregory noted with disapproval that “some of the Jews, unable to pay what security was considered sufficient in their case, perished miserably, it is said, through hunger, thirst, and privation of prisons, and to the moment some are still held in chains.”

    After the sustained royal assault on the economic sustenance of the Jews came the sustained royal assault on their religious sustenance. In 1239, under the poisonous influence of a converted Jew, one in a long line of such apostates who turned virulently against the people that they left, Pope Gregory launched a campaign against the Talmud, which was (and still is) the legal and spiritual foundation of rabbinical Judaism. He asked the vengeful convert to collect the Talmudic materials that offended Christianity and sent them out, with an accompanying invitation to take action against them, to “our dear sons the Kings of France, England, Aragon, Navarre, Castille, Leon, and Portugal.” The only king who accepted his invitation was Louis IX of France. The books of the Jews were ordered to be expropriated by the beginning of Lent in 1240. They were seized as the Jews were in their synagogues. The prosecuting convert drew up a list of thirty-five charges, an inventory of Talmudic passages that allegedly slandered the Christian faith, and between June 25 and June 27, 1240 there occurred in Paris a public disputation, in which the Talmud was put on trial. There are two unofficial Latin protocols of the proceedings and one longer Hebrew account. The defenders of the Talmud included some of the giants of medieval Jewish learning. They lost, of course, and two years later, in June 1242, twenty-four wagonloads with thousands of Jewish books were publicly burned in Paris. It was a cultural disaster for French Jewry. Wrenching Hebrew poems of lamentation were composed about the holocaust of the books. And the king did not relent. In 1247 and 1248 Louis ordered further confiscations of Jewish books and then gave the campaign against the Talmud another royal endorsement in 1253. It continued until the end of his reign. In the year before he died, Louis sponsored the rabid conversionist efforts of Paul Christian, or Pablo Christiani, the converted Jew who had debated Nahmanides in the extraordinary disputation in Barcelona in 1263. He was given royal authority to “preach to the Jews the word of light and to compel the Jews to respond fully.” An anonymous Jew left this testimony of his efforts: “Know that each day we were over a thousand souls in the royal court or in the Dominican court, pelted with stones. Praise to our Creator, not one of us turned to the religion of vanity and lies.”

    There was still another way in which Louis IX distinguished himself in the history of anti-Semitism. He was a Crusading king, and twice went to the holy land to make war on its Muslims (he spent four years there in his first attempt and died in his second attempt); and he was so tolerant of the anti-Jewish atrocities committed by French Crusaders on their way east that in 1236 he was reprimanded by the Pope himself, who had heard reports that the Jews in France were living “as under a new Egyptian enslavement.” “Force the Crusaders to restore to the Jews all that has been stolen,” the pontiff scolded the king, “that you may prove yourself to be an exhibition of good works.” But this papal reprimand is not the distinction to which I refer. Like many rulers in the history of Christendom, Louis sought to segregate the Jews of his realm — to this end, for example, he forbade Christians from serving as nurses to Jewish children and as servants in Jewish homes. But then he went further: he ordered the Jews to wear a badge. The royal ordinance reads: “Since we wish that the Jews be distinguishable from Christians and be recognizable, we order you that, at the order of our dear brother in Christ, Paul Christian, of the Order of Preaching Brethren, you impose signs upon each and every Jew of both sexes — a circle of felt or yellow cloth, stitched upon the outer garment in front and in back. The diameter of the circle must be four fingers wide; its area must be the size of a palm.” Saint Louis!

    None of this is to be found in Jones’ hundreds of dense pages. Instead he has the temerity to write that in the sacramental order of Louis IX, where there was no separation of church and state, “society was organized around the notion of peace, a peace that was real, and not simply another name for submission.” By giving his book the title that he gave it, he clearly implies that life was better and less sinful back then, and that this story of the Middle Ages is in some way of allegorical utility to our unsacramental country. I note in fairness that Jones’ nostalgia has provoked mixed feelings among the integralists. Vermeule plainly states that “there can be no return to the integrated regime of the thirteenth century, whatever its attractions.” But it is a pragmatic objection: Christians must face the sorrowful fact that those “attractions” can no longer be theirs. They were born too late for the Capetian utopia. Of course none of the integralists care to acknowledge all the people for whom the kingdom was dystopian, or that the thirteenth century was not integrated, except in theory. Father Waldstein is even more stubborn, and speaks up for the politics of nostalgia. “I am not going to let myself be bullied out of my nostalgia,” he protests. “I reject the whole notion that nostalgia is something bad.” So do I, especially these days. But surely nostalgia for something bad is something bad.

    I assume that the general picture of the anti-Jewish vehemence of Louis IX is known not only to Jones, but also to some of the admirers of his book — it is not exactly a secret, even if the repulsive details are mainly the possessions of scholars. And so, given their silence on this matter, I assume also that they can live with it. It is an acceptable price for the sacramental kingdom. That is how teleological history operates. In one of his more revealing passages, Vermeule expresses his impatience with too much ethical fussing about his eschaton. “Of course it’s true — it’s obvious! — that there are versions of non-liberalism that are worse than liberalism. At a certain point, however, people can no longer abide perpetually living in fear of the worst-case scenario.” Vermeule would have us assess fascism probabilistically. He is right that worst-case scenario thinking is irritating and easily exploited. But there are situations involving questions of justice in which worst-case scenario thinking is also moral thinking. The moral worth of a society is not quantitatively determined, nor should its commitments to principle await an analysis of risk. Vermeule’s peculiar mixture of spirituality and social science lands him in a morally dubious place. Perpetually living in fear of fascism is precisely how we should be living, now and forever, and especially in an era of fascism’s return.

    Vermeule relates an anecdote about a colleague’s anxiety. “In a fully Catholic polity,” his friend asked him, “the sort you would like to bring about, what would happen to me, a Jew?” He condescendingly admires the “passionate concreteness” of the question, its affecting concern with “the fate of an individual, a people, and the shape that a polity might take.” For him it was a dialogically romantic moment. But there was nothing romantic about the moment for his friend. He was demanding to know if the realization of Vermeule’s political program would require him to pack his bags. Vermeule answered him with more condescension, and recorded his answer in a coy parenthesis: “(Nothing bad, I assured him.)” His parenthetical assurance is not good enough. He might just as well have winked. The roots of his quasi-theocratic ideal are rotten with Jew-hatred, and so are some of his intellectual and political allies, as he might trouble himself to notice on his next visit to Budapest or Warsaw.

    IV

    The most prominent policy of Catholic integralism, its putative contribution to the resolution of our crisis, is “common good constitutionalism,” whose primary author is Adrian Vermeule. The polemical energy of his religious writing disappears into a thicket of legal and philosophical abstractions in his legal writing, even though he prides himself on his aversion to theory, which he regards as another vice of the liberal elite. The idea is pretty simple: that American constitutional practice “should take as its starting point substantive moral principles that conduce to the common good, principles that officials (including, but by no means limited to, judges) should read into the majestic generalities and ambiguities of the written Constitution.” This is the same “substantive comprehensive theory of the good” with which he equipped his post-liberal infiltrators of American institutions. Common good constitutionalism is presented as an exciting alternative to other doctrines of constitutional interpretation, and also as a revival of the classical tradition in law.

    First, the good news. The grip of originalism upon the conservative legal mind has been loosened. Vermeule and his colleagues no longer wish to be trapped in the eighteenth century. They have recognized that the founders were themselves not originalists, and that they differed significantly among themselves, so that there were many views that could be treated as canonical, which is not helpful to scholars and judges who seek to locate a definitive old authority. While it would be an exaggeration to say that these conservatives have discovered a living Constitution, they do seem to represent a new conservative respect for contemporaneity. I guess it is easier to admit contemporaneity into your understanding if you are operating under the aspect of eternity. “For Catholic scholars in particular,” Vermeule observes, “it is simply inadmissible — inconsistent with the whole tradition — to imply that law has no objective content beyond the text and original understanding of particular positive laws, or that [law] is nothing more than the interpreter’s subjective and arbitrary desires.” Law must refer back to an objective source of legitimacy, to some abiding principle that cannot be reduced to the wishes and the partialities of any individual or group. This belief in objectivity, an outcropping of rock in a sea of perspectives, is commendable. There remains the question of what abiding principle Vermeule has in mind.

    Common good constitutionalism is a restoration of moral values to the heart of the legal enterprise. Vermeule takes pains to show that his doctrine is not simply a substitution of morality for law, or that “it reduces legal questions to all-things-considered moral decision-making from first principles.” The relationship between legal rules and “a higher source of law” is more complicated, he shows; and I believe him. These complications, the serpentine methods of interpretation and argument in our schools and our courts, are presumably what rescue the common gooders from arrant “judicial activism” and all the other conservative prohibitions that conservatives anyway violate regularly. But I see no way to deny that in the end Vermeule seeks to establish a meta-historical standard of ethical value as the ground of law — a remoralization of law. (As an antidote, not least, for the demoralization of law professors.) In fact, Vermeule admits to “a candid willingness to ‘legislate morality’ — indeed, a recognition that all legislation is necessarily founded on some substantive conception of morality, and that the promotion of morality is a core and legitimate function of authority.” Well said. I admit that I am not completely horrified by this. I believe in the grandeur of meaningful living with others and I support intellectual ambition in the courts. As far as I can tell, liberal jurisprudence of the last thirty years tried very hard to squeeze first principles, moral principles, as far out of the law as it could; this was called “judicial minimalism,” and by my lights it represented a collapse in scale and an abdication of responsibility. The law was shrunk and intimidated. The liberals unilaterally disarmed, leaving the impression that they stand only for proceduralism and rule-regulated behavior. The higher dimension of law was usurped by a fetishization of text and a reading of statutes that was designed to be as unrepercussive as possible. Of course not all, or even most, of the cases that come before judges require exercises in moral reasoning, but many of them do, and more generally I do not see how you can work in the field of justice without an ever-present moral sensibility, an intense awareness of the pertinence of values. No society can function without empiricism and no society can live for empiricism.

    That liberal diffidence was conceived as a retort to the moral interpretation of law (“there is inevitably a moral dimension to an action at law”) promulgated influentially by Ronald Dworkin, and Vermeule’s breakthrough is Dworkinism for the right. I mean methodologically, and in his larger kind of justification; but Dworkin’s view of liberalism, by contrast, must be anathema to Vermeule, who is another one of those communitarian preachers for whom liberalism is nothing more than a maniacal pursuit of “individualism” and the rest be damned. (The Antichrist is Mill.) Who, really, can be against the common good? But also, what is it? The integralists throw the phrase around like a talisman with healing powers. Its level of generality may be emotionally edifying but it is intellectually crippling. Before it can be critically assessed, it needs to be specified. After all, there are many versions of the common good, and they do not all go together. (There are philosophical contradictions that are not amenable to integration.)

    According to Danielou, “politics exists to secure the common good. An essential element of the common good is that man should be able to fulfill himself at all levels. The religious level cannot be excluded.” An ecumenical religious humanism; fine. Father Waldstein’s characterization of the common good starts out with a lovely thought: “A common good is distinguished from a private good by not being diminished when it is shared” — rather like what the old mystics said about the bounty of light. “For this reason,” he continues, “common goods are better than private goods.” Moving sedulously toward politics, he stipulates that “the primary intrinsic common good is peace.” Still lovely, but still general. And then, the descent into specificity: “the temporal common good is subordinate to the eternal common good, and the temporal rulers are subject to the hierarchy of the Church.” Until the Second Coming, that is, when everything will be celestially transfigured. It makes you miss the generalities.

    And Vermeule? He is made of harsher stuff. His “substantive moral principles that conduce to the common good” include “respect for the authority of rule and rulers; respect for the hierarchies needed for society to function; solidarity within and among families, social groups, workers’ unions, trade associations, and professions; appropriate subsidiarity, or respect for the legitimate roles of public bodies and associations at all levels of government; and a candid willingness to ‘legislate morality’….” It is only a list, and we all have our lists. Some of his list seems reasonable, but all of it awaits reasoning. Indeed, it has a catechismic quality. The problem is that it is impossible to read Vermeule’s constitutional proposals without recalling his religious certitudes. Is it revealing that he begins his list with a validation of political power? Even when he writes about America he refers to “the ruler”; but we do not have a ruler, we have a president. (Not surprisingly, Vermeule has espoused an almost authoritarian view of presidential power.) He endorses “soft paternalism” and remarks that “law is parental.” He asserts that the common good may be legislated by rulers “if necessary even against the subjects’ own perceptions of what is best for them” — an uncontroversial observation about political reality even in republics, but without a whiff of democratic deference and haunting in its evocation of Christian rulers of the past.

    Vermeule has no special place in his heart for democracy, which he views mainly as an instrument of liberalism: what matters for him is that the common good, whatever it is, be achieved, and this can be done in a variety of “forms of constitutional ordering centered around robust executive government” depending on “socio-economic conditions.” To attain his goals, “questions of institutional design are not settled a priori.” This may account for his acceptance of illiberal democracies and their authoritarian leaders. The road to heaven is paved with bad intentions. He writes chillingly that whereas the liberal virtues of “civility, tolerance, and their ilk are bad masters and tyrannous when made into idols,” they may be useful “when rightly placed within a larger ordering to good substantive ends” — to wit, “civility and tolerance may be cryptic terms with which to measure the substantive bounds of the views and conduct that will be permitted in a rightly ordered society, but such a society will also value charity, forbearance, and prudence.” Is it a liberal blindness to suggest that charity is not an adequate substitute for social policy, and that nobody who has ever experienced a violation of his rights and an abrogation of his freedoms should suffice with forbearance? Vermeule’s imagination of power is too happy to include its abuses. And so he cheerfully announces that “the claim [in Planned Parenthood v. Casey] that each individual may ‘define his own concept of existence, of meaning, of the universe and of the mystery of human life’ should be not only rejected but stamped as abominable” — stamped by whom? And also that “the state will enjoy authority to curb the social and economic pretensions of the urban-gentry liberals who so often place their own satisfactions (financial and sexual) and the good of their class or social milieu above the common good.” Why not shoot them?

    Enough. I would not trust this man with the Constitution of the United States. In Vermeule’s common good constitutionalism I do not see much common and I do not see much good. There is something pathetic about faith that seeks the validation of power, that needs to dominate a state to prove its truth. Such a faith is too easily rattled. It has forsaken the still small voice. Why is community not enough for the Christianists? Why must they have society? They will have to learn the art of absolutes without absolutisms. To my Christian friends, I say: neither Benedict nor Louis, please. I say also that America was not designed for integralism, because it was founded on the wisest intuition in modern political history — that conflict is an ineradicable characteristic of human existence, that a perfectly harmonious state of affairs is a sign of freedom’s waning, that unanimity is the program of despots, that social consensus is not the condition of social peace. This is even more sharply so in a religiously and ethnically heterogeneous society, in which commonality cannot be complete but only sufficient to the purposes of a fair and decent polity, and differences may overlap but never coincide. The overlap is where the common good, whatever it is, may be found. The overlap is where democracy flourishes. We will never rid ourselves of the tensions of our complexity, and we should be alarmed if we did. Studying the Catholic integralists, I looked back fondly to the days of John Courtney Murray, S.J. and the American integrity of his non-integralism, his profound wrestlings with the religious realities of an open society, his theology of the tensions.

    There is nothing that this country needs more than a common good. In the name of liberalism, and more often of progressivism that is mistaken for liberalism, the American commonality has likewise been severely damaged. The intolerance of the godless is fully the match for the intolerance of the godful. Progressives are attempting to regulate thought and speech and behavior as if they were integralists. What unites all the varieties of contemporary American integralism is that freedom is not what moves them the most. The Christianists have nothing of interest to say about it, and neither do the secular enforcers on the other side. Yet it is this, the inalienable freedom of the mind in matters of belief, its immunity to compulsion, that will eventually defeat them all, as it defeated Josiah. His reformation failed. The people deceived him. The midrash tells that the king sent out pairs of students to survey the success of his campaign against the idols. They could not find any idols in the houses that they inspected, and the king was satisfied with their report. What they did not know was that the dwellers had painted half an idolatrous image on each of the doors to their homes, so that when they closed them upon the departure of the thought police they beheld their forbidden images. The fools, we must learn to respect them.

    On Not Hating the Body

    Mr Leopold Bloom ate with relish the inner organs of beasts and fowls. He liked thick giblet soup, nutty gizzards, a stuffed roast heart, liverslices fried with crustcrumbs, fried hencods’ roes. Most of all he liked grilled mutton kidneys which gave to his palate a fine tang of faintly scented urine.

    …The cat walked stiffly round a leg of the table with tail on high.

    —Mkgnao!

    … Mr Bloom watched curiously, kindly, the lithe black form.

                        JAMES JOYCE, ULYSSES

    Human beings, unlike all the other animals, hate animal bodies, especially their own. Not all human beings, not all the time. Leopold Bloom, pleased by the taste of urine, and, later, by the smell of his own shit rising up to his nostrils in the outhouse (“He read on, pleased by his own rising smell”), is a rare and significant exception, to whom I shall return. But most people’s daily lives are dominated by arts of concealing embodiment and its signs. The first of those disguises is, of course, clothing. But also deodorant, mouthwash, nose-hair clipping, waxing, perfume, dieting, cosmetic surgery — the list goes on and on. In 1732, in his poem “The Lady’s Dressing Room,” Jonathan Swift imagines a lover who believes his beloved to be some sort of angelic sprite, above mere bodily things. Now he is allowed into her empty boudoir. There he discovers all sorts of disgusting remnants: sweaty laundry; combs containing “A paste of composition rare, Sweat, dandruff, powder, lead, and hair”; a basin containing “the scrapings of her teeth and gums”; towels soiled with dirt, sweat, and earwax; snotty handkerchiefs; stockings exuding the perfume of “stinking toes”; tweezers to remove chin-hairs; and at last underwear bearing the unmistakable marks and smells of excrement. “Disgusted Strephon stole away/Repeating in his amorous fits,/Oh! Celia, Celia, Celia shits!”

    And, to continue with Swift, there was poor Gulliver. The beautiful and clean horse-like Houyhnhnms believe that Gulliver’s clothes are him, and that he is as clean as they are — until they realize one day that the clothes come off, and beneath them he is just another smelly Yahoo. Returning to his home, Gulliver is henceforth unable to tolerate the physical presence of his wife and family.

    How crazy this seems, when presented in fiction. One cannot imagine sensible elephants or horses or dolphins shunning others of their kind on discovering that they are, respectively, elephants, horses, and dolphins, with the bodies appropriate for each. And yet — although Swift is an extreme case — this disgust with the body, this anti-corporeal campaign, is a part of the daily lives of most of us, and it is deeply embedded in Western culture and intellectual life. (Not only Western, but that is what I know something about.)

    Consider the elaborate flight stratagem of Western metaphysics, where body-hatred reigned supreme (though not uncontested) for about two millennia. One might have thought that the obvious theoretical position was in the vicinity of Aristotle’s: we are animate bodies, and the soul is the living organization of our matter. And yet what amazing contortions others, and even Aristotle himself, have gone through to deny the idea that we are essentially enmattered.

    To hate the body, it helps to imagine its opposite. It turns out that the incorporeal was a concept that took a very long time to be invented. Homer says that Achilles’ anger “cast many strong souls into Hades, and left the men themselves to be prey for dogs and a feast to birds.” So the body is the person; and even the psuchê is clearly something physical, albeit insubstantial and needing to drink blood in Hades in order to regain its wits. So when did Western philosophy come up with the idea that there is something about us that is totally incorporeal, and that we essentially are that immaterial super-something?

    I pause to celebrate a paradigm of classical scholarship, Robert Renehan’s essay, from 1980, “On the Greek Origins of the Concepts Incorporeality and Immateriality.” Renehan begins by observing that more or less all previous scholars take the concept of incorporeality as obvious and therefore assume that the Greeks found it obvious, too. They therefore retroject it into texts where it does not exist. With painstaking and often withering scrutiny Renehan rebuts them, finding no solid evidence of the concept of the incorporeal — until we get to Plato. (Asomatos, the word often used later on for the incorporeal, could mean, even as late as Aristotle, simply “less dense.”) “For almost two thousand years,” he concludes, “the concepts of incorporeality and immateriality were central in much Western philosophical and theological speculation on such problems as the nature of God, Soul, Intellect. When all is said and done, it must be recognized that one man was responsible for the creation of an ontology which culminates in incorporeal Being as the truest and highest reality. That man was Plato.”

    Plato, he argues convincingly, discovered the idea. Discovered, he says; not invented. For the odd thing about this marvelous article is that Renehan himself, educated at Boston College, a Jesuit university, and on the faculty there for many years, is so convinced of the idea’s obviousness that the way he puts his question is, What barriers were there in the Greek mind that prevented the Greeks from attaining, for so many centuries, an obvious metaphysical idea that must be the hallmark of any high culture? He, too, shares the preference for incorporeality. I remember reading this article for the first time while waiting for my daughter at a children’s dance class and thinking: it certainly is not obvious to me. (One reason that women have been so persistently ranked beneath men is surely their failure, as they occupy themselves with childbirth and body care, to attain the obvious idea.)

    For Plato, the incorporeal was intelligent, lofty, lovable, and pure; and the body was stupid, base, disgusting, and impure, a prison for the soul. For many centuries, Platonism ruled the Western world, and in some ways it still does. Even Aristotle was Plato’s pupil in this regard: he made a secure place for the incorporeal, since intellect, alone of our capacities, he said Platonically, has no bodily realization.

    And yet the anti-body metaphysicians were never willing to give up on bodies altogether. Even though Renehan finds the idea of incorporeality so excellent that it is the mark of any truly advanced thought, his Catholic forebears, lining up with Aristotle, did not feel totally satisfied by what this idea could accomplish for them as they tried to explain the world. Bodies seemed so useful for getting around and doing things. Aquinas, Aristotelian at his core, concluded that disembodied souls, necessarily lacking the (bodily) faculty of imaginative perception, or phantasia, would have only a “confused cognition” — until the resurrection of the body restored their wits.

    And what should we make of the bizarre metaphysical idea of bodily resurrection, which apparently sensible Christians (and traditional Jews too) have firmly held for centuries, fully literally, and which gave rise to countless convoluted philosophical debates about whether corpses decay while waiting around for that glorious day? And, since they surely must decay, how exactly would they be reassembled at the right time? And if they decay and smell, as they inexorably do, does that mean that human life is of no worth? (This was Alyosha Karamazov’s problem.)

    And speaking of Dostoyevsky, I cannot mention bodily resurrection without making mention of Nikolai Fyodorov (1829-1903), the Russian Orthodox Christian philosopher and friend of both Dostoyevsky and Tolstoy. Fyodorov insisted that it is our central duty, as human beings, our Common Cause, to promote the bodily resurrection of everyone who has ever lived, since only this would guarantee eternal life for all. He had complicated ideas about how this might be arranged scientifically through a process akin to cloning; but he also realized that this would cause the world a huge population problem. He therefore spoke ambitiously about space travel, and is thought to have been a major influence, indirectly, on the genesis of the Soviet space program.

    Back, however, to Greco-Roman antiquity: everything in the history of philosophy is always so much more complicated than a brief summary makes one think. Even Plato himself wavered in his hostility to the body, as the Phaedrus shows — ascribing intense bodily delight to the best people, although they conceded to body-hatred by forgoing sexual intercourse. And while Platonic ideas remained in many ways dominant, Epicureans and Stoics rebelled against Platonism more radically than Aquinas did later, insisting that all real things are matter (or matter and the void, in the case of Epicurus). Epicurus rudely dismissed Plato: “I have spat upon the Beautiful, and all those who gaze on it in an empty fashion.” Stoics, more refined, differed politely, and managed to exert considerable influence over the development of Christian thought. Christian thinkers even appropriated the arcane Stoic ideas of material interpenetration and total mixture to explain how Christ’s divine and human natures were unified. Much later Milton used the same idea to explain how angels had sex: by total interpenetration with one another. In their sexual unions, so unlike us, “Obstacle find none/of membrane, joint, or limb, exclusive bars;/ Easier than air with air, if Spirits embrace/Total they mix.” Make no mistake, this delightful daydream is an anti-Platonist physicalist fantasy, with no place for the incorporeal, at least in the angelic realm.

    Christian ideas are enormously varied. Some schools, such as the Aristotelianism of Aquinas, think of the body as an avenue to the spirit, and do not repudiate it utterly, though they think it must be transcended. (The dogma of the Incarnation always supplies pressure against total repudiation.) Other, harsher thinkers hold that it is simply a Platonic prison and must be utterly rejected. Even in the former group, however, there lurks a tendency to disgust and body-hatred, which often surfaces in a vile misogyny. After all, it was a Jesuit, trained in medieval Aristotelian scholasticism, who delivered to Stephen Dedalus and the rest of his audience of adolescent boys what may be the most chilling sermon on the evils of the body that has ever been given, and from which hardly anyone who took its ideas seriously could ever fully recover to lead a healthy sexual life.

    By now, I think, most people, Christian or not, do not make incorporeality a part of their daily conceptual lives. To judge from the huge popularity of near-death experiences as putative evidence of an afterlife, the typical believer’s afterlife is highly physical, characterized by very intense sights, sounds, and emotions — culminating, it is hoped, in a joyful physical reunion with loved ones. And yet, even if in our secular and materialist culture the prestige of the incorporeal has died, body-hatred lives on. We are all closet Platonists, as recent research about disgust reveals.

    For a very long time, the topic of disgust was thought to be so base, so unworthy of science, that no scientific research was done on it. This all changed, beginning in the 1990s, with the pathbreaking work of Paul Rozin, with various collaborators. His experiments were so imaginative and probing that they established conclusively that disgust is not just a bodily reflex, but also has a complicated cognitive content. The key thought in disgust is a shrinking from contamination, and the key contaminants are what he calls “animal reminders” — oozy sticky things that resemble or actually are elements of our own animality.

    Rozin’s term is not perfect, because animals have many traits, such as beauty, skill, strength, and speed, which humans do not find disgusting, nor do we find those aspects of ourselves disgusting, even when we notice that in these ways we resemble other animals. The characteristic triggers of disgust are the Swiftean properties: bad smell, sliminess, decay. And the objects that typically disgust people are the Swiftean signs of mortal embodiment: feces, urine, sweat, menstrual fluids, snot, and of course the corpse. These are the things that Platonists most abhor. They are signs, all, of mortality and decay. Most people have at least some self-disgust, and endeavor mightily to conceal or temporarily to remove those aspects of themselves — in vain, as Celia was no doubt to learn in short order.

    Disgust is learned: it is not present in infants until the time of toilet training. But it is ubiquitous, and very likely it has to some extent an evolutionary utility. This is the first level of disgust, what I call primary object disgust. In and of itself this revulsion already does harm, because self-hatred always does harm, and it is even worse when it leads to a recoil from close contact with others.

    But there is worse to come. In all known societies, with or without Plato, there is a second level, what I call projective disgust, in which properties of disgust are projected onto a social group that is stereotyped as the animal in opposition to the dominant group’s pure soulness. They are said to be dirty, to smell bad. One must not share water or food with them, or, heaven forbid, have sex with them — or at least not without punishing them for it afterwards. Sometimes the subordinated group is a racial minority, sometimes a “deviant” sexual group, sometimes people with disabilities, sometimes aging people, sometimes just women, who always seem to represent the body to aspiring males by contrast to the intellect and the spirit. I once directed a cross-cultural project with a group of scholars in India, and while we found fascinating nuances of difference (between, for example, the role of disgust in the caste hierarchy and its role in racism against African-Americans), the underlying reality was basically the same. An elite constructs a group that constitutes its surrogate body, in order to keep spirit all to itself, and the surrogate body is regarded as repugnant and punished severely for being an “animal reminder” to the creatures of spirit.

    Projective disgust is ubiquitous, but it is socially transmitted, and it can be resisted.

    I propose five reasons to resist the hatred of the body.

    It is the only thing there is. It is real, and the immaterial is, well, immaterial. As Whitman said, insofar as there are souls, the body is the soul.

    To say this one need not be a reductive materialist. One may still insist, as have many philosophers from Aristotle to Hilary Putnam, that the most elegant, simple, and predictively valuable explanations operate at the level of form and structure rather than at the level of ultimate material composition. Still, the forms are always somehow enmattered, and the fact of material embodiment is critical to their functional capacity.

    You cannot find your way around without one. Aquinas’ point about the separated souls can be extended to life in general. The body is our link with the world. For this reason, imagining yourself as essentially an incorporeal soul is imagining yourself as impotent.

    There are plenty of philosophical theories of how an incorporeal soul might direct the body — some of them deliberately crude (which Gilbert Ryle famously mocked as “the ghost in the machine”), some seemingly sophisticated (Descartes’s two essences communicating through the pineal gland). But if the sophisticated ones were ever convincing, they are utterly unconvincing today.

    Insofar as evil and baseness exist, they come not from the body (as dualists imagine), but from the soul. Who is in Dante’s Hell? Not bodies, peacefully decaying, but souls. It is souls that have evil intentions, that betray, that commit murder and torture and genocide. Kant, speaking about his idea of “radical evil” — evil that antecedes all cultural teaching — quickly observed that of course it does not come from the body but from the will.

    Bodies are the seats of beauty and delight. Even the chaste Aquinas defines beauty as necessarily bodily. Pulchrum est quod visum placet: the beautiful is that which, being seen, pleases. So the beautiful is in its essence bodily and it is apprehended by the body. Aquinas singles out sight, sometimes thought to be less bodily than hearing, and smell, and, of course, touch. But even sight has color and shape as its proper objects. If we turn to musical beauty, we will need, with Schopenhauer, to speak of the ways bodies move, reach, strive — musical sounds representing, he thought, the force of erotic striving within us. And why should we pretend to be above the beauties of smell — the manifold scents of a restaurant, for example, so long denied us during covid, or even the scent of a nearby human body? Given that the loss of smell has been a common symptom of covid, our era has given rise to fine essays appreciating this sense.

    And touch, though rarely discussed by philosophers, is far from the least in the canon of beautifuls. Back to Milton: he does not even try to make his angels bodiless, or to deprive them of sexual pleasure. All attempts to represent the beauty of the incorporeal quickly say, “Well, it is indescribable” — which is why literary works about the incorporeal, even Dante’s Paradiso, have a hard time holding many readers. The beauty that we understand and are drawn to is enmattered beauty.

     

    Hating bodies is a form of self-hatred and leads to hatred of others, human and (non-human) animal. Hating what you yourself are is already pointless and makes for unhappiness. But it is worse still when we know that projective disgust is almost certain to follow. Body-haters are bound to find some surrogate for the animal, the bodily, in themselves, whether it be a racial group, a gender or sexual group, or the aging, who come in for a tremendous amount of body-hatred all over the world.

    One particularly significant reason to avoid the projective form of body-hatred is the way it has distorted and poisoned our relationship to the other animals. When humans imagine themselves as essentially immaterial, and therefore “above” the animal (whatever that means), it is no surprise if they neglect the profound kinship that human animals have with other animals. And so it has happened. The other animals are thought of as base and disgusting, and the imputation that we have evolved from animal origins meets with inflamed resistance. Our public debates about teaching evolution in the schools — and whether some other fictional non-theory (creationism, intelligent design) may also be taught as an alternative — are often accompanied with expressions of disgusted incredulity that we wonderful humans could really have apes for ancestors.

    With the fiction of the incorporeal driving a wedge between us and all other animal species, we can all the more nonchalantly treat them as if they were nothing. Since I think our torture and exploitation of other animals is a great moral evil, I would like to point out that things would almost certainly not have reached the present stage of cruelty and neglect but for our lies about who we are — our erroneous view that we are not their fellows and family members, but some spiritual stuff floating around somewhere, in or with a body but essentially not of it.

    However. However. One big reason to despise the body remains: it is mortal and vulnerable, it is the very seat of our mortality. All the other things that disgust us are not so much “animal reminders” as “pain-and-death reminders.” What is found ugly and disgusting is, first, pain; and, second, death and decay, and whatever reminds us of them. The fiction of the incorporeal is above all a fiction of (painless) immortality. Socrates’s friends surround him in prison, mourning his imminent demise. You are mistaken, he says cheerfully. The real me will not die, because it is not bodily at all, but an incorporeal substance merely trapped in the body. The students cheer up — and those who do not, including Socrates’s wife, are made to leave the room.

    Let us begin with pain. Pain, clearly, is both good and bad. It is a necessary part of our self-preserving equipment, a warning signal of potential harm — as anyone who bites her tongue after Novocain at the dentist knows all too well, and as the rare people who are able to survive without any pain-responses know with fear, and with a doomed longing for the useful pain they do not feel. In athletic training, pain is typically a sign of progress. In childbirth, pain is very intense, even at times terrible, but it is also a sign of something wonderful in the offing. Yet pain can be too much, unendurable, debilitating, and dehumanizing. Thus we have reason to feel a grievance against the body for giving us that type of useless pain, along with the useful signaling type. And we certainly have reasons to palliate the awful type of pain, although not to do so by totally removing the body’s entire pain mechanism, rendering the person defenseless. Pain, then, is a mixed blessing, but on balance it is not a reason to hate the body.

    Not so death. There is nothing good about death (apart from the fact that it may in some cases be the only relief from unbearable and unquenchable pain). The Platonic fiction shields its believers from the ugly, incomprehensible, but perfectly obvious fact that this loved person is now this corpse, decaying before your eyes. Buying into the fiction of incorporeal immortality is contrary to truth and reason, and yet it shields people from a reality so horrible that one is sorely tempted to give truth a pass. And yet the fiction has as its consequence the body-hatred that I have been deploring and the disgust behavior I have been describing. Lucretius was on the right track when he traced some of the worst in human behavior to the fear of death and the avoidance of self and truth that go with it. In short: the solace of Platonism is purchased at a large cost. Is there some less evasive and less contorted way to face our end?

    The fear and the hatred of death, I contend, are fully rational. Life, one’s own and that of others, is tremendous and wonderful. And the love of life, including the fear of its loss, incentivizes much good behavior: medical research, other efforts to stave off disease and ill health, prudent daily health behavior, and care for the bodies of others. As Rousseau shrewdly saw, the awareness of death, and its badness, can even encourage in fearful humans a type of egalitarian compassion, an embrace of a common humanity that transcends class and wealth and even religion, bringing people together.

    So we should not try to rid ourselves of the fear of death. But it is hard to bear, and it gnaws into us, prompting stratagems of flight. It can lead to body-disgust even in those who are not tempted to embrace the Platonic fiction. Is there, then, a way out of the disgust trap? Platonism removes the fear, but it leads to disgust. And keeping the fear also leads to disgust, or so it seems.

    In our pandemic time we see both tendencies very clearly. The constant awareness of death has made most of us atypically fearful, or at least atypically aware of a fear that often lurks underground in our minds. And we do see much evidence of compassion: embracing a common danger has in many instances brought out the most altruistic and humane in people. At the same time fear has also brought out the worst in disgust-pathologies: racist denigration of Asians as if they were the source of the virus; racism directed at immigrants, comparing them to vermin (a common disgust trope); sheer hatred of other people’s bodies on airplanes, leading to sometimes violent aggression and to more or less constant rudeness. Is there no way, as humans, that we can keep the awareness of death before us, not fleeing into the Platonic beyond while still avoiding the descent into the maelstrom of disgust?

    Let us consider other highly intelligent animals. Elephants fear death, and seek to avoid it for self and others, and even, as we now know, grieve the loss of loved ones with rituals of mourning. Mother elephants even sacrifice their lives to protect their young from speeding trains. That is how vividly they see death ahead of them, and how bad they think it is. But they stop short of body-hatred. They do not adopt a distorted attitude to their potentially crumbling frames that leads to projective aggression against other groups of elephants.

    Do not say, please, that it is because they are less aware. We are finding out more all the time about their communication systems, their social organization, their capacious and nuanced awareness. But we do not find disgust. That pathology appears to be ours alone. In her beautiful memoir, Coming of Age With Elephants, Joyce Poole, one of our greatest elephant researchers, describes the way in which her human community impeded her “coming of age” as a fulfilled woman and mother. The researcher group was highly misogynistic and racist. They deliberately broke up her happy romance with an African man. When she was raped by a stranger, they treated her as soiled and did nothing to deal with her trauma. In elephant society, by contrast, she observed better paradigms of inclusive friendship, of compassionate and cooperative group care. The memoir ends when she returns to the elephant group after a two-year absence, carrying her infant child in her arms. The matriarchal herd not only recognize her, they understand her new happiness. And they greet her with the ceremony of trumpeting and defecating by which elephants greet the birth of a new elephant child. No body-hatred, no disgust, no projective subordinations.

    Are we humans, by contrast, doomed to some type of body-hatred, particularly as we age? There are many reasons to think so. The hatred of aging human bodies by younger humans, so common in American culture, is already a form of self-avoidance, of denial that this is every person’s own future. And as we begin to get there, a trip to the doctor can produce not just ordinary anxiety but a disgust with the whole business of bodies. In the early days of the feminist movement, the book Our Bodies, Ourselves proclaimed women’s proud independence of body-hatred. We will not be told by society that women’s body parts and their fluids are disgusting. We will not be tutored into that self-loathing idea. We will learn to celebrate our fluids, to contemplate them with a speculum, to get to know our female insides. We will learn to give birth without anesthesia, as ourselves, rather than allowing our child to be extracted from us in an unfeeling state by an impatient doctor.

    I was of that generation, and I believed in its revolutionary attitude of body-love, a curious and happy acceptance of ourselves and the stuffs we are made of. But now I note that as the same women age, and men too, they hate going in for a colonoscopy. Most simply do not want to become acquainted with what Whitman called the “thin red jellies within you and me.” The closer we are to death, the less we want to see those jellies, even on a screen. (It is actually exhilarating to make the acquaintance of oneself in such an exam, refusing sedation, and I heartily recommend it.) The attitude recommended by Our Bodies, Ourselves was not far from the mentality, I imagine, of elephants: the body is us and ours, and what could be more normal than to accept it, live in it, embrace it, refusing even to conceive the idea of the disgusting?

    For humans, however, elephantine acceptance is volatile and intermittent, and it is so easy to slip, succumbing to disgust’s many lures.

    There are some people, probably many, who do not succumb, who accept and care for bodies (their own and those of others) without a flight into nowhere, and without disgust-pathology. What can we learn from them? I began this essay with James Joyce’s Leopold Bloom, and I have long thought that he is a helpful model to think with, as we reckon with ourselves. A fictional character is useful for such an exercise, because we are told his thoughts and can follow them. Our task, it seems to me, is to avoid disgust (and its malign consequences) while disliking and even fearing death. The way Bloom manages this delicate maneuver gives us a detailed paradigm of a balanced and generous humanity.

    From his first appearance in the novel, in the passage I quoted as my epigraph, Bloom approaches smell and taste with zest and without disgust at organ meats, with their tang of urine. At the same time he loves non-human animals, and throughout the novel he is always kind and compassionate to them. His ensuing trip to the outhouse reveals a matter-of-fact and even pleasurable relationship to his own excrement and its smell; it is one of the many pleasures of his long day, as he first reads a romance story in the newspaper and then wipes himself with the same paper.

    Notice that Bloom is always clean. His refusal of disgust does not lead him to soil himself. He loves his bath and the odor of soap, and even appropriates the language of the Mass to refer to his normal pleasure in his bathing body: “Enjoy a bath now: clean trough of water, cool enamel, the gentle tepid stream. This is my body.” He also cares for his clothing, protecting his trousers from the mildewed and crumb-encrusted seat of the coach on which he rides with others to Dignam’s funeral. You might think that freedom from body-hatred would go with being dirty and smelly, but of course this is not the case in the animal realm. Animals groom and clean themselves constantly, in a matter-of-fact and undisgusted way. So, too, in Dublin: the smelly ones in the novel are the Dedalus family and other impoverished Dubliners, who, hating their own bodies, fail to take care of them. And they constantly project their self-disgust onto others, particularly Jews. In Barney Kiernan’s bar, the aggressive Irish nationalist accuses Bloom of being part of a group that has been “coming over to Ireland and filling the country with bugs” — a classic anti-Semitic disgust trope.

    During his visit to Dignam’s funeral, Bloom meditates about corpses and the inevitability of death, as his thoughts stray to his father’s suicide and, later, to his mother’s death and his little son’s death in infancy. He thinks about how all the people around him are dying one at a time, “dropping into a hole one after the other.” And the graveyard reminds him of his own mortality: “Enough of this place. Brings you a bit nearer every time.” This is one point in the novel where he might become disgusted, and he does think about the smell of corpses, filled up with gas. But he never gets drawn into revulsion, and especially not into projective disgust. What features of his approach to life block these baneful tendencies?

    Certainly Bloom thinks that death is very sad, and to be feared. He does not take the path of Stoic apatheia, denying that any of these attachments and losses matters. But three features of his personality keep him decent and compassionate amid the invitations to disgust all around him. The first is science. Surrounded by all the otherworldly language of the Mass, Bloom nonetheless cannot help thinking in scientific terms and asking worldly questions. A thought of corpse fluids leads to a question about when, and how completely, circulation stops at death: would blood still drip out? A thought of bad gas leads directly to curiosity about how gases make corpses look puffy. He thinks with sadness of how the heart is the seat of emotions — and then reflects that it is also a physical pump that one day stops. In the cemetery he starts to ponder whether corpses buried standing up would come up above the earth at some point, and whether the blood coming out of them “gives new life.” The rat and the flies make him ask what a corpse actually tastes like to a fly. Scientific curiosity keeps him from Platonic fantasy and gives him something intriguing and real to occupy his mind.

    Bloom’s second strategy, or more precisely, a perpetual reflex of his mind, is to ask how another person experiences the world. At every threat point, his mobile emotions simply look at the world through other eyes. In the church, although feeling isolated during the recitation of the liturgy, he asks himself how the participants react: “Makes them feel more important to be prayed over in Latin.” He imagines the boring life of the server, who has to shake holy water “over all the corpses they trot up.” In the graveyard he considers the kindly caretaker and what sort of life he must have in that gloomy occupation. He thinks of Dignam’s son, recognizing a kindred sorrow: “Poor boy! Was he there when the father?” Instead of recoiling from the rat and the flies in disgust, he asks sympathetically what the lives of those creatures feel like to them. Molly is familiar with this capacity: “yes that was why I liked him because I saw he understood or felt what a woman is.”

    But empathy by itself is morally neutral: a torturer can use empathy to inflict maximum pain and humiliation on the victim. So we must add that Bloom’s empathy is combined with kindliness. He basically wishes well to those whose perspective he assumes, even rodents and insects.

    And he has one more method for banishing disgust: humor. There are many types of humor in Ulysses. Some of these are in league with disgust. Bloom’s humor, here and elsewhere, is itself kindly, fantastical, leavening life with a sense of the incongruous, prominently including wordplay. This may be the funniest account of a graveyard in English literature, and without the dark side of the fifth act of Hamlet. It skewers pompous solemnity in a way that brings relief. Here is a quintessential example:

    Mr Kernan said with solemnity:

    — I am the resurrection and the life. That touches a man’s inmost heart.

    — It does, Mr Bloom said.

    Your heart perhaps but what price the fellow in the six feet by two with his toes to the daisies? No touching that. Seat of the affections. Broken heart. A pump after all, pumping thousands of gallons of blood every day. One fine day it gets bunged up and there you are. Lots of them lying around here: lungs, hearts, livers. Old rusty pumps: damn the thing else. The resurrection and the life. Once you are dead you are dead. That last day idea. Knocking them all up out of their graves. Come forth, Lazarus! And he came fifth and lost the job. Get up! Last day! Then every fellow mousing around for his liver and his lights and the rest of his traps. Find damn all of himself that morning. Pennyweight of powder in a skull. Twelve grammes one pennyweight. Troy measure.

    This passage brings together all of Bloom’s strategies for the avoidance of disgust. He addresses religious fantasy — the classic Christian idea of bodily resurrection — first, with scientific realism, then with joking wordplay (“come forth” and “came fifth”), and finally with empathy for those sad souls on that supposedly glorious day. It should be superfluous to mention that all of Bloom’s mental devices for deflecting disgust while retaining love, and grief, and moderate fear, are also those of Joyce himself in his construction of the novel. We might do worse than to follow his generous invitation.

    With humor, with science, with kindness, let us resist the ignoble and damaging project of disgust. It is no good for us, and it makes the world a lot worse for others. We must not be repugnant to ourselves for our physical being, and other people must not be repugnant to us. As Whitman insists, after establishing that the body is the soul,

    To be surrounded by beautiful, curious, breathing, laughing flesh is enough, …

    There is something in staying close to men and women and looking on them, and in the contact and odor of them, that pleases the soul well,

    All things please the soul, but these please the soul well.

    In a time of plague, and during our society’s current cautious reawakening, if that is really what is happening, can we doubt this?

    Numbers and Humanity

    In the final weeks of World War I, Oswald Spengler published Der Untergang des Abendlandes, tamely translated as The Decline of the West. Its almost a thousand pages of turgid Teutonic prose swept over mangled Europe like a tidal wave, becoming the still-young century’s best-seller. (A second volume was published in 1922, to less rapturous attention.) It offered a diagnosis to a world convulsed in mass-produced death, an explanation of the “last spiritual crisis that will involve all Europe and America.” According to Spengler, the essence of modern civilization — its Faustian soul, he called it — was a type of mathematics that was created in the seventeenth century by Descartes, Galileo, Leibniz, and Pascal. That mathematics had proven powerful but also lethal, for “formulas and laws spread rigidity over the face of nature, numbers make dead.” Now the West and its mathematics, “having exhausted every inward possibility and fulfilled its destiny,” were dying together.

    Never mind that Spengler’s claim about the death of mathematics was incorrect. From Mussolini to Thomas Mann, everybody who was anybody claimed to have read the book. Plenty of people disagreed with the analysis. In 1920, on the brink of winning the Nobel Prize, Albert Einstein wrote to the mathematical physicist Max Born: “Sometimes in the evening one likes to entertain one of his propositions, and in the morning smiles about it.” He attributed Spengler’s “whole monomania” to his “school-child mathematics.” But the most acute critics recognized that Spengler represented a powerful stream of the Zeitgeist that saw in mathematics, as the writer Robert Musil put it, “the source of an evil intelligence that while making man the lord of the earth has also made him the slave of his machines.” (Ulrich, the protagonist of Musil’s great novel The Man Without Qualities, which was set in 1913 and published in 1930, is a mathematician, and his creator was himself a mathematically well-trained PhD.) Even the assassinations of that turbulent age were to be understood in mathematical terms. When Friedrich Adler murdered the prime minister of Austria in 1916, he invoked Einstein’s mathematizations of the universe, which Adler interpreted as legitimating a shift of frames of reference from nation to class. The lawyers who pleaded in his defense, on the other hand, argued that the assassin was not in his right mind, because he suffered from “an excess of the mathematical.”

    Fast forward a century. We, too, live in an age in which the nature of knowledge is intensely political, and in which the powers of number are rapidly expanding. Mathematical forms of knowledge — computation, artificial intelligence, and machine learning, for example — touch many more aspects of the world than they did in the first half of the twentieth century, or indeed, in any previous period of this planet’s history. We stand on the threshold of new technologies — such as quantum computing — that promise to dwarf present powers of calculation. There is no realm of human life today exempted from quantification, a situation that one might think should constitute a crisis for our understanding of ourselves and our world. Yet very few people today would put the relationship of number and computation to other forms of knowledge anywhere near the top of the list of pressing questions confronting humanity, where we propose it belongs.

    We, the authors of this essay, one of whom is a mathematician, are certainly not hostile to mathematics, whose insights have extended usefully into many aspects of the world. Nor do we agree with Oswald Spengler, or with Edmund Husserl, Martin Heidegger, and numerous other modern philosophers who have sought the origins of “the radical life crisis of European humanity” (the phrase is Husserl’s) in some mistaken mathematical turn or other. The issue is not the legitimacy of mathematics, which is no issue at all. The issue is how we should think about both the powers and the limits of mathematics as we apply it to different realms of knowledge. We say powers and limits, because numbers have needs. The powers of mathematics depend on rules that do not apply to many things in the cosmos, from elementary particles to our own thoughts or mental states. The more we extend our mathematical reach toward those things, the more urgently we should all want to ask: what knowledge do we gain and what knowledge do we lose, and at what risk?

    That question should be one of the most urgent of our era. To answer it, we need to understand the peculiar needs of numbers, and the problems that arise when those needs are not met. Alexander Craigie, the narrator of “Blue Tigers,” one of Borges’ last short stories, learned that lesson the hard way. A Scottish logician living around 1900 in Lahore, the fictional Craigie was moved by dreams of blue tigers to scour the sub-continent in search of the implausibly colored felines. What he found instead, in the sandy channels of a mysterious region that was taboo to the neighboring villagers and of the same distinctive blue as the tigers in his dreams, were disks: “identical, circular, very smooth and a few centimeters in diameter.” He pocketed a handful and returned to his hut, where he removed some from his pocket. Opening his hand, he saw some thirty or forty disks, although he could have sworn that he had not taken more than ten from the channel. He could see that they had multiplied, so he put them in a pile and tried to count them one by one.

    “This simple operation proved impossible.” He would stare at any one of them, remove it with his thumb and index finger, and as soon as it was alone it was (they were?) many. “The obscene miracle” repeated itself over and over. The professor returned to Lahore. He carried out experiments, marking some with crosses, filing others, attempting to introduce some difference into their sameness by which he might distinguish them. He charted their increase and decrease, “trying to discover a law,” but they changed their marks and their number in no discernible pattern. “The four operations of addition, subtraction, multiplication and division were impossible. The pebbles denied arithmetic and the calculus of probability… After a month I understood that the chaos was inescapable.”

    In this story, without theorems or technical notation, Borges set out in narrative a basic pre-condition for what is habitually called rationality, and posed a thought-experiment about what happens when that pre-condition does not hold. Logicians call that precondition the Identity Principle, which declares that for any thing, let us call it x, x is the same as x, or x = x. With certain things in certain circumstances, the Identity Principle works famously: a well-behaved pebble, for example, under moderate temperature and pressure, relatively short spans of time, and unaided human eyes, seems to have an identity consistent and unchanging, and can in good conscience be taken to be equal to itself. Moreover, when you put that pebble in the proximity of other pebbles there is no confusion; all of them conserve their identity. For other things, however, it does not work so well. In the case of blue tigers it did not work at all. None of them could be identified as having an identity that remains the same as itself. Hence they could not be grasped by counting, by statistics, or any logical or scientific analysis.

    Not only are blue tigers ungraspable, but according to Borges they are also maddening. At the brink of insanity, at that hour of dawn when “light has not yet revealed color,” Craigie enters the mosque of Wazir Khan. He prays to God for relief. Suddenly a blind beggar appears before him and Craigie gives him the disks. The beggar responds: “I do not yet know the nature of the alms you have given me, but mine to you are terrible. Yours are the days and the nights, sanity, habits, and the world.”

    Borges’ conclusion seems to imply that we must choose between two types of attention, two forms of life, two kinds of knowledge, each horrifying in its own way. On one side, the ever-changing, indistinguishable, and uncountable “blue disks,” bringing unreason, chaos, madness; on the other, stable pebbles, countable because unchanging, always the same as themselves, bringing reason, science, and sanity. Writing in 1959, C.P. Snow famously deplored what he saw as the deepening division between the humanities and the sciences, which he called “the two cultures.” In the logician and the beggar, Borges gives us a related but more fundamental and deeper division between cultures, one committed to the rule of the Principle of Identity, the other committed to its absence.

    Borges’ story is another example of his extraordinary ability to dress epistemology in fiction, but its conclusion is misleading. The world does not divide cleanly between “blue tigers” and normal pebbles, nor between insanity and reason. There are infinitely many objects of thought in this world that act like well-behaved pebbles, but there are also infinitely many that act like the ones that Craigie found on the forbidden path. In fact, with the exception of the very peculiar objects of logic and mathematics, every “normal” pebble is also from some perspective a “blue tiger.” Our challenge as humans in the world is not that of giving up one or the other, but of becoming conscious of why in a particular instance or for a particular need we have favored one over the other, and of what we may have gained or lost in doing so.

    Take number as an example of those peculiar mathematical objects whose “eternal sameness” can indeed be established through axioms and proof. Even in the case of number, the task is not easy; it took many steps and was not fully accomplished until shortly after the publication of Spengler’s books, with the appearance in the 1920s of two articles by John von Neumann on the axiomatization of set theory. Von Neumann would accomplish many things, ranging from the Hilbert space formulation of quantum mechanics and the creation of game theory to the logics for computer programming and the conceptualization of the hydrogen bomb. But his insight in these two essays was equally dazzling. He proved that with only one object that is always and utterly the same as itself, one operation upon that object, and an elegant handful of axioms, you can establish firm foundations of sameness and strict identity for the vast edifice of mathematics.

    That object is the empty set, ø, and the operation take-the-set-of, {}. A set is a collection of elements. For example, the set of letters of the word “myth” is {h, m, t, y}. The empty set ø, however, is the set containing nothing. Because ø contains nothing, we can be certain that it never changes: ø = ø always. The same is true of all sets containing only combinations of ø and {}, such as {ø}, or {{ø}}, or {ø, {ø}}, ad infinitum. We may not say as much of the set {h, m, t, y}, for its elements might vary depending on culture and language — h in English is different from h in Spanish, or from h in French; there are several different h’s in Proto-Indo-European, and in Russian h does not exist. But ø, mirabile cogitatu, never changes, it is universal. With ø as the only brick, von Neumann showed that all numbers (and countless other mathematical objects) could be derived, an edifice of eternity. And in case you doubt that existence, logicians and mathematicians can set your mind at ease: ø exists because we say so. It is a rule of the game, an axiom.
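
    To see the brick-laying in miniature, here is a short sketch in Python, offered as an illustration of the standard von Neumann encoding of the natural numbers rather than as a reconstruction of his own 1920s notation: zero is the empty set, and each further number is simply the set of all the numbers built before it.

        def von_neumann(n):
            """The von Neumann natural n, built from the empty set alone."""
            number = frozenset()                 # 0 := the empty set
            for _ in range(n):
                number = number | {number}       # successor: n + 1 := n together with {n}
            return number

        # 0 = {}, 1 = {{}}, 2 = {{}, {{}}}, and so on: the number n has exactly n elements,
        # every one of them built from nothing but the empty set and "take-the-set-of".
        assert von_neumann(0) == frozenset()
        assert von_neumann(1) == frozenset({frozenset()})
        assert len(von_neumann(5)) == 5

    Nothing is presupposed here beyond what the paragraph already grants: the empty set, the operation take-the-set-of, and repetition.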

    Yet the important point, for our purposes, is that ø and purely mathematical objects are the exception. Of everything else in the world we can say that there is no absolute foundation of sameness, no thing that can be said with certainty to remain strictly the same as itself as it interacts with the world. Even a “normal” pebble strung in an abacus is constantly undergoing change, though that change may not be relevant to the particular use to which we are putting it. No one has yet discovered strict and absolute sameness in the physical world, even at the most basic levels of the universe. Describing the difficulty of determining the sameness of a given electron, proton, or other quantum object in 1952, the pioneering quantum physicist Erwin Schrödinger sounds much like Craigie trying to count blue disks: “This means much more than that the particles or corpuscles are all alike. It means that you must not even imagine any one of them to be marked — ‘by a red spot’ so that you could recognize it later as the same.” And he continues: “If you happen to get 1000 [or] more records of a proton, as you often do, then notwithstanding the greatest psychological urge to say: it is the same proton, you must remain aware, that there is no absolute meaning in this statement. There is a continuous transition between cases where the sameness obtrudes itself to such where it is obviously meaningless.” (Emphasis in original.)

    Apart from purely mathematical objects, everything in the world acts to some degree or from some perspective like a “blue tiger.” But conversely, many “blue” objects can be treated as if they were stable, as if they remained the same, as not only physicists but also economists, psychologists, and indeed all of us demonstrate every day. The continuity of daily life, its legibility, depends upon our countless unspoken postulations of such sameness. Our sciences also depend upon such postulations, but in their case it is important that these not remain unspoken, lest we build our knowledge on foundations whose load-bearing capacity we do not comprehend. Von Neumann again provides a marvelous example, both of the power of sameness and its axioms, and of the need to cultivate an awareness of the limits of that power. His Theory of Games and Economic Behavior, which appeared in 1944, co-authored with Oskar Morgenstern, was a massive attempt to build a foundation for human behavior upon the same object that he had used for mathematics: the empty set. “We hope to establish satisfactorily…that the typical problems of economic behavior become strictly identical with the mathematical notions.”

    Strictly identical! That is a shockingly hubristic claim. Let us dwell for a moment on what it means. If you assume, as von Neumann and Morgenstern did, that the behavior of economies is built out of the desires and the choices of individuals, then establishing “strict identity” means demonstrating that “the motives of the individual” — that is to say, psychology — are reducible to “the mathematical notions.” This is what von Neumann and Morgenstern set out to do. Invoking the example of physics, they began by creating a radically simplified model, an economy of just one isolated individual. Following a tradition already established by Marx and other economists, they named this single-actor economy after the famous literary castaway Robinson Crusoe. They then set out to describe the “assumptions that have to be made” about “the behavior of the individual, and the simplest forms of exchange.” The first assumption or axiom was that the individual seeks to “obtain a maximum of utility or satisfaction” of his various desires and wants, within the given constraints.

    But how do we know that the maximization of utility is a universal law of human nature? There are some who have doubted the proposition. But let us grant this initial assumption, for the sake of argument, and move on to the next. In order to be maximized, “utility or satisfaction” must be quantifiable, or at least rankable. Why should we think that desires are quantifiable or rankable, either by human agents or by economists studying them? It would appear that this assumption about the quantification of human desires is neither empirical nor psychological. The assumption is necessary only so that economics can become a mathematical science, much as in physics time needs to be thought of as the real number line, not because this corresponds to our experience, but because aspects of modern physics would otherwise be difficult, if not impossible.

    To put their assumptions in more formal terms: given any two objects of desire u and v, the subject can always say which one she prefers, or else that she is indifferent, i.e., that she has no preference for either u or v. But what about when there are more than two options on the table, as there so often are? For that we need yet another axiom: for any three or more commodities, objects, or imagined events — call them a, b, c… — all rational agents who prefer a to b and b to c will also prefer a to c. This crucial assumption, called the “transitivity of preference,” is axiom 3:A:b in von Neumann’s and Morgenstern’s Theory of Games and Economic Behavior. The justification? “Transitivity of preference [is] a plausible and generally accepted property.” That does not seem to us a sufficiently examined justification for such a crucial axiom. But with these axioms in hand, they proclaim “that it is possible to describe and discuss mathematically human actions in which the main emphasis lies on the psychological side.” Describe and discuss? Sounds reasonable enough. But they go further: “a primarily psychological group of phenomena has been axiomatized.”
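
    The force of the transitivity axiom is easier to feel in a toy computation than in prose. The following sketch is ours, not a formalism taken from von Neumann and Morgenstern, and the items in it are invented for illustration; it simply checks whether a set of stated pairwise preferences obeys the axiom, and shows how an ordinary trio of answers can fail it.

        from itertools import permutations

        def is_transitive(prefers):
            """prefers is a set of pairs (x, y), read as 'x is preferred to y'."""
            items = {x for pair in prefers for x in pair}
            for a, b, c in permutations(items, 3):
                if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers:
                    return False
            return True

        # Three pairwise answers a person might give on three different afternoons:
        # coffee over tea, tea over cocoa, cocoa over coffee. The axiom forbids the cycle.
        stated = {("coffee", "tea"), ("tea", "cocoa"), ("cocoa", "coffee")}
        print(is_transitive(stated))    # False: these choices form a cycle

    An agent whose answers form such a cycle cannot be assigned any consistent ranking, let alone a numerical utility, which is why the property has to be assumed rather than observed.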

    The phrase exhibits its hubris in sequins and faux fur. At this point we should insist again that we are not critics of mathematics, or even of its application to the study of human behavior. These are powerful tools that, for good and ill, have had a vast and often salutary impact upon human knowledge and (not only human) life. But in making questions of human desire strictly identical to mathematical notions, von Neumann and Morgenstern have forgotten a basic truth and omitted a basic question. In the words of Charles Sanders Peirce, an earlier logician and philosopher of astounding talent:

    An engineer, or a business company… or a physicist, finds it suits his purpose to ascertain what the necessary consequences of possible facts would be; but the facts are so complicated that he cannot deal with them in his usual way. He calls upon a mathematician and states the question. … It frequently happens that the facts, as stated, are insufficient to answer the question that is put. Accordingly, the first business of the mathematician, often a most difficult task, is to frame another simpler but quite fictitious problem… which shall be within his powers, while at the same time it is sufficiently like the problem set before him to answer, well or ill, as a substitute for it.

    The basic truth of which Peirce wisely reminds us is that every mathematical thematicization of objects that are in any way blue is a simplification, a similitude, an “as if.” And the basic questions that von Neumann and Morgenstern chose to ignore, but we insist should never be forgotten, are: how “sufficiently like” is the similitude to the object of study? And how do we decide if the difference is for well or ill? A great deal hinges on the answers to those questions.

    Our answers to them will always be relative to what it is we want to know about. Consider the famous mathematical simplification undertaken in 1736 by Leonhard Euler in his “Seven Bridges of Königsberg” problem. The problem requires one to determine if a dry path can be found across a landscape of four land masses separated by rivers, with the constraint of crossing each of the seven available bridges only once. Euler approached the problem by eliminating every feature of the landscape, retaining only an abstract representation of each land mass and each bridge, treating the former as a node, or vertex, and the latter as an abstract connection, or “edge.” The entire landscape, the width of its rivers and the size of its forests, the height of the hills and fertility of the fields, are reduced to a graph that consists only of nodes and edges:
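
    The figure that usually accompanies this reduction can be restated in a few lines of code. The sketch below is ours, and it assumes the conventional textbook labeling of the four land masses as A (the island) and B, C, and D; all that survives of the city is a record of which bridge touches which land mass.

        from collections import Counter

        # Euler's graph of Königsberg: seven bridges, each recorded only by the two
        # land masses it joins. Nothing else about the landscape is kept.
        bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
                   ("A", "D"), ("B", "D"), ("C", "D")]

        degree = Counter()                  # how many bridges touch each land mass
        for u, v in bridges:
            degree[u] += 1
            degree[v] += 1

        print(dict(degree))                 # {'A': 5, 'B': 3, 'C': 3, 'D': 3}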

    It turns out that by counting the number of edges touching each node one can determine if the trip is possible. (Those who want to learn how may turn to Wikipedia.) The simplification is powerful and meets the needs of the particular problem, as well as many others: hence it inspired fields such as graph theory, topology, and the theory of networks. But there are many questions that we may ask about the navigability of the same terrain, and many of them will not be answered by the identification of nodes and edges. For example, where to find a forest in which to paint. For the answer to that question, one might prefer a very different kind of simplification, such as a map. And for the painter seeking verdant inspiration, only the forest itself will do.

    So let us return to Morgenstern’s and von Neumann’s notion of “transitive man” and their axiomatized “Robinson Crusoe.” Is that simplification adequate for the psychological description it purports to provide? For the two modern Central European scientists, it certainly was. For Daniel Defoe, the author of Robinson Crusoe, published in 1719, it most definitely was not. From beginning to end, the book’s eponymous hero is best described as un-axiomatizable because self-contradictory, a weathervane, unable to order, to maintain, or even to recognize his preferences. Years of shipwrecked self-reflection on his desert island do not erase the fluctuating nature of Crusoe’s desires and aversions. Quite the opposite: they heighten his awareness of his inner flux, as here, near the end of the novel:

    From this moment I began to conclude in my mind that it was possible for me to be more happy in this forsaken, solitary condition than it was probable I should ever have been in any other particular state in the world; and with this thought I was going to give thanks to God for bringing me to this place. I know not what it was, but something shocked my mind at that thought, and I durst not speak the words. “How canst thou become such a hypocrite,” said I, even audibly, “to pretend to be thankful for a condition which, however thou mayest endeavour to be contented with, thou wouldst rather pray heartily to be delivered from?”

    This literary moment feels familiar and true to our experience: a moment in which one suddenly becomes aware of the inadequacy, the contradiction, the inconstancy, even the untruthfulness, of one’s own convictions about one’s happiness. Such insights about conflicts, competing desires, and even contradictions within ourselves amount to a kind of knowledge that literature and the arts often confer, and indeed often make into their very subject. And one of the innumerable lessons that we can draw from such humanistic knowledge is this: in so many important aspects of his thoughts, desires, and being, Defoe’s Robinson Crusoe is not von Neumann and Morgenstern’s transitive man, and neither are we.

    And what about Peirce’s “well or ill”? How do we judge that? In this case we could choose the standard that von Neumann and Morgenstern themselves set. Neither their theory of human behavior, nor the field of economics, nor indeed any of the social sciences, has achieved anything like the powers of prediction to which they aspired. What von Neumann and Morgenstern hoped for was to attach the predictive power of mathematics, so effective in the physical world, to the realm of the social and the psychological. Yet nothing like such power has yet been delivered by the quantifications and the models of the social sciences. But that is not a sufficient judgement. We also need to ask about the more general consequences of reducing “blue” aspects of the human to the Principle of Identity. For example, much of the political and economic machinery of the modern world is founded on the assumption that we know what we desire for our happiness, and that the maximized fulfillment of those wants is the aspirational “good” that authorizes our political life in modern democracies.

    This “economic point of view,” as Freud called it, underpins an enormous amount of theorizing about human behavior. Hence, we have designed systems and sciences that define and measure “the good” as the freedom to translate certain of our desires into political and consumer choices. But what if the dynamics of our psychic lives are quite otherwise? What if (to quote Freud again) “the logical laws of thought do not apply” to the psyche? If our social sciences have not yet truly interrogated the nature of our desires and of our happiness — or worse, if they have built themselves on mathematical foundations that confuse that nature — then this machinery is spinning dangerously dogmatic wheels, at we know not what risk to our humanity and our planet. This danger cannot be countered simply by more and better mathematization, as the advocates of homo economicus sometimes imagine. If human nature is not reducible to the Principle of Identity, if essential aspects of our being are irreducibly “blue,” then it is only by learning to recognize the limits of mathematization, and to cultivate more azure forms of knowledge, that we can understand the human world with humanity.

    Becoming conscious of how and why we apply mathematics to the world, and asking about the “well or ill,” about what we gain and what we lose through such applications: this is a genuinely important task. But it is a difficult one, because on these issues our tendencies are so often bipolar, just as they were in Spengler’s age. The tendency to separate mathematics, logic, and science from poetry, imagination, community, and philosophical-spiritual life must rank among the most important and enduring of modernity’s culture wars. Perhaps this is what John Dewey meant a century ago when he lamented that “this present separation of science and art, this division of life into prose and poetry, is an unnatural divorce of the spirit.” In an essay on poetry and philosophy, Dewey concluded with the imperative: “We must bridge this gap of poetry from science. We must heal this unnatural wound.”

    We have already glimpsed the beginning of a bridge and a balm: the anti-Manichaean recognition that, from physics to psychology to poetry, all objects of our thought are in some ways subject to logical principles such as identity and in some ways not. From such a more inclusive and tolerant position we could begin to examine what mathematics can offer to the study of the human and what it cannot, or what physics and poetry might or might not have in common. The challenge is great because the dualism runs deep, excavated on both sides of the divide. An aphorism attributed to the physicist Lord Kelvin, a version of which was inscribed circa 1930 on the façade of the Social Science Research Building at the University of Chicago, represents one side of the chasm: “When you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.” On the other we can array philosophers such as Henri Bergson and Martin Heidegger or poets such as Aimé Césaire, for whom “scientific thought counts, measures, classifies, and kills” (the words are from Césaire’s Poésie et connaissance of 1941, but we could multiply references). Both dogmatisms are dangerous, though their power may not be symmetrical. In our universities today, it feels as if Lord Kelvin is chasing poetry from the field.

    The “knowledge wars” of our present moment offer a special challenge because the political valences of the available positions are changing. In the 1980s and 1990s the power of science seemed the greater danger to humanities professors. Foucauldian critiques of the politics of knowledge and the constructedness of science were their preferred antidote, while the right parodied deconstruction and the relativization of science as the delusion of leftist scholars. Today academics feel the greater danger emanating from the denials of science and are more prone to stress facticity, while the Wall Street Journal runs editorials decrying climate science and COVID epidemiology as “political.” This may therefore seem like an odd moment in which to call for reflection upon the limits as well as the powers of number, or to advocate a remembrance of the essential “blueness” of the world and a rapprochement between forms of knowledge such as “physics” and “poetry.”

    But politics is a poor compass for critical thought. Besides, one can defend science without falling into scientism. The defense of vaccination and environmental regulation does not require a belief that the only valid questions in the study of human life, and the only valid answers, are “countable” ones. Moreover, there has never been a perfect moment in which to call for a recognition of the limits of the mathematical and a recollection of the importance of the non-mathematical. Pythagoras was declaring the eternal virtues of number already in the sixth century BC, while Heraclitus denounced him as a “swindler in chief.” The thirteenth-century poet Henri d’Andeli mocked the battles between the faculties of Paris, who were adepts of logic, and those of Orléans, who favored ancient literature.

    Do you know the reason for the discord?
    It is because they differ about learning;
    For Logic, who is always wrangling,
    Calls the authors authorlings.

    There has never been a perfect moment, but reflection is more necessary now than ever, if we wish to preserve our worlds.

    As these citations make clear, great minds have dedicated centuries of attention to these questions. Among the products of that reflection are the dualisms and the exclusivisms that we have just decried. We do not pretend to have solved these problems, but we do want to propose an exercise in “consciousness raising” that can help us develop an ethics of knowledge with which to navigate the dangers of civil war between the sciences of number and the knowledge of humanity.

    For a start, we must recognize the irreducible “blueness” of anything we want to know about that is not purely logical or mathematical, and accept that mathematical tools applied to such things offer simplified similitudes. Then we can ask, how good is the similitude? What purpose did we want it for? What did we gain and what did we lose with the simplification? For “well or ill,” and from what perspective?

    The wondrous predictive power of mathematical models comes from their ability to identify, abstract, idealize, and simplify: to leave things out. In justifying their “Robinson Crusoe” method, von Neumann and Morgenstern pointed out that Galileo ignored wind currents and viscosity in his model of free-fall. If he had chosen to focus on turbulence, as James Clerk Maxwell once quipped, modern physics might not have gotten off the ground. But what can safely be left out and what cannot? That question confronts anyone who seeks to reduce non-mathematical objects to mathematical models. When the object is the human psyche, the question is very difficult indeed, and repeated applications of the Principle of Identity will likely retard our quest for knowledge as much as advance it.

    The famous “replication crisis” in psychology, the modern science dedicated to the study of the psyche, is only one symptom of the difficulty. The crisis consists of the increasingly apparent fact that the majority of experiments in the field, when repeated, do not yield the statistically significant findings reported in the original. In the social as in the natural sciences, the repeatability of an experiment — getting the same result from the same or similar experiment — is a pre-condition for accepting its results as true. Why is repeatability so low in the discipline of psychology and other social sciences? Multiple explanations have been offered, from deliberately faulty research practices such as p-hacking (the manipulation of data analysis to find statistically significant patterns) to individual and communal biases (publication bias, selection bias, confirmation bias, etc.).

    A more fundamental explanation was already offered in 1843 by Kierkegaard, in a book he entitled Repetition, A Venture in Experimenting Psychology, whose authorship he attributed to a scholar he called Constantin Constantius. (The name already expresses a prejudice for the Identity Principle.) Constantius set out in search of repetition, a quest that is presented as ludicrous from the start. He traveled to Berlin, where he went to see a farce starring a famous comic actor. In his theater box he was suddenly transported. He describes himself lying on the floor laughing his head off, just as he had as a boy, lying by the foaming stream at his father’s farm. Still, he adds in conclusion, he lacked something. What he lacked, it turns out, was a pretty girl to watch. He looks around and finds one, but fails to recognize his own experience, the repetition of his happy childhood at his father’s farm. To stuff the farce to the top, back in Copenhagen he decides to return to Berlin to see if he can replicate his exhilarating experience at the same theater with the same actor. He fails, and therefore concludes that in human existence there is no repetition.

    Kierkegaard’s “venture in experimenting psychology” is a precocious critique of repeatable experimentation in psychology. Literary works such as Proust’s Recherche and Borges’ “A New Refutation of Time” offer a related critique. In those works a person undergoes the experience of being suddenly and involuntarily transported to his past. It is not a memory, a remembrance. It is that moment of one’s past relived. It is the same moment yet different, for when the moment is relived, the person remembers having lived it once before, and much that happened in between. If the person reflects on what that experience may tell them about their notion of time, they should not conclude, as Borges did, that time is thereby negated or refuted, but rather that time can have loops, that it can curl back towards itself, cross itself, and then go on. In mathematics, such crossing points are called “singularities,” a technical word that, in our context, acquires poetic and ontological harmonies. For the experience of repetition changes the experiencer.

    When it comes to human experience, in sum, repetition is a tricky notion. In simple cases, or if we get the simplifications right, there are certainly aspects of the human psyche that can be approached through repeatable experiment, counted, measured, and expressed in mathematical terms. We do not mean to detract from what such findings can teach us. But those aspects will not only be very simplified: they will also omit much of what we care most about. The problem is not that, as one recent study put it, “psychological research is, on average, afflicted with low statistical power.” The more fundamental issue is that much that is human cannot be subjected to the Principle of Identity, nor to the other logical “laws of thought” upon which the more mathematicising branches of academic psychology depend. Can so much be safely left out of what counts as knowledge about ourselves?

    The Polish writer Olga Tokarczuk, who won the Nobel Prize in literature in 2018, described her own experience of studying psychology in Warsaw. “We were taught that …in its essence the world was inert and dead, governed by fairly simple laws that needed to be explained and made public — if possible with the aid of diagrams.” From all of this she drew a simple lesson: “steer clear of psychology altogether. …The psyche is quite a tenuous object of study.” That, presumably, is why she transferred her own efforts from the scientific to the literary. But the literary is also subjected to number nowadays, as universities increasingly orient their humanities toward the digital in their quest for relevance, resources, and recognition as “knowledge.” The “digital humanities” represent the deepest penetration of the mentality of quantification into humanistic study, and their premises must be critically scrutinized. Here again we need to ask: how “like” is the mathematical simplification to what we want to know? And what is lost or gained in the simplifying?

    With regard to the first question, we happily adduce a recent and remarkable article by Nan Z. Da, in which she makes “The Computational Case Against Computational Literary Studies.” “In a nutshell,” she writes,

    the problem with computational literary analysis as it stands is that what is robust is obvious (in the empirical sense) and what is not obvious is not robust, a situation not easily overcome given the nature of literary data and the nature of statistical inquiry. There is a fundamental mismatch between the statistical tools that are used and the objects to which they are applied.

    We will not go through the various problems that Da identifies and the various examples that she picks out (though what she does with a Chinese translation of Augustine’s Confessions is quite breathtaking). Instead we wish to stress her conclusion, framing it in our Peircean terms. Applications of textual data mining “involve a trade-off: speed for accuracy, coverage for nuance.” When well designed, the resulting similitude is good enough to give us a “simple piece of information that is either actionable or that can be quickly labelled and classified along simple features.” But this simplification “always involves a significant loss of information. The question is whether that loss of information matters.”

    Again, the answer to that question will depend on what we actually want to know. For certain industries, questions, and masses of data that no one could possibly read or want to read, such computational methods may meet specific needs. But if what we want to know about is our own potential for reading and for meaning, our own engagements with language and how those engagements can shape or transform us, then the loss is enormous, since it amounts to much of the relevant complexity. The danger, we repeat, does not lie in number, mathematics, or computation. The danger lies in our tendency to ask of these more than they can provide. Perhaps this too is an attribute of our humanity: a yearning for stability at the foundations, a prejudice for calling knowledge only that which approximates certainty, a preference for simplicity in explanation, a desire to banish the uncountable — the “blue tigers” that fill our pockets and our world — to some blind beggar, magical realm, or marginal humanities discipline. If so, then it is all the more necessary that we become aware of that aspect of our humanity, rather than repress it for the sake of uncertain numerical certainties. Today the task seems all the more urgent, not only for scientists and professors of the humanities but for all of us human inhabitants of the Anthropocene.

    The Modernization of Duties

    The conventional belief about the well-known dichotomy of duties and rights is that the former are premodern and the latter are modern. Some have celebrated “the age of rights” while others express concern that modernity takes “rights talk” too far. There is a human rights movement, as if duties require none. The last American secretary of state ostentatiously called together a “Commission on Unalienable Rights,” but no one would ever say that the president he served took personal or political duties seriously. And while some philosophers have been subtle in recognizing that any right implicates a duty, our recent thinkers have mostly battled about how to justify the various rights that moderns claim. Conservative oracles instruct that rights come from God or nature (the Roman Catholics among them having overcome their modern anxiety that rights were liberal and relativistic), while liberals have bickered about whether to establish their foundations in contract, reason, or practices. At the height of postmodernism, academics mused about how rights could persist as they clearly have in “the age of interpretation.” The canonization of human rights at the end of the Cold War, as the international public morality of the end of history, called forth an entire library of writings on where they came from. But there is no interest in whether the duties of citizens or humans remain alive — or what their intellectual tradition looks like.

    The continuing interest in rights, and the commonplace that duties were superseded by them, misses something dramatic in our intellectual history. It obscures, or entirely overlooks, a great struggle to modernize duties. That struggle, one might even suppose, may determine nothing less than the future of our ethics and our politics. Certainly the character of liberalism, and even its political future, depends on a recognition of that struggle, and on our support of it. Recovering the fraught but indispensable attempt to reclaim duties for a liberal or liberatory program is of far more than historical interest.

    “Every legal culture has its fundamental words,” Robert Cover, the legendary Yale Law School professor, a guru in some quarters, remarked in 1988 in a classic essay on duties, “Obligation: A Jewish Jurisprudence of the Social Order,” that has defined his intellectual legacy. “The basic word of Judaism is ‘obligation’ or mitzvah,” he continued, expressing the fallacy that if duties pertained to premodern ethics, then modernity — including liberal modernity — must be based on a successor and supplanting concept of human rights which has ousted duties, or subordinated them to a servile role in the culture of rights, with a kind of residual form of responsibility to acknowledge or vindicate rights. As Cover expressed it starkly, “the myth of Sinai is essentially a myth of heteronomy,” whereas “the myth of social contract is essentially a myth of autonomy.” Cover perfectly epitomized the conventional wisdom, according to which the philosophical and political situation is an either/or: duties or rights.

    Though Cover’s account is the best-known version of this canonical view in recent American legal discussion, the myth is omnipresent. The Catholic philosopher Alasdair MacIntyre, for example, defined premodern ethics in terms of role-performance. The scripts that our ancestors followed and bequeathed to their descendants “are part of my substance, defining partially at least and sometimes wholly my obligations and my duties.” Moderns, by contrast, are contentless, unobligated, anonymous: “lacking that space, they are nobody, or at best a stranger or an outcast.” Rights, MacIntyre held, are not just corrosive, relativistic, and solipsistic; they are also incoherent, having pried men (and possibly women) out of the realm of their performance of excellence.

    Such voices are correct that in modern times the substance of ethics shifted in the direction of autonomy and self-making — thankfully so. But it is not true that duties were simply overthrown by rights. The change was incomplete. An ethics of duties, far from being simply usurped by one of rights, endured the great modern rupture. Indeed, consecrating a new culture of duties, a modern culture of duties, became a high intellectual and political goal. Recalling how this critical but neglected movement was accomplished, so as to complicate the familiar canard about human rights as a kind of usurper ideology through which moderns abjured duties (except those supporting rights), is my purpose in what follows.

    Certainly it is true that, for millennia, duties — or obligations, or responsibilities — were the essential substance of religious ethics and thus the centerpiece of the history of ethical culture. This was true outside and inside what we used to call “Western civilization.” Whatever else world ethical traditions disagree about, they concur on placing duties at the center of their imaginaries, for the sake of God’s law or God’s will or human conformity with the natural order. Insofar as humanity, throughout its long history of living under political rather than religious oppression, saw its subaltern condition as morally justified, it was by an ethic of service — in the West, one cast by the long shadow of Rome in young men’s education, when they escaped or supplemented the religious call to serve the Lord. For the future of the city or the state, the worldly imperatives — the duties — of political security and political greatness mattered, too.

    Looking back at the age of duties, moderns see oppression, and by and large they are correct to do so. None of the societies in which the ethos of duty was nurtured were open or free or (by modern liberal standards) just. But that is not all that we need to know about them. They were not just prisons. Moderns can be grateful to the old form of ethical discourse for sometimes insisting, as the great scholar of Judaism Isaac Heinemann established long ago, that the precise nature of our duties can be open to discussion, and that the discussion of duties, including religious ones, requires some effort at reason-giving. With the exception of a few Biblical laws that were known as huqqim, or laws for which no reason can be given or known (and there were only a handful of such impenetrable statutes), rabbinical Judaism, and later philosophical Judaism, developed a long and rich tradition of looking for reasons for duties. That centuries-long effort was condemned by obscurantist and fideist factions, which insisted that the whole point of duties is that they come to us without our asking why. The Roman thinker Seneca condemned Plato for giving abstract principles rather than just the letter of the law, thereby emboldening people to debate the foundations, and some rabbis, in their understanding of Jewish law, were similarly emboldened.

    Yet in spite of such dogmatisms, for two millennia Jews have pondered why they have to do what they have to do. Again, not all of our duties were explicable, but most of them were; and pondering them and their intelligibility was central. This was why the introduction of medieval and early modern codes of law was intensely debated: they seemed to imply that intellectual inquiry into the grounds of duties was not necessary. Of course, traditional Judaism never claimed that Jews invented the substance of their mitzvot; the task was rather to give reasons for the ones that they had received, that God had imposed. The legal innovations of the rabbis, known as the oral Torah, were deemed to be the result of the “holy spirit,” which was continuous with, if weaker than, the revelation of Moses at Sinai, where according to tradition the oral Torah was given along with the written one. It was exceedingly rare, as the renowned case of Maimonides on the atavistic practice of sacrifice suggests, to regard the ordained duties as historically determined, and thereby risk the suggestion that these divine commandments, in spite of their original applicability, might sometime be obsolete. The same was true of Christianity, except that it was less bound to any scriptural list of obligations and even more affected by Greco-Roman philosophy. But it also developed a philosophical tradition elaborating the moral duties of human beings, most commonly within a framework of supposedly rational natural law. The rational and the natural, too, are given, and not invented. They must be discovered, not devised.

    Moderns did not only assert rights against the dictation of duties. They also theorized, as I will suggest, that rights entail “correlative” duties. In this way they kept duties philosophically alive. Yet to insist on the obverse — that the age of duties was by the law of correlativity also an age of rights — is to distract from how significantly the rhetoric and the content of morality changed. As the great scholar of international law (and a great rabbi’s son) Louis Henkin explained, Judaism “knows not rights but duties, and at bottom all duties are to God. (If every duty has a correlative right, the right must be said to be in God!)” It would be amusing to represent premodern history not solely as an age of duties, but as an age of God’s rights — or the state’s rights to demand fealty from subjects or citizens. The fact is that no one thought to put it this way. It was far more important to leave duties explicit, and to reflect on their content and their rationale, than to experiment with assigning rights to their proper provenance, divine or civic.

    It is equally familiar how central duty was to Roman self-understanding, not least in the fact that its most memorable hero, Aeneas, is constantly held up by Virgil as “dutiful,” or “pious.” But it is Cicero who, with his famous De Officiis (regularly translated as On Duties), easily wins the prize for the longest-running teaching manual of moral philosophy in the West. It was used for centuries, with titanic influence in the early modern period. Voltaire remarked in 1771 that “no one will ever write anything more wise, more true, or more useful.” (Has anyone written the history of the over-the-top blurb?) “From henceforth those whose ambition it is to give men instruction, to provide them with precepts, will be charlatans if they want to rise above you,” Voltaire continued, communing with Cicero seventeen centuries later, “or will all be your imitators.” Frederick the Great carried De Officiis on his campaigns.

    For Cicero, laying down the moral duties that nature and society imposed upon us was the substance of moral philosophy. “Who would dare call himself a philosopher if he had not handed down rules of duty?” he asked. Written in the last months of his life, De Officiis attempted to trace all the duties that we should recognize to two grounds: either they are honorable or they are useful. Mostly a Stoic, Cicero argued that our duties come from nature, and apply universally, everywhere and always. As in Judaism, Christianity, and Islam, Cicero and others went very far in justifying a kind of ethic of service as the substance of political morals, in the Roman case most especially the centrality of military service as a sacred duty of all male citizens, which remained as one of our American duties until not long ago.

    If premodern history is commonly (and correctly) understood as the age of duties, it is equally common to believe that modernity became “the age of rights,” as Henkin labeled it. It is the long shadow of the reign of duties that has always given rights their understandable appeal.

    I have no quarrel with the liberal belief that such premodern regimes of duties were in many ways oppressive and needed to be modified and even overthrown. In the early modern period, the word “officious,” from Cicero’s word officium, emerged as a derogatory term, as we still use it — referring not to someone who does his duty but to someone who obnoxiously imposes his duty on us. Probably no one has better captured this revolution than the poet Ogden Nash, who roasted Wordsworth’s famous “Ode to Duty” in his own poem “Kind of an Ode to Duty.” Duty was “so ubiquitous, and I so iniquitous” — it policed us excessively, adversely to our freedom, and taught people to demean themselves in the name of upstanding virtue. No wonder we shucked it off.

    It would seem, from the conventional history of moral thought, that the modernization of ethics has principally been about leaving duties behind — in private and especially in public form, whether they were religious or secular in basis. For many moderns, most duties imposed by religion were irrational, and many of those imposed by states were unjustifiable, and so it was intellectually simple — though politically very difficult — to reject them and abandon them.

    But if this story of modernity as the exit from the reign of duties is incomplete as a matter of history, there is also a continuing moral and political risk in simply celebrating our liberation, even when it is to bolster our self-confidence in a never-ending struggle against the resumption of power by the more heteronomous forces of religion and politics. Ever on the lookout for unjustifiable oppression, libertarians specialize in keeping their eyes peeled for officious duties imposed on us against our will by oppressive communities and states, and they are not wrong to do so. But they cover only a small part of the waterfront. The question is whether the modernization of duties really does require rejecting them tout court or leaving them in the rubble of the old orders — or whether the modern abandonment of duties was excessive and mistaken and risky, making modernity less a liberatory time than a libertarian one, and forgetting that rights themselves, not to mention other values, depend precisely on the persistence of duties.

    Too many philosophers and historians have missed this significant possibility. They see nothing but a glamorous modern pivot from obligation to liberation, from duties to rights. But this is not just historically specious; it is also morally and politically noxious. Consider again Robert Cover’s picture of the replacement of heteronomy by autonomy. America in the 1980s, when Cover wrote, was hardly a representative time in modern moral culture. It was the moment of a libertarian revolution in which the emancipations of the 1960s were being succeeded by a mantra of economic freedom that overthrew the liberalism that Americans had struggled to establish since the New Deal and the Progressive era before it. Cover’s error of perspective, if I am right, led him to believe that modernization meant the abandonment of duties. But the opposite was the case. It was duties that were themselves modernized.

    I do not mean to suggest only that duties survived in modernity. This is obvious: we still teach our children ethics, or try to, including their moral duties. Not just duties, but also organized religion, survived modernization — and one temptation that follows is to adopt a public/private distinction whereby rights govern public life and duties persist only in private, insofar as we adopt a comprehensive moral view that imposes them. In this solution, moderns have not wholly abandoned duties; they have merely expelled them from “the public square.”

    Liberalism has revered this distinction between public and private, but it is misleading, for two reasons. For one, our private lives and our private faiths have been deeply affected by modernization, as when Cover remarks that “even those among us who have been raised with a deep and abiding religious background can hardly have escaped the evocations that the terminology of ‘rights’ carries.” Living in a voluntarist culture changes the meaning of obedience. Choosing to be obedient puts a dent in the ideal of obedience. Second, it is also the case that there are still manifold public duties: obeying the law, paying taxes (no matter how low these monetary duties have gone), serving on juries, and so on. Was the erosion and the restriction of these moral duties an inevitable consequence of our liberation from the more oppressive duties of religion and state?

    One traditional story, which is not entirely wrong, is that a voluntarist tradition arose in moral philosophy in the early modern period which made rights against collectives and states central. This tradition conquered political life starting with the Atlantic revolutions. Cover’s citation of the social contract as the characteristic “myth” of modern moral philosophy is an example of this belief. And it is certainly what is often taught, especially in the “Western Civilization” courses that emerged after the Great War, stabilized around an anticommunist and later antifascist set of tropes. The story goes that first came Hobbes, who perfected or perverted natural law doctrines from the Stoics to the Christians so as to isolate the natural right of self-preservation as the basis of the artifice of government, and then came Locke, who added the pre-political right of property and consent to government. And thus the liberalism of rights, which was the only liberalism that anyone bothered to imagine, was launched.

    There is only one problem with this familiar narrative. It is misleading to the point of obscuring the actual career of liberalism, which in actuality preserved duties at its core, before that libertarian revolution that Cover generalized as if it characterized modernity as a whole. The libertarian narrative of the descent of liberalism functions to conceal the endurance of duties within it, as much as libertarianism exists to undermine duties themselves. In order to understand the history of liberalism, Edmund Fawcett not long ago remarked, “liberty is the wrong place to begin.” The reader may be forgiven for doing a double take. And as Helena Rosenblatt suggests in her recent book, The Lost History of Liberalism: From Ancient Rome to the Twenty-First Century, modern liberalism emerged out of Cicero’s moral duty of liberalitas, while liberals in modern times have (in her words) “fought not just for their rights, but for the means to better fulfill their moral duties.” What this means is that liberalism was from the beginning more than an affair of casting off yokes. Far from focusing so intently on coexistence or toleration within community, or on freedom from political authority as such, liberalism for a long time placed education for social interdependence, as well as the political constitution of social freedom, at the very heart of its historic agenda.

    The preeminent philosophical site of this focus was sempiternal natural law doctrine. And the most proper interpretation of its trajectory between the Renaissance and the Atlantic revolutions preserves the centrality of duties as what moral philosophy generally, and natural law in particular, requires. Once we take the trouble to read Locke’s full corpus, he himself emerges — like his fellow Protestant thinker Samuel Pufendorf, author of On the Duty of Man and Citizen — primarily as a theorist of obligations inculcated in the young and imposed by government, even if we commonly produce the reverse impression as an artifact of our teaching. As the great historian Knud Haakonssen observed, only erroneously and retrospectively does early modern thought appear to lead our way, philosophically or practically. No matter the temptation to promote libertarian and voluntarist rights thinking as the ascendant theme, with duties as “the casualty that defines its victor,” Haakonssen bracingly insisted that a rationale of duties and a schedule for duties remained dominant in theories of natural law.

    And we can push this argument farther, with the late historian of American ideas Morton White, by arguing that even the American revolutionaries, who conferred upon natural rights an unprecedented pride of place at the foundations of an actually existing government, were by no means ethical libertarians — they were readers of such forgotten figures as Jean-Joseph Burlamaqui, the Swiss natural lawyer who laid great stress on duties in works such as Principes du droit naturel in 1747 and Principes du droit politique in 1751. The American founders by no means intended to privatize morality, or to eliminate the educational focus on duties or the imposition of them on citizens by states, even while conditioning government on consent when rights were violated. The Virginians made this explicit in the summer of 1776 by including duties in their pioneering rights-based constitution.

    What is true is that the coincidence of the Atlantic revolutions and new forms of ethical privatization — including the rise in significance of commercial freedom — left a serious mark on modern moral life. We can detect the individualizing effects of early capitalism in the French Revolution: not in a backwater settler colony but in the center of “civilization,” political modernity became indelibly associated with rights. Yet the celebration of rights cannot explain that great convulsion. A closer look at the revolutionary era forces us to acknowledge the sometimes consequential presence of duties even in what we remember as an unprecedented moment of basing politics on individual entitlements.

    In 1789, the French revolutionaries famously decided to promulgate their “Declaration of the Rights of Man and Citizen.” Where were the duties? One reason for the apparent demotion of duties relative to rights was rhetorical and cultural: public and private duties went without saying. Nobody would have argued otherwise. Rights were stressed to limit government and in documents designed to delineate what government cannot make us do — while still leaving a lot of room for collective self-governance and imposed duties. And a proposal to mimic Virginia, announcing rights and appending duties, lost out in the National Assembly, after heated debate, by a relatively close vote of 570 to 433. Finally, even the most classic Anglophone defense of the immortal French declaration, Thomas Paine’s Rights of Man, insisted that the objective was to free us from oppressive duties of state and tradition, the better to allow our return to our justifiable duties as well as our natural ones. “All the religions known in the world are founded … on the unity of man,” Paine wrote. “By considering man in this light, and by instructing him to consider himself in this light, it places him in a close connection with all his duties, whether to his Creator or to the creation, of which he is a part; and it is only when he forgets his origin … that he becomes dissolute.” Prodded by Edmund Burke, Paine actually understood himself to be returning to the cosmopolitan or universalist natural law that history and tradition had obstructed, and which allowed humans to recognize not merely their entitlements but also their duties, which were universal across space and time.

    There were four intellectual legacies of that formative period that must be faced, because they clearly shaped some ways that duties came to be conceptualized in modern politics. All of them are unsatisfactory.

    One legacy of this revolutionary era is the view that duties are the residual legacies of the moralities of the past, that they are obeyed only in private, and that they do not affect public governance. The main worry in this regard, the Virginian declaration says, is that public authority will keep people from embracing the duties that religion imposes; but even the Virginian language still makes it perfectly clear that it is not simply up to individuals whether to embrace duties. “It is the mutual duty of all to practise Christian forbearance, love, and charity toward each other.” (The text left it unclear what would happen if people gave up their duties.)

    A second legacy has been, if anything, more fateful: the idea that duties for citizens, whatever their private commitments, correlate with and do not exceed their rights. In the French revolutionary debates, the Abbé Siéyès, the most influential thinker of the time, voiced this opinion. “I have duties toward others to the extent that I recognize their having the same rights as myself,” he declared. “Hence there are in fact only rights, of which duties are simply a special case in the interpersonal sphere.” Paine, who was frequently a mouthpiece for Siéyès, recalled that argument in reviewing the near-miss in the French Revolution for a declaration of rights and duties: “A Declaration of Rights is, by reciprocity, a Declaration of Duties also. Whatever is my right as a man is also the right of another; and it becomes my duty to guarantee as well as to possess.” Both authors implied that there were no modern duties beyond those to safeguard correlative rights. The Abbé Grégoire, famed for his role in emancipating the Jews in the following years, pushed back. (After all, both he and Siéyès were, technically, priests.) It is not the case that duties are deduced from rights, Grégoire insisted, or that all the moral rules and political obligations that we recognize will simply correlate with or follow from some antecedent set of rights. But he lost the debate, and as a result, in the words of the historian Marcel Gauchet, “rights would permanently be saddled with the ghost of duties.”

    Third, it has to be seen that these consecrations of rights on both sides of the Atlantic all afforded property special treatment, which — though no one could foresee how modern commerce and industry would transform our lives — ended up laying a practical foundation for a modern libertarianism of rights without duties. Contrary to communitarian publicists in our day, neither the French Declaration nor the broader revolution of which it was a part were atomistic and individualistic in spirit. But neither were they free of blame for modern anomie. With this triple demotion of duties, they did leave a big burden for the future.

    A fourth and final legacy, contingent but important, is that, even as declarations of rights created some of the conditions for later libertarian heresies, a space was opened for the conservative enemies of the new freedoms to take custodianship of duties. The most remarkable example of this occurs in 1795, after Thermidor, when Robespierre and his fellow stewards of the Revolution were toppled from power, and their successors under the Directory propounded the era’s third French declaration of rights — a Declaration of Rights and Duties. Its first article follows the Virginians: rights are about limiting government in public, while duties are about private social cohesion. Its second article does the same: it protects residual Christianity and emphasizes a duty of golden rules. The rest of the duties announced in the Year III — the revolutionary calendar was still in use — are public duties, the central goal of which is safeguarding order (which meant property). “It is upon the maintenance of property that the cultivation of the land, all the productions, all means of labor, and the whole social order rest,” the declaration read. “Every citizen owes his services to the fatherland and to the maintenance of liberty, equality, and property.”

    Thus duties were now separately announced — but as a post-revolutionary attempt to contain the wildfire and to impose some limits. The cause of duties became a conservative, even a reactionary, cause. The modern stereotype about rights and duties, about liberals and conservatives, was born. It was the first time, but hardly the last, that the championing of duties would take this rearguard form, in part because the advocates of emancipation had rhetorically, and foolishly, ceded duties to their enemies.

    But these stereotypes, I think, represent a colossal error. This separation — this separationism — was not representative of all forms of liberalism. It was, in fact, the minority view for a long time. For a century or more, from the early nineteenth century to the mid-twentieth century, liberals understood the need to reappropriate duties, and to install them at the center of their moral and political program. The central issue was what liberalism was going to be about, given the experience of the revolution. In an account that has become widespread since John Rawls, the liberal goal is said to have been not only to overthrow the oppression of the past, and to limit tyrannical government, but also to privatize “comprehensive” or “perfectionist” understandings of the good life. There might be some shared set of civic duties, but moral duties were not the state’s problem, and what remained alongside rights against community and state oppression were some public duties so that the state could function for “political” ends, avoiding at all costs a state that would endorse a particular view of the good life, a particular vision of our highest ends and obligations.

    As a historical matter, this account cannot be correct. Contemporary scholarship on nineteenth-century liberalism is showing almost the reverse. Most liberals and socialists understood themselves to be theorists of collective and individual improvement, and they saw politics both within and beyond states as a vehicle for that mission. As much as political and economic liberalism got entangled and identified in the era, many thinkers propounded profoundly non-libertarian understandings of freedom and interdependence. They advanced versions of “positive freedom,” with the modernization of duties as perhaps their most daunting and important task.

    The most stirring exemplar of this liberal mission in our intellectual history is Giuseppe Mazzini. He did not require the abandonment of either duties or modernity; he believed that they needed to coexist, precisely in the form of a modernization of duties. If you have heard of Mazzini, he is probably known to you as an Italian nationalist. As dubious as it may sound given current political debates, in which liberal nationalism is treated like an oxymoron, it was precisely liberals such as Mazzini who helped to invent nationalism (and nationalists such as Mazzini who helped to invent cosmopolitanism) — and shouldered the burden of modernizing duties in the process.

    Born in 1805, Mazzini was described by Mill as “one of the men I most respect” and by Nietzsche as “the man I venerate most.” Americans also loved him, especially his good friend William Lloyd Garrison. The American transcendentalist Margaret Fuller called him “the most beauteous person I have ever seen.” Since he challenged imperial and monarchical authority, Mazzini was said by Bakunin to be “the man who had given the most sleepless nights to the crowned heads of Europe.” David Lloyd George once said that he “doubted whether any man exercised so profound an influence on the destiny of Europe.” Actually, that understates things considerably: Mazzini is probably, after Marx, the most influential philosopher in world history, because his doctrines traveled to the ends of the earth, due to the extraordinary influence that he exercised on nationalist upstarts from India to Israel, who frequently adopted his refusal to distinguish between individual protection and collective identification.

    Mazzini organized his thinking around duties. I know of no evidence that he knew of Paine, but Mazzini’s The Duties of Man, which appeared in 1860, was in many ways a response to him. Published only twelve years before his death, this collection of his essays has been translated into twenty languages (including Esperanto and Yiddish) and has gone through hundreds of editions, exercising its greatest impact through the middle of the twentieth century. For Mazzini, duties are not just a residual legacy from Christianity or any other premodern tradition. They are the substance of a new ethic for a new age. He even went so far as to think that humanity would need a new religion — a religion of humanity, as several nineteenth-century thinkers called it.

    Mazzini grasped that without such notions the path lay open not only to protecting old religions or safeguarding individuals from tyranny, but also to libertarian perversions that sacrificed any common notion of the good life, including a productive understanding of individual freedom itself. The French Revolution’s rights left the state with no role in relation to the morality of freedom: “The sacred idea of Liberty has recently been perverted by some deeply flawed doctrines, declaring that all government and all authority is a necessary evil [or] that government has no other mission than that of preventing one individual from harming another. Reject these false doctrines, my brothers! … If you were to understand liberty according to these flawed doctrines, you would deserve to lose it. … Your liberty will be sacred so long as it is guided by an idea of duty, of faith in common perfectibility.”

    In taking up the liberal cause of duties, Mazzini did not mean to make any excuses or apologies for the community or the state. He knew that limits on community and state were important, and he never entirely denied the distinction between public and private. He was a liberal for whom the purpose of social interdependence was freedom and perfectibility, not constraint or oppression for their own sake. Yet an overriding emphasis on the risk of tyranny, he believed, would obscure the reality that some doctrine or other of individual perfection will rule in any society, and liberals needed to have a responsible one. More than this, at the center of his thinking, as of that of nineteenth-century liberals such as Mill and Tocqueville, there stood not so much a libertarian freedom from as an individual and social freedom for — a free agency to enact new things not only for oneself but also for society. He was a perfectionist liberal with duties as his cornerstone, notwithstanding the importance of freedoms or rights, which animated his political activities but were put in their place in the scheme of a wider liberal doctrine.

    Along these lines, just as he rejected the idea that duties were a thing of the premodern and religious past, Mazzini repudiated the idea that all our duties merely correlate with our rights and follow from them. If anything, it was the other way around. “When I say that the consciousness of your rights will never suffice to produce an important and lasting progress,” he explained, “I do not ask you to renounce those rights… I merely say that such rights can only exist as a consequence of duties fulfilled, and that we must begin with the latter in order to achieve the former.”

    Like all other liberals in this period, he held that freedom is not a natural condition; it is a collective and social achievement premised on continuing and intensified interdependence. The goal of the shift toward rights in modernity was an escape from the confinements of duty, and this was to some extent a good thing: the liberal insistence on freedom from God’s enforcers, from tradition’s tyranny, and from the state’s prerogatives was a significant advance in history and culture. But it by no means settled the question, after individual freedoms had been championed and won, as to what would happen to the earlier public emphasis on duties. If it disappeared, so would freedom itself. Proclaim liberty, indeed; but liberty is not the only valid end.

    In the twentieth century, it became fashionable to regard Anglo-American thought as constitutionally libertarian, poles apart from Continental statism, in which an abasing prostration to government prevailed, and no freedom. It is certainly true that both communist and fascist governments emphasized duties far more than they did rights — that is putting it mildly — as part of their respective re-enthronements of the ancestral, the mass, the communal, and the familial. Yet throughout the same period, and until a recent date, the story of liberalism, and of liberal forms of socialism, was one of retaining duties for the sake of credible liberation. Many Anglo-American liberals and socialists agreed with their Continental European colleagues about the need to emphasize a theory and practice of duties. And the reason they did so is that they were the earliest to make the move, which in this country we associate with the turn of the twentieth century, to welfare as the condition of freedom. “Of what use was the recognition of rights to those who lacked the means of exercising them?” Mazzini asked.

    T. H. Green, the Oxford moralist who fused Evangelical religion, liberal politics, and Hegelian metaphysics, gave the answer: duties to provide one another the conditions to exercise free agency matter more than formal rights. Green, who lived from 1836 to 1882, is nearly forgotten now; in the twentieth century, when philosophers of many kinds united in the campaign to destroy philosophical idealism, his reputation crashed. For our purposes, what matters is that Green was trying to transmute a fraying Evangelical moralism into secular terms, and that the defense of a rehabilitated version of duties was central to his attempt to do so. As his biographer Melvin Richter explained, it was because he felt that he could count on secure English and Western European traditions of liberty that Green risked justifying a more interventionist state, especially in response to the conflation of political liberalism with economic liberalism for which England was rapidly becoming known worldwide.

    Accordingly, Green named a major work Lectures on the Principles of Political Obligation, in which he argued, in his idiosyncratic and sometimes incomprehensible idealist language, that personal entitlements should receive far less rhetorical attention than state and collective ones — precisely so as to support policies that would augment inherited rights with needed redistribution. Like so many others in the nineteenth century — and not only those further left, such as Marx — Green’s point of departure was an attack on the myth of the socially antecedent individual, as economic freedom was defended more effectively than political freedom, so much so that across the global north parties of all stripes associated rights with the limitation of the state for the sake of property protection and transactional freedom. Green’s project was to save the possibility of moral community from this development, which did not require any radical critique of rights but did require keeping them in their place to offer an account of what we might call duties as trumps.

    “The popular effect of the notion that the individual brings with him into society certain rights,” Green complained, “is seen in the inveterate irreverence of the individual towards the state [and] in the assumption that he has rights against society irrespectively of his fulfillment of any duties to society.” Green did not reject rights, but he reframed them, reaching for a theory of rights that would acknowledge individual capacities while prioritizing social cohesion and progress. This meant, above all, an insistence that duties must have the same standing and importance as rights: “There cannot be innate rights in any other sense than that in which there are innate duties.” Of these, he added, “much less has been heard.”

    Green, as historians have shown, remained the godfather of the coming of the British welfare state for decades after his death in 1882. (A recent essay by Ben Jackson on the evolution of British political ideology is entitled “From Idealism to Neoliberalism,” as if with those two vaporous categories one could describe and explain a century’s worth of intellectual and political transformation.) Like his masters in German philosophy, Green was trying to reinvent the moral teleology of premodern thinking for a modern age of emancipation, and his view of duties and rights was that they tracked whatever led humans, individually and collectively, to their flourishing. Under modern circumstances this meant that society has a deep claim on individuals, who are never to be seen as antecedent to it. Green was willing to allow for inherent human rights but only in connection to human duties with equivalent status, on the theory that if there were some eternal or long-run interests that humans as such have, they are inevitably lived out together, in society.

    Most of all, Green, his British New Liberal followers, and their influential American analogues were arguing against a libertarian presumption that made state intrusion into the allegedly free domain of market activity a violation of rights. They directed their fire toward the idea of rights as metaphysical entities; instead, rights were social goods whose justification ultimately lay in collective purposes. Later, in the twentieth century, so-called legal realists such as Robert Hale and Karl Llewellyn pursued a similar deconstruction of rights. In theory the anti-metaphysical critique applied equally to duties, but neither Green nor his successors targeted duties for their criticism, perhaps because they wanted in the first place to make duties plausible in an age in which liberty is used to justify market hierarchy and depredation.

    For such figures, the argument was twofold. First, if people have rights based on their innate features, then they have innate duties too. Second, the collective setting of individual freedom makes the harmony of social and individual purposes a policy challenge rather than the occasion for asserting the supremacy of individual freedom over the collective good. They refused to play the trump card of rights to minimize the state. “Rights, indeed, are precious and sacred,” remarked the Edinburgh theologian Robert Flint in 1894. “In the course of the struggle for ‘rights’ great and indubitable services have been rendered to mankind. Nevertheless, the alone properly supreme and guiding idea of life, whether personal or social, is not that of rights but of duty.” (For good measure, Flint added: “This truth has found its worthiest prophet and apostle in Joseph Mazzini.”)

    In the course of the twentieth century duties fell out of fashion, and certainly out of liberal discourse. “The fact is that today, it seems to me very difficult, whether in high schools or universities, to talk seriously of the duties of citizens,” remarked Raymond Aron, the leading twentieth-century French liberal. “I think that whoever risked doing so would seem like he belonged to a lost world.” At some point — it is hard to date precisely, and even harder to explain — the struggle to modernize duties ended, and it was replaced by more libertarian schemes. We are the poorer for this. We are living in the void it left behind, in which it is daunting to imagine the construction of new foundations for perfectibility and progress.

    The situation is dire. Perhaps the foremost French moralist with any claim to be Aron’s successor, the reactionary novelist Michel Houellebecq, whom Aron would have despised, once noted on the radio: “I am not a citizen, nor do I want to become one. No one has duties towards his country; they do not exist. We are individuals, neither citizens nor subjects. We have lots of rights, but no duties. … France is a hotel, nothing more.” Those sentiments can be easily translated into an American idiom. We are an impossibly divided country, increasingly cold and increasingly libertarian, devoid of a common national feeling; a hotel for the rich with many hovels for the poor and no immediate prospects of becoming a common home for all. And for cosmopolitans who insist on duties to our fellow humans — for Mazzini was also a pioneering globalist, who imagined duties of world citizenship — the challenge is even starker. After the fall of the house of duties, it may not be much of an act of social reconstruction to rewrite their history, and to refute the either/or that relegates them to premodernity, in order to remind ourselves of just how recently the modernization of duties remained the central liberatory project; but it might help. Reflection, too, is one of our duties.

    Between Leah and Rachel

    Osip Mandelstam’s Conversation About Dante is the major Russian work on the great Florentine poet. Ever since it appeared, and perhaps even before it did, we have known that this conversation would turn out to be about something different: about “time and the self,” as another poet wrote. Dante’s optical devices, his mirrors and his loupes, were designed for the intense scrutiny of the fabric of his contemporary world: its decaying weave, which was nonetheless destined inexplicably for salvation. Perhaps this is why his Commedia becomes more important when the possibility of salvation is more remote, or so it seems.

    At the beginning of 1933, Mandelstam arrived in Leningrad to take part in two evenings of poetry arranged especially for him. His evening at the Grand Hotel Europe (then called the European), where he was staying, was attended by Leningrad’s literary beau monde. Anna Akhmatova alone was all but absent: she appeared only fleetingly to hear Mandelstam read and then departed after a brief and almost formal exchange of words with him. They would meet later, and without anyone present, in her room in a communal apartment.

    He had only just mastered Italian and he raved about Dante, reciting whole pages by heart. We talked about the Purgatorio and I read a passage from Canto XXX, the appearance of Beatrice.

    sovra candido vel cinta d’uliva
    donna m’apparve, sotto verde manto
    vestita di color di fiamma viva.

    ……..

    ……….Men che dramma
    di sangue m’è rimaso che non tremi:
    conosco i segni de l’antica fiamma.

    a woman showed herself to me: above
    a white veil, she was crowned with olive boughs;
    her cape was green; her dress beneath, flame-red.

    ………

    ………I am left with less
    than one drop of my blood that does not tremble:
    I recognize the signs of the old flame.

    (Translated by Allen Mandelbaum)

    Osip began to cry. I was terrified.

    “What’s wrong?”

    “Nothing, nothing — just those words, and in your voice.”

    The relationship between Mandelstam and Akhmatova at that point was what we might call complicated. Its complexities continue to occupy Russian literary scholarship, and I am merely summarizing here what has been recently revealed.

    Mandelstam’s views on literature, politics, and life had by now converged to a point where he was compelled to engage with the anxieties and the issues of the day — a high-risk will to action and the building of a new age on new principles. His choice of acquaintances was surprisingly catholic (to the consternation of the reader now as well as his friends back then), including Nikolai Bukharin, the high Bolshevik official and editor who often protected the poet against the regime, and Nikolai Yezhov, later the head of the NKVD, whom he had met in one of the Central Committee’s sanatoriums, as well as Komsomol leaders from the Russian Association of Proletarian Writers. His article in the newspaper Izvestiya in 1929, a contribution to a debate about literary translation, demanded that “the shoddy, pointless direction of production” be destroyed at the root, and that literary initiative be “wrested from the grasp of artisan-entrepreneurs” — he even called for someone to be taken to court for “unprecedented wrecking.” He cast off his own literary past and his accumulated Symbolist capital as so much ballast: “I don’t want to be living off my ‘Mandelstam-ness,’” he wrote to his wife.

    Against this backdrop, his relationship with Akhmatova — who had deliberately and for a long time been shrinking her connection with the literary world to almost nothing, neither publishing nor performing her poems, and hardly writing at all — was extremely important. They were connected by a common history, an intimate friendship and, what was even greater, a linguistic intimacy, which allowed them to “listen to and understand each other” without concession or correction — even if it was often accompanied by mutual irritation. Akhmatova’s position, her preference for the path of apparent inaction and literary and political non-participation, seemed to him to be alternately a temptation and an inappropriate anachronism. He himself was unable to stop “acting, making a noise, and giving everyone the run-around,” as his wife put it.

         In 1933 Mandelstam was reading Dante compulsively — “day and night,” as Akhmatova reports. But only after their meeting, and hearing Dante’s words in her voice, did he return to Moscow and begin work on his Conversation About Dante. He continued writing during his time in Crimea, and when he returned to Moscow he wrote one of only a few poems that year, the openly political and not-for-publication “Old Crimea,” where the “terrible shades of Ukraine and Kuban” put us forcibly in mind of Canto XXXII of the Inferno and the hunger that is worse than grief, in which ice-locked Ugolino is paying the penalty for his treachery long ago. It is possible that what Mandelstam saw in Crimea forced him to reassess his role in public life and, like Dante before him, to demand of himself a different life and a different politics. And he did something drastic about it. The next poem that he composed was his infamous “Stalin Epigram” —

    …his words like measures of weight,
    the huge laughing cockroaches on his top lip,
    the glitter of his boot-rims.
    Ringed with a scum of chicken-necked bosses
    he toys with the tributes of half-men….
    He rolls the executions on his tongue like berries.
    He wishes he could hug them like big friends
    from home.

    (translated by W.S. Merwin and Clarence Brown)

    The poem was a suicidal act, as Mandelstam well knew, and he was consequently preparing himself for death. But even before that anti-Stalinist effusion he wrote this in his Dante text: “It is unthinkable to read Dante’s cantos without turning them to face the contemporary world. They are created exactly for this end. They are devices for detecting the future. They demand commentary in the Futurum.”

    The poems, the small book on Dante, the political protest, as we might call it now — Mandelstam’s need to draw all these into conjunction in a text, conceived and written in a way that would allow “Komsomol youth to sing it in the Bolshoi Theater” — all this was a direct result of his reading of Dante and his new engagement with the horrors of his times, his critical insistence that humanity should be, in Dante’s words, “removed from the state of misery and led to the state of bliss” through the merging of the literary and the political in action. For Mandelstam, the Commedia, in which the poet-protagonist seizes his opponent by the hair in a moment of fury, pulling it out by the handful, was a model of that “self-lacerating rage” so necessary for a writer, on a level with Nekrasov, the great anti-czarist and populist poet of the nineteenth century; and a way of seeing himself alongside these others, like them an “internal outcast,” unable to concede even an inch in argument, flying off the handle, making mistakes and needing a guide, yet still defending the “social worth and public position of the poet.” The historical figure of Dante Alighieri recedes somewhat in this reading: what is important here is the discovery of a common denominator, something that unites him with Mandelstam, his comrade-in-arms, both moving together with a common purpose.

    “Time for Dante,” writes Mandelstam, “is the content of history understood as a single synchronous act, and also its converse: the collective holding of time — by fellow workers, by rivals, and by joint discoverers.” This synchronous collective time in which past and future are brought together and compacted into a unified present, so that it is just a single step from Mandelstam to Dante, shares the temporality of the Commedia, or more precisely its atemporality, the imagined contemporaneity of its figures from different times and different places, in which Ulysses and Guido da Montefeltro are simultaneously tormented by the same flame and address the same listener. And this listener, it is important to remember, is living, and so can still act on what he hears to change his own and others’ fates.

    Two days after the reading at the hotel, Mandelstam was invited back to Akhmatova’s apartment. It was expected that he would read new poems in this intimate setting, but the evening was a failure: the invited audience had all been arrested the day before. Akhmatova made her apologies: “here’s some tea, and here’s the bread, and I’m afraid the other guests are all in prison.” Actually existing in a concrete moment of history throws all plans and pre-arranged positions into doubt: the vita activa and the vita contemplativa are intertwined and disarmed in a strange way — they reflect each other like Leah and Rachel in their mirrors in Canto XXVII of the Purgatorio. In the face of misfortune the difference between them is erased, leaving only the common features.

    Today the world is faced with a new misfortune, a planetary medical misfortune, and as it attempts to cope with it Leah and Rachel are once again difficult to tell apart. Perhaps the clarity of the difference between them is a feature of peacetime, when the individual, rather than her circumstances, can choose her identity. Mandelstam invoked Leah and Rachel, in Dante’s account of them, in poems that he wrote in 1934, in which their qualities and their deeds complement and equal each other: “Rachel gazes into the mirror of being, /And Leah sings and weaves a wreath.” As tradition requires, action and reflection are here placed in opposition, and neither is given preference. Mandelstam closely follows Dante’s example, but it is noteworthy that in both works song is considered part of the active life, as an act in itself. If there is a poet in the Leah-Rachel dyad, it is of course Leah (who was, we recall, “weak of sight”). Rachel has clear sight and is silent, whereas short-sighted Leah sings.

    Dante does not encounter them in hell or in heaven; they appear to him in a dream on the threshold of Earthly Paradise after passing through the line of fire, his meeting with Beatrice assured. The dream lasts for only fifteen lines, but still some words and objects are repeated with the insistence of a reflection. We do not see Rachel, we only hear about her, we hear her reflection, in the words that Leah sings about her: Rachel has beautiful eyes, belli occhi, and — in a telling symmetry — Leah has belle mani, beautiful hands, which she uses to adorn herself in order to look in the mirror that we do not see. The effect is of a hall of mirrors, constantly and continuously lengthening: we are being told a dream in which a girl appears and describes another girl who stares at her own reflection. Both girls, it seems, hold mirrors, but the glass objects have different names, specchio and miraglio, and they are as unlike each other as future and present — Rachel looks only at her mirror, whereas Leah’s, in contrast, has a deferred purpose, it is waiting for her to appear in it, adorned. Since Leah’s joy lies in action (just as Rachel rejoices in looking), her mirror is always empty, like a waiting room. Rachel is sharper-sighted today, whereas tomorrow Leah is barely discernible. Sometimes it feels to me as if their mirror, just like their beauty, is single, and shared between them.

    There is one other mirror in The Divine Comedy, but no one looks in it: it is the icy surface of Cocytus, the lowest point of hell, the focal point of sin and despair, a place where the future is cancelled and the present is unbearable. Unlike other imaginings of the contraption of hell, Dante makes the ninth and final ring a place of extreme cold. There is a possible precedent for this: Ovid in his late books describes the land of his exile as a non-place where people speak a non-language (just as Dante, in the last cantos of the Inferno, finds that speech falls short and turns to the Muses to help him acquire that barbed rasp of language befitting the place of wailing and gnashing of teeth). The main features of this non-place are a kind of inverted stability: the cold and danger are unrelenting, the impossibility of conversation or understanding is eternal. The frozen waters of Tristia, Ovid’s poems of exile, in which the half-dead fish cling to each other like Dante’s sinners, are directly related to Cocytus: more like clear glass than water, the mirror in which one sees neither oneself nor one’s interlocutor. (Tristia was also the title of Mandelstam’s second volume of poetry.)

    Dante begins gathering up the winter metaphors long before Canto XXXII, as if the cold was stiffening and congealing around him as he moved, and the weight of sin accumulating. The ice on the actual lake is so impenetrable that even if the rocky crags crashed down on its thickness it would fail to crack. The souls of traitors are locked there, those who betrayed their families, their country, their friends, and their benefactors — and there are thousands of them, their faces bent down. The cold has dehumanized them: they chatter like storks, they have muzzles. They are also blind: their perpetual weeping has left their eyes covered by a cataract of ice and their eyelashes are frozen together. The surface of Cocytus, the scholar Teodolinda Barolini tells us in her commentary, is a huge mirror in which we can see the evil in ourselves. And all those who are tormented in this circle of hell become this icy mirror, though they cannot see themselves or each other. Only Dante and Virgil can see, and the first sinner to address them reproaches them for their sight: “why eye us so, as though we were your mirror?” Sight is an object of envy, and an inaccessible privilege: the ability to see oneself and one’s sins in another, and to attempt to mend one’s life. On many occasions Dante speaks of hell as a blind kingdom and of his journey back up as a path towards sight, but it is here that the visual parallel is fully realized. Strangely enough, I read these terze rime as a grotesque commentary on the dream of Leah and Rachel, as if the unseeing souls gathering before the icy mirror were simply demonstrating the weakness of the vita contemplativa in a situation in which the vita activa has been denied to them.

    There have been many descriptions of what has happened to time since our pandemic began: it stuttered, it sped up, it slowed down, it froze; what could once be achieved in half a day now takes two days; cats that needed feeding once a day now demand food every few hours; flowers stand for weeks in their vase or wither overnight. Time is deformed, abbreviated, protracted, twisted. It behaves in an odd way, and so do we. Every little chore (cooking, cleaning, walking the dog) looms large: the usual tasks are de-ritualized, broken down, protracted. And the border between one day and the next has been erased: each day is counted only insofar as it is another unsuccessful attempt to leap the barrier, to escape the snare.

    Our sense of endless daily repetition, of immobility and entrapment, seems almost sacrilegious if we admit its generic similarity with what is described, say, in the memoirs of the imprisoned and the victims of the Siege of Leningrad. Yet it is the same: the reactions are warranted even when the circumstances are more obscure and more sparing of us, and it remains for us to understand what exactly we are reacting to. If we are allowed books, the internet, walks, discussions, why am I left with such a feeling of stifling, crippling confinement?

    The powerful sense of deprivation is clearly linked with an intense concentration on the present, on a tyranny of the present, which is unnatural for a temporal creature, who normally lives also futurally. But in pandemic time much hope and emotion have crossed the line back from the future and are condensed in the present, which bends under its own weight, allowing nothing to be done, nothing to be kept in mind. This present is like a stronghold, a little fortress, surviving to the detriment, and at the cost of, tomorrow. Any work intended for tomorrow — more broadly, any work that distracts you from the feeling of living in the here-and-now — is inimical to one’s survival in the present and is rejected like an incompatible blood type. Tomorrow steals from today and today steals from tomorrow. Pandemic time has a beginning and so it should have an end, but the end will not permit itself to be planned or forecast. While we live within it we are inside an implant, an insert, a foreign body in the main flow of time. I know that sooner or later, with or without me, this parcel of deformed time will end, but I cannot influence this happening. I feel the need to listen hard to it, to try to ascertain how I can live cheek-by-jowl with it in some proper, natural way, so as not to burn up or become suddenly old in this unfamiliar current.

    The sensation of reanimation, of resurrection from the dead, touches one with every real act that is accomplished, even the most insignificant: a shower, or the smell of lime trees in a Moscow yard, is enough to make everything within me awaken and sing with the joy of return. But still the numbness is stronger, and no work, no distraction, can prevent the constant stun of reverie, careful listening, the transfer back into an uncertain stasis. Lidiya Ginzburg, in her account of the experience of the siege of Leningrad, called it “suspension”: “a person begins to realize with astonishment that, sitting in his apartment, he is in fact suspended in the air […] and above and below him other people are suspended in the same way.” This collective life in extremis, the simultaneous presence of everyone with everyone else, was rarely felt in the time before the catastrophe — or to put it more precisely, special locations in which one could feel the unity were set aside in the common space: forums, avenues, town parks and squares, where everyone was visible to everyone else. But in the home, in one’s own little box, everything was carefully arranged to prevent us from considering the presence of those nearby, our neighbors were out of hearing range, we were ourselves never seen from a stranger’s window. Now the presence of this absence overwhelms me and I cannot resist it. In every apartment of the block opposite mine people are like flies suspended in amber, no longer going to work, spending hours at the breakfast table, going to bed too early or too late, fated to unconsciously repeat each other’s movements, voices droning in an unharmonious but rhythmical choir.

    In his essay about Rousseau, W.G. Sebald recounts going down to the water on the little Swiss island where Jean-Jacques was so happy that he asked to be banished there, where he would see no one and never leave. Two hundred years later, in the deep silence, Sebald stood one night at the lakeside, until the water before him, without its darkness abating, became utterly transparent, revealing row upon row, whole terraces, of motionless fish, large and small, swaying, asleep, one on top of another. Life in a pandemic cuts transversely through a cross-section of piled layers of dark, slightly frozen lives — the opposite of our customary longitudinal living, with its constant horizontal yearning pull forwards: To Moscow! To Italy! Tomorrow! Time’s density is altered: it was a flood, surging on, dragging everything with it, and all of a sudden it congealed, became abstract and metaphysical, a barren boggy land. Contemplation and action hardly know themselves in the empty mirror.

    Our way of reading the pandemic, and the helix of history in which it is wound, is to instinctively anthropomorphize, to overlay. Beyond the struggle for sheer survival, there is the aspiration to squeeze meaning from what is happening — to turn its face, as Mandelstam might have said, to the contemporary world. The desire to believe that seeing is still possible, to perceive the beginning of a new world in a place of exile, to discern a message to mankind in a virus. But the message on the screen cannot be made out; it trembles and glitches. I have the sense that this virus-touched reality operates within three different states, three ways of understanding what is happening, each linked to a sensation of time. And from inside the tiny gaps between them I will attempt to note down how they work; the constant rapid movements between each state are a sketch, a flicker, a whirl of snowflakes, entrancing and enervating us.

    The first of these states is the tendency to see whatever happens to humanity as an opportunity that needs to be taken advantage of. The freed-up time — or perhaps more accurately, the unforeseen time which has appeared in the place of the old, cancelled time — is seen as a resource: a new allocation of opportunity that must be mastered. It is of course understood that this is a different and unfamiliar space (and that in any case we have none of the tools needed to work with it in the usual scheme of things). It is perceived as an alternative space, as “down time”: it can be filled with what is usually neglected. Plans and promises, the public and the private, weight loss, learning a language, writing a novel — all these are a form of emigration or downshifting, in which we are given another chance to live a different, alternative life in the overflow of the pandemic. The fact that this is not an unalloyed benefit — this freed-up time comes at the cost of illness and the death of others, as well as the permanent sense of danger — gives these plans a hectic and unforgiving intensity: not achieving them becomes a personal failing, a moral failing.

    The second state is the deliberate suppression of any awareness of the particular character of this out-of-joint time. This willful denial can be interpreted as an attempt at a kind of emotional violence, an inner revolt against that reality that must be forced back within its banks. This state of mind, this dogmatic need for the quotidian, demands that time be made to submit: it must be experienced as quite ordinary, as if nothing had happened. There is a wide spectrum of Covid-dissidence, from denying the pandemic to various behavioral practices that resist its constraints. There is also its politicization, based sometimes on the worsening of suffering for certain people or groups, until they reach the point at which they are compelled to go out onto the streets and protest. In this scenario the disease comes to seem comparatively unimportant, not necessitating a reassessment of time or a renaming of the age as unusual; the imperative is for time to obey the current order and flow as it should.

    The third state, the easiest to live with, the most tempting and the most traumatic, is to do nothing. Pandemic time is lived in the knowledge that it is unproductive time, lost time. We may reconcile ourselves to this loss if we see it as a sacrifice. You didn’t catch Covid, but you might have done; your nearest and dearest didn’t catch it, but they might have done. At this very moment someone else is ill. And what might have been given to illness is instead lived, with a particular sense of ceremony, as if it were a sacred and special portion of life from which no practical use can be derived. Instead you wait and pass through it and come out of it changed. (In the late Soviet period a common feature of any conversation with school-age children was the reminder that they were living in place of soldiers who had died in the Second World War, and therefore did not completely own their lives and had to meet certain higher expectations in how they led them.)

    Falling out of the frame of ordinary life is understood as a fall from time into a state of timelessness, a zone of pure existence, where the laws of cause and effect have not been annulled but gently suspended. Remaining steadfastly in the present turns out to be an occupation in itself, but since this has no intelligible purpose (why? what for? is it a denial of life or an enhancement of it?) we continuously transgress: consciousness skips forward into the first or second state and we experience something akin to an involuntary internal tic, in which my thoughts jolt between all these states while I remain all the time more or less stationary. But more often than not I find myself in the flickering gap between the zones, falling out of times, both pandemic and normal. The past is frost-damaged. The present is motionless. The future has been postponed. Leah and Rachel have fallen asleep and between them there is an empty mirror, and no one to see anyone in it.

    We began with two apparently opposing models of literary and social behavior, and two close friends who had chosen these models for themselves. The fact that they also turned out to be great poets is a minor detail. I want to tentatively suggest a reason for Mandelstam’s fit of weeping in Leningrad, when Akhmatova read to him of how the dead Beatrice reproaches the living Dante for forgetting her and chasing transient, earthly desires: “making a noise, and giving everyone the run-around.” Dante himself collapses at her reproof for an especially unforgivable deed, stung by the nettle of remorse. Just over a year after his tears (“just those words and in your voice”) Mandelstam was arrested for his anti-Stalin verse, and during the search of his home a record of Hawaiian guitar music played in a next-door room, while his wife, Nadezhda Mandelstam, and Akhmatova sat side-by-side, “pallid and numb.” In the face of catastrophe, the contemplative life and the active life always turn out to be equals.

    Dante describes Mount Purgatory, and its summit, Earthly Paradise, as the obverse of non-existence, non-place, non-time, the monstrous funnel of Hell, from which even the Earth ran away so as not to share its space with the fallen angel Lucifer. They are two sides of a coin: the yawning of complete absence, named evil — and its opposite, where there is suffering but also the chance of overcoming it. When Dante emerges under the stars again, back from the ninth circle and its dead air of non-existence, he cannot stop looking and listening: flowers are renewed, their sweetness is returned, and with that sweetness the desire to desire — the hope for salvation, which is beyond the capacity of those in Hell. It seems that there is nothing harder for mankind than the desire for that desire, and in this respect our own condition is not so very different from Dante’s in his opening canto. In the words of the Russian poet and Dante scholar Olga Sedakova, the poet

    is the one who desires
    what everyone desires
    to desire

    Learning to desire is an inescapable lesson, a lesson for both Rachel and Leah, for anyone who looks into the mirror of being, both the singers and the silent.

     

     

     

    This essay was translated by Sasha Dugdale

    The First Virtue: On Ambedkar

    The great historian C. Vann Woodward, author of The Strange Career of Jim Crow, a book that Martin Luther King, Jr. described as the “historical bible of the civil rights movement,” recounted in his autobiography how the writing of the book came to be shaped by an unusual encounter:

    A new and extraordinary foreign perspective came my way during the Second World War, while I was on duty as a naval officer in India. With a letter of introduction in hand, I sought out Bhimrao Ramji Ambedkar, acclaimed leader of India’s untouchables and later a figure of first importance in Indian constitutional history. He received me cordially at his home in New Delhi and plied me with questions about the “black untouchables” of America and how their plight might be compared with that of his own people. He also took time to open to me the panorama of an ancient world of segregation by caste to show me how it appeared to its victims.

    That Woodward sought out Ambedkar was not surprising. For years preceding his visit, there had developed a lively intellectual and political tradition in both the United States and India comparing and contrasting caste oppression in India with racial oppression in the United States. This conversation involved major figures such as W.E.B. Du Bois and the Indian nationalist Lala Lajpat Rai. Woodward correctly remarked upon Ambedkar’s larger status: he was not just an icon for “untouchables” but also a major constitutional figure, as if he were Du Bois and Madison rolled into one.

    But the intellectual distinctiveness of the problem that they discussed was not simply the analogies that might exist between how race and caste functioned as systems of oppression. The originality of Woodward’s book on Jim Crow was that it argued that the mechanisms of racial exclusion persisted and were reinvented after the Emancipation Proclamation. Ambedkar was also keenly attuned to the fact that while India was embracing political democracy, the mechanisms of exclusion and subordination of “untouchables” would not be overcome simply by granting formal political rights. So this encounter was not just about a comparison of two systems of oppression. It was about that haunting question: Why do the two grandest experiments in democracy, India and the United States, find it so difficult to overcome the original sin that marks their founding, race in the United States and caste in India?

    A part of the answer, of course, is that a commitment to equality is always a grudging compromise at best, subordinate to other values. In his book What Congress and Gandhi Have Done to the Untouchables, which appeared in 1945, Ambedkar offered this biting judgment:

    Mr. Gandhi’s attitude towards Swaraj [self-governance] and the Untouchables resembles very much the attitude of President Lincoln towards the two questions of the Negroes and the Union. Mr. Gandhi wants Swaraj as did President Lincoln want the Union. But he does not want Swaraj at the cost of disrupting the structure of Hinduism, which is what the political emancipation of the Untouchables means, as President Lincoln did not want to free the slaves if it was not necessary to do so for the sake of the Union.

    Just a few sentences earlier he had written of Lincoln, “Obviously the author of the famous Gettysburg oration about the Government of the People, by the People and for the People would not have minded if his statement had taken the shape of the government of the black people by the white people provided there was Union.” And with some relief Ambedkar added, “Lincoln was at least prepared to emancipate the Negro slaves if it was necessary to preserve the Union. Mr. Gandhi’s attitude is in marked contrast. He is not prepared for the political emancipation of the Untouchables even if it was essential for winning Swaraj.”

    This caustic assessment of Lincoln and Gandhi may not be entirely fair. But it took a bracing moral clarity and intellectual self-confidence to be able to make such a judgment at all. If one looks at history through the eyes of those who were enslaved, marginalized, or oppressed, there is only one inescapable conclusion: that justice is the last thing on men’s minds, the weakest passion that moves their souls. Even great leaders, let alone ordinary men and women, would readily sacrifice justice for pretty much any other value: God, country, success, custom, privilege. The most that history affords by way of consolation is that occasionally there arises a leader, a Lincoln, who will not stand in the way of justice. But even he went down the path of justice only after all other options had been exhausted. In Ambedkar’s view, for Lincoln it was Union first, justice second; and for Gandhi, despite his personal sacrifice and courage, it was Hinduism first, justice perhaps not at all. For justice, humanity would need an altogether different kind of emancipator — someone for whom justice was the first virtue, never subordinate to any other value.

    In a tribute to one of the few Indian leaders he admired, the social reformer Mahadev Govind Ranade, Ambedkar reflected on what makes a great leader. After a critical survey of different conceptions of greatness, he proposed a test: “A Great Man must have something more than what a merely eminent individual has. What must be that thing? A Great Man must be motivated by the dynamics of a social purpose and must act as the scourge and the scavenger of society. These are the elements which distinguish an eminent individual from a Great Man, and constitute his title deeds to respect and reverence.” And with those words Ambedkar described himself.

    Ambedkar was one of the most important emancipators of the twentieth century. He is best known as an unrelenting champion of the rights of Dalits. Ambedkar popularized the use of the term “Dalit” — literally, “broken people” — to give a new political identity to groups most oppressed and marginalized by India’s caste system. These were lower castes, and in some cases they were groups outside the caste system altogether, contact with whom defiled upper castes. In the alleys and the byways of Dalit neighborhoods, Ambedkar is venerated and memorialized in thousands of iconic statues: a man in a Savile Row suit carrying the Constitution of India. He is the hope of every oppressed group in India seeking liberation. His name is a byword for justice, and his photograph has now become its central political iconography, invoked in every struggle for justice.

    Yet Ambedkar’s significance far exceeds the specific cause of Dalit emancipation that he espoused. He has become more central to conflicts over the soul of modern India than any other figure, including Gandhi and Nehru. And his life and thought have become increasingly central to the fate of societies and states beyond India’s borders. Ambedkar was deeply attuned to the ways in which hierarchies of power mutilate human dignity. He was one of the most brilliant practitioners of the art of unmasking power, seeing its operations at work in the very categories in which we think. But unlike many others who profess this critical art, he never wavered in his commitment to the ethical standpoint, the imperative of human dignity, and the possibility of governing public affairs by public reason. Far away from the Western centers of liberalism, he was the most creative interpreter and consistent practitioner of what we refer to as the Enlightenment project, now much frayed: a world governed by liberty, equality, and fraternity.

    Bhimrao Ambedkar was born on April 14, 1891 in the little garrison town of Mhow, near the city of Indore. His family belonged to the Mahar caste, a sub-caste among untouchables. His original name was Ambavadekar, itself derived from his family village of Ambavade in Maharashtra. But in a society where caste was destiny, being born into the Mahar caste at least afforded Ambedkar the opportunity of a possible escape. The Mahars were significant in two respects: they were the most numerous Dalit caste in what is now the state of Maharashtra, and many of them had acquired a modicum of social mobility by enlisting in the British Indian Army. The British Indian Army required its recruits to be educated. It is a measure of how deprived Untouchables were of education that, by 1911, the Mahars in Bombay Presidency had achieved a literacy rate of barely one percent. All his life Ambedkar thought of the army as the one institution that afforded a sliver of social mobility for Dalits, and frequently petitioned for recruitment amongst the Mahars to be expanded.

    Ambedkar could study in cantonment schools provided by the Army, and graduated from Elphinstone College with a B.A. in English and Persian. He was denied formal education in Sanskrit because of his caste. By all accounts his blazing intellect impressed everyone who encountered him. His professors petitioned the Maharaja of Baroda to finance his studies. He was granted funds to study at Columbia University, on the condition that he serve the princely state of Baroda on his return. In the United States he was struck by a paradox: he was freed from the humiliating constraints of caste in India, but his proximity to Harlem reminded him that the United States had perfected its own system of exclusion.

    At Columbia he studied with John Dewey, whose books he copiously annotated, and who became a formative intellectual influence, especially on his thinking about democracy. For Ambedkar, democracy was not to be identified with popular sovereignty. It was rather the building together of a common moral and practical life with others, in a kind of fraternity, through free criticism and discussion. But he went beyond Dewey in recognizing that the actual distribution of power in society would be consequential for determining whether such a project was possible at all. There are Deweyan strains in the way Ambedkar argued for equality. He was not interested in giving a metaphysical argument for equality. He had the unerring suspicion that if you were demanding an argument for equality, some metaphysical basis for it, you were most likely not interested in instituting it. Equality had to be a postulate. The best one could do was paint a picture of it — of a society founded on relations of equality and freedom that was less likely to mutilate and damage human beings than societies premised on other postulates.

    Ambedkar went on to earn two doctorates, from Columbia University in economics, where he studied the evolution of provincial finance, and from the London School of Economics, where he studied monetary economics with a dissertation on the problem of the Indian rupee. It is a measure of his intellect that there was virtually no discipline that he studied in which he did not open a new debate. Yet academic accomplishment was not a sufficient guarantee of social esteem. As a child he had been subject to the humiliations that were routine for Dalit children: being deprived of access to water at a station, being asked to stand in a corner of railway platforms. Overcoming the burden of caste would require more than two PhDs. When he returned to Baroda he could not find housing, and had to represent himself as a Parsee to find a place to live. Files were thrown at his desk by his office subordinates, who could not accept him as a social equal. He registered at the Bombay Bar, but his caste status deterred many potential clients. He had to supplement his income by teaching.

    Ever since his days at Columbia, Ambedkar was analyzing caste in all its dimensions, and to this day he remains one of its leading sociologists. In 1920 he founded a periodical called Mook Nayak, or The Silent Hero, which became the site of penetrating discussions of Dalit dilemmas. In 1927 he led the now famous protest march at Mahad, where Dalits claimed the right to draw water from tanks and to enter temples. Ambedkar himself was ambivalent about the issue of temple entry: it would legitimate the Hindu temple as an institution, even as it would assert Dalit self-respect and protest against exclusion. But these movements were vital. In the same egalitarian spirit he became the foremost interlocutor in the debate over representation in modern India. The British had, since 1905, under pressure from the Congress Party, introduced limited self-government in India. But how was representation going to be organized? Ambedkar became the representative of the “Depressed Classes” in these negotiations and began his lifelong battle with Gandhi.

    He continued to produce writing of scorching power, including Annihilation of Caste, in 1936, a book so radical that he was not allowed to deliver the lecture on which it was based. Its force has not diminished to this day. His stature grew, and by the 1940s he was internationally recognized as one of the great emancipators. His relationship with the Congress Party remained adversarial, and it often sought to marginalize him. But in 1947 he was made Chairman of the Drafting Committee of the Constitution of India, where he made his most enduring mark as the greatest of modern legislators. He piloted a bill to reform Hindu law, but then resigned from Nehru’s cabinet, increasingly frustrated by the conservatism of the Congress Party. Ambedkar had formally dissociated himself from Hinduism in the 1920s, but his last political act was to convert to Buddhism, which he reformulated as a religion of ethics and reason, a font of Enlightenment.

    Ambedkar is still at the center of the conflict over the soul of modern India. He is virtually the only figure left from India’s great founding generation to whom almost every political persuasion pays homage. Gandhi’s political standing in India has been battered by three historical forces. For Dalits, Gandhi’s emancipatory vision was not radical enough; it did not grant them the full measure of political agency or social standing that was rightfully theirs. For Hindu nationalists, Gandhi is the foil against which they define themselves: their violence against his non-violence, their intolerance against his religious pluralism. Finally, Gandhi may have exhorted Indians to keep the face of the poorest person in mind when formulating policy, but in a more general vein, for the post-liberalization generation in India, Gandhi’s anti-modernist caution about industrialization is regarded as a formula for keeping India poor. Acquisitiveness is the new flavor in modern India, wealth the path to greatness, and a Darwinian competition the route to strength. And Jawaharlal Nehru’s political star has also dimmed. His socialist leanings are blamed for India’s economic ills. His internationalism is dismissed as woolly idealism untethered to India’s national interests. His achievement of building an extraordinary democracy under difficult circumstances is largely overlooked in an era contemptuous of his Congress Party, and of democratic institutions more generally.

    Ambedkar is the only figure for whom every political party has to offer at least outward obeisance. This is in part because the Indian Constitution that he so profoundly influenced still remains the central element of the social contract of modern India. It is a measure of his success that, at least as a matter of principle, if not practice, Indian democracy must measure its success by its ability to include Dalits into its political order. Even the Hindu nationalist party RSS includes Ambedkar in its morning invocation of great luminaries who have shaped India.

    But this veneration of Ambedkar — his mainstreaming — has in some ways obscured his radicalism. By elevating him as an object of veneration, we avoid looking at him in his totality. To get a full measure of Ambedkar’s moral radicalism, it might be helpful to situate him in relation to currents of thought that have defined modern India. For a start, there is his relationship to Gandhi. “Gandhi was never a Mahatma. I refuse to call him a Mahatma.” This was Ambedkar’s summary judgment on Gandhi in an interview given to the BBC in 1955. He added, “As I met Mr. Gandhi in the capacity of an opponent, I know him better than most people, because he had opened his real fangs to me, and I could see inside the man.”

    Gandhi was not customarily described in such words. The proximate reason for their political disagreement was the question of Dalit representation. How should an emerging system of representative government ensure that historically marginalized and oppressed groups get their fair share of political power? How do we design a democracy such that no group fails to recognize itself as part author of that democracy? Ambedkar’s thinking on democracy was far ahead of its time in being preoccupied with this question in the context of minorities all across the world. Ambedkar argued that Dalits should be treated as a minority. They should be granted recognition and representation as a separate electorate. Under a scheme of separate electorates, Dalits, as a community, would choose their own representatives, a privilege that had been accorded to Muslims under the British Raj. For Gandhi, however, conceding separate electorate status to Dalits would imply that they could not be part of the Hindu order; it would also vitiate his capacity to represent Dalits. For Ambedkar, by contrast, making the election of Dalits dependent upon the preferences of non-Dalit voters was a recipe for disempowering them. Gandhi went on to declare a fast unto death in opposition to separate electorates. He broke his fast when Ambedkar agreed to compromise and signed the Poona Pact in 1932. Ambedkar gave up the principle of separate electorates, in return for which more seats were reserved for Dalits, though they would still be dependent on the preferences of non-Dalit voters for their election. The principle of representation that Ambedkar defended was upheld, but his principle of separation from Hinduism was not.

    This encounter with Gandhi’s “fangs” led Ambedkar to mount a deeper critique of Gandhi’s methods, especially his thinking on ends and means. Gandhi’s mode of political action through fasting was, in Ambedkar’s view, deeply immoral. In the instance of the fast that led to the Poona Pact, it was undertaken for the immoral end of depriving Dalits of full empowerment. It was an exercise in coercion instead of an attempt at persuasion. Ambedkar suggested that Gandhi’s non-violence was based on an inwardly directed psychological coercion — a charge that Gandhi might not have disputed. It is not an accident that in a speech in 1948, as India’s constitutional government was being promulgated, Ambedkar reminded his audience that constitutional government is incompatible with Gandhi’s idea of satyagraha, which was a unilateral assertion, a narcissistic belief in one’s own truth, without any acknowledgement of the reality of difference and the need to engage in public reason. For Ambedkar, constitutional methods were a truer and more radical expression of non-violence.

    In some ways, this put Ambedkar at odds with the grammar of agitational politics that Gandhi’s idea of satyagraha bequeathed. It takes immense courage not to convert a deep sense of oppression, or a well-founded scepticism about the motives of ruling classes, into a call for cathartic or agitational politics. Some Indian radicals have rued Ambedkar’s faith in constitutionalism. Ambedkar was not a committed pacifist — but when a marginalized community has been deprived of even the means to defend itself, founding a politics on violence would victimize it more than the ruling classes it sought to displace. In many ways it was Ambedkar as much as Gandhi who tied the Indian project to a form of non-violence, by committing the Dalits to constitutionalism.

    Even in the midst of the controlled anger of a book such as Annihilation of Caste, Ambedkar could write that “reason and morality are the two most important weapons in the armoury of a Reformer.” It is a remarkable fact that of all the groups seeking to exercise political power in India, Dalits have shown more faith in the electoral process and the constitutional structures than any other group. The fact that Ambedkar was seen as one of the principal architects of the constitution — an outcome itself made possible by Gandhi’s urging that Ambedkar be included in the drafting committee — made the Indian constitution a central part of Dalit political identity. Through Ambedkar they had authored a new republic.

    But Ambedkar’s worries about Gandhi ran much deeper, and these go to the heart of the social fault lines in contemporary India. Gandhi was resolutely against untouchability, and courageously transgressed all the taboos to fight it. He declared that if untouchability was a part of Hinduism he would openly rebel against Hinduism. But for Ambedkar, Gandhi’s position was ultimately duplicitous. In Gandhi’s fight against untouchability, the focus was entirely on the agency of the upper castes. Gandhi’s construal of untouchability as a problem for the upper castes was in some measure a ruse to avoid sharing power with the lower castes. Gandhi was appealing to upper castes to cease practicing untouchability, so that they could be purified of their sins. He was less interested in creating more equal mechanisms of sharing power that would grant Dalits their own political agency. In fact, Gandhi’s act of naming untouchables harijans, or children of God, was the ultimate act of upper caste hubris. It was patronizing, and more importantly this act of naming re-enacted the ultimate sin of caste privilege: it took away the power from marginalized groups to define their own identity on their own terms. This is a liberty for which many Indians are still struggling.

    Fundamentally, for Ambedkar, Gandhi was a conservative. He wanted to abolish untouchability so that he could save caste. The abolition of untouchability is not the same thing as the abolition of caste. And although Gandhi’s views evolved, Ambedkar was mostly right. More generally Gandhi was emblematic of the modern Indian approach to reform. There has been a political consensus on the abolition of untouchability; much less so on the abolition of caste itself. For Ambedkar this was like wanting to abolish the ugliest and most oppressive edge of a social structure without transforming the structure that had given rise to it. And it is here that Ambedkar launched his most radical and discomforting intellectual assault on Hinduism: for him, it was the root from which the evil sprang.

    Against Gandhi, Ambedkar claimed that violence was central to the constitution of Hindu society. Violence was not an aberration, a flotsam that could be cleared up to reveal the bright and placid waters of an ideal Hindu society beneath. It was central to its identity and its functioning — as we would now say, a feature, not a bug. There is no skirting around the fact that for Ambedkar justice required declaring a war of sorts on Hinduism. As he wrote to Gandhi, “I would like to assure the Mahatma that it is not the mere failure of Hindus and Hinduism which has produced in me the feelings of disgust and contempt. I am disgusted with Hindus and Hinduism because I am convinced that they cherish wrong ideals and lead a wrong social life. My quarrel with Hindus and Hinduism is not over the imperfections of their social conduct. It is much more fundamental. It is over their ideals.”

    This radical declaration is what makes Ambedkar so central to contemporary struggles. The project of achieving justice was not simply a matter of reforming a tradition, making it live up to its ideals. Justice would require the intellectual repudiation of a tradition. Ambedkar’s interpretations of Hindu texts leave the reader defenseless and gutted. Many of his theses are acute in their sociological insight and historical penetration. He rejected the theory that caste oppression has its roots in Aryan invasions and the subjugation of native populations. He rejected all race-based explanations for caste. He was particularly scornful of functional explanations of caste, since caste involved an imprisoning hierarchy of functionaries, not functions. Whichever way we cut it, material or functional explanations could not by themselves explain the peculiarity of caste: it was, at base, a diabolical series of representations imposed by a priestly class — an act of power.

    Caste, in Ambedkar’s analysis, was self-perpetuating in two ways. It was self-perpetuating through its denial to lower castes of all the means of advancement that give individuals any standing in society: political power, wealth, and education. He wrote that “to deny freedom of opportunity, to deny freedom to acquire knowledge, and to deny the right to arms is a most cruel wrong. It mutilates and emasculates man. The Hindu social order is not ashamed to do this.” He contended that Hindus had acquired a reputation for humaneness precisely because they had perfected the most insidious form of social control — one that did not require exterminating peoples, but merely denying them opportunity.

            

    Caste worked by creating a series of gradations in society where adjacent classes oppress each other. It acquired its power through psychological mechanisms. In a system of hierarchy, so long as a group had another group in relation to which it was superior, it would continue to perpetuate the system. The alchemy of caste came from its fine gradations: even among untouchables some groups were superior to others. Ambedkar is unsparing in his criticism of the casteism of the untouchables themselves. The principle of hierarchy is mimetically reproduced throughout the entire system. This was the biggest obstacle in creating solidarity against it.

    Rather than frontally confronting the reality of caste, almost all Hindu engagement with caste begins with an apologia of some sort or the other. Caste was functional at some point, Hindus intone; and the social reality of caste did not correspond to the ideal, as if caste could ever be a justifiable ideal. The claim often made in response to Ambedkar that historically there was a lot more flexibility and mobility in the caste system is usually presented as an apologia for the system. In the end Ambedkar understood that even the slightest defensiveness on caste is a ruse to blunt and extenuate its sheer vileness. He understood more clearly than most that the first response to the unmasking of deep hierarchies is a kind of evasiveness — it is the rare person who looks the evil of casteism, of racism, squarely in the eye.

    Yet Ambedkar’s reading of Hindu scriptures forced an even more disquieting question. That these texts defended caste was not news. What was more striking was Ambedkar’s claim that the rich philosophical and soteriological arguments of these texts could not be understood outside of the context of caste. Modern Hindus might try to reinvent the Bhagavad Gita as a deep meditation on the nature of the self, or as an ethical reflection. But none of these arguments would make sense without assuming the reality of caste. Even those texts that assert the equality of all castes do so only in the spiritual realm, or at best abstractly. Ambedkar was Deweyan enough to recognize that ultimately the worth of ideas was proven by the kind of concrete social order that they created. Abstract affirmations of equality turn out to be often quite compatible with oppressive power structures.

    And worse still, they had an unhealthy effect, either by giving the ruling classes the satisfaction that they had instituted equality, or by serving as religious propaganda to instruct the “Depressed Classes” that their equality was being affirmed in the eyes of God or in the abstractions of a philosopher. If you are brutally honest about the crisis of Indian intellectual traditions, you have to acknowledge the fact that this crisis has its roots not in Western delegitimization, as potent as that might have been, but in the eruption of the “social question.” With what degree of conscience, with what act of good faith, can you defend an ancient and indigenous intellectual tradition at whose core is an oppressive, hierarchical, and segmented social system? What does one make of a tradition whose intellectual radicalism almost always ends up serving the ends of social conservatism in one form or the other?

    This is not the place to settle the argument over the relationship between the transcendental and the social. A plethora of Hindu reform movements have tried to rescue Hinduism from caste, to detach its metaphysical, experiential, and soteriological ends from the taint of this spectacular injustice. But Ambedkar was clear that the abolition of caste required the abolition of Hinduism. Ambedkar’s project was the biggest and most daring act of systematic unmasking of any civilization that anyone has ever seen. The psychological forces that he detested and denounced are now being played out in violent forms in contemporary India.

    Ambedkar’s critique is made all the more disturbing by his psychological, almost Nietzschean excavation of modern Hindu identity. He is unremitting in peeling away the layers of resentment that in his view went into the construction of that identity. For instance, he was among the first to reflect on the paradox of what he regarded as the violence of Hindu vegetarianism. As he saw it, the Hindu prohibition on beef was not rooted in an ethical imperative. Rather it stemmed from a nausea toward those who dealt with dead cows, especially untouchables, but increasingly Muslims as well. Behind the solicitude for the cow lay a visceral hatred of meat eaters. The professed gentleness towards the cow was merely a sublimated form of cruelty toward human groups. It had nothing to do with non-violence or a horror of slaughter.

    Ambedkar was also among the first thinkers to link the question of gender violence with caste. In his early essays, he rightly pointed out not only that endogamy was central to caste, but that the control of women, widows, and unmarried girls in particular would be central to the perpetuation of caste identity. As he put it, the problem of caste “is the problem of surplus men plus surplus women.” The regulation of women was central to caste and to any social hierarchy. Patriarchy and caste were deeply intertwined.

    In contemporary India it is difficult not to acknowledge the power of Ambedkar’s analysis of communal relations between Hindus and Muslims. From a position of marginality he could see that the leaders of both communities were making self-serving claims. His book Thoughts on Pakistan, which appeared in 1945, remains the most bracing analysis of the problem. He dismissed the sentimentalists who claimed that India had evolved a composite civilization, and that the problem between Hindus and Muslims was largely a creation of the British policies of divide and rule. He took the view that these were in fact two distinct social and cultural orders whose antipathies ran deep. They were unlikely to agree even on a narrative of the past. But if he did not spare Hinduism, he did not spare Islam either, in words that might now be construed as Islamophobic. “Hinduism is said to divide people and in contrast Islam is said to bind people together,” he wrote. “This is only a half-truth. For Islam divides as inexorably as it binds. Islam is a close corporation and the distinction that it makes between Muslims and non-Muslims is a very real, very positive and very alienating distinction. The brotherhood of Islam is not the universal brotherhood of man. It is a brotherhood of Muslims for Muslims only. There is a fraternity, but its benefit is confined to those within that corporation. For those who are outside the corporation, there is nothing but contempt and enmity.” But his unkindest cut of all was to again delve into moral psychology. The antipathy towards Muslims among Hindus was largely a product of their need to overcome the divisions of caste. As he acerbically put it, “a caste has no consciousness of being affiliated to another caste, unless it is in the context of Hindu-Muslim riots.” In short, Hindus needed an “Other” to consolidate their identity in the face of internal division.

    This analysis places Ambedkar awkwardly in relation to trends in contemporary Indian politics. The Rashtriya Swayamsevak Sangh, or RSS, a popular Hindu extremist organization, and other Hindu nationalists have embraced Ambedkar for two of his claims: his willingness to acknowledge the potentially irreconcilable nature of Hindu-Muslim tensions, and his claim that there is no unified Hindu identity so long as caste persists in all its depth. But his deeper claim that Hindu identity is constituted by these unacknowledged layers of violence not only goes unanswered, it is recast as Hinduphobia. It is regularly vindicated when beef traders are lynched and Muslims are attacked in the name of vegetarianism.

    In their determination to modernize India, Nehru and Ambedkar were kindred spirits. You might think they would have been allies in radically transforming tradition. But in some ways Nehru was, for Ambedkar, an exemplar of the kind of liberal who can affirm equality in the abstract but will do little to undermine his own privilege. His complaint against Nehru was perhaps even more devastating than his indictment of Gandhi: that he had no ability to acknowledge the centrality of this violence to Indian society. “Turn to Jawaharlal Nehru,” he wrote. “He draws inspiration from the Jeffersonian Declaration: but he has never expressed any shame or remorse about the condition of sixty million untouchables. Has he anywhere referred to them in the torrent of literature that comes out of his pen?”

    Ambedkar is one of the few Indian leaders of his generation who understood the deep transformative effects of money on society, and its necessity for a life of dignity. He was impatient with both the renunciation and poverty that Gandhi represented and the aristocratic critique of wealth that Nehru signified. Gandhi’s renunciation was a theatrical act of privilege: his embracing of a simple piece of cloth for his dress counted as renunciation because he had a fuller wardrobe to give up. For Ambedkar, by contrast, only the donning of a suit could be a path to self-respect. In 1918 he published a telling review of Bertrand Russell’s The Principles of Social Reconstruction. He chided the aristocratic Russell for his platitudinous critique of the love of money. Ambedkar writes: “Neither does the restatement of the evils of ‘love of money’ by Russell add any philosophic weight to its historic value. The misconception arises from the fact that he criticizes the love of money without inquiring into the purpose of it. In a healthy mind, it may be urged, there is no such thing as a love of money in the abstract. Love of money is always for something, and it is the purpose embodied in that something that will endow it with credit or shame. Thus even love of money as a pursuit may result in a variety of character.” The sanctification and ennobling of poverty was a disfiguring privilege that did not attend sufficiently to what money might achieve. India had to be reoriented from its heritage of renunciation. After all, it had actual poverty — the poverty of poor people — to confront and to ameliorate.

    Ambedkar once lamented that in India the hold of intellectual Brahminism was so powerful that it had produced at most an Erasmus — a partial reformer working within the paradigms of existing structures — but it had never had a Voltaire, one who could radically question the legitimacy of the whole edifice. But Ambedkar was himself India’s Voltaire. In his intellectual radicalism, he set out on precisely such a fundamental questioning, in an assault whose reverberations are still being felt.

    The most powerful expression of this is Ambedkar’s poignant and moving dedication to his book, What Congress and Gandhi Have Done to Untouchables. This dedication is unusual because it is one of the rare moments in Ambedkar’s writing where he seems to let down his emotional guard. The only other such moments are when he speaks of the immeasurable tragedy of the loss of his children. In a letter written in 1926 to Dattoba, he writes, “With the loss of our kids the salt of our life is gone, and as the Bible says, ‘ye are the salt of the Earth, if it leaveth the Earth, wherewith shall it be salted.’”

    The dedication of What Congress and Gandhi Have Done to Untouchables is to a person addressed only as “F.” It begins with a quotation from the Book of Ruth, the famous early dialogue between Naomi and Ruth. The quotation ends with “thy people shall be my people, and thy God my God. Where thou diest will I die, and there will I be buried; the Lord do so to me and more also, if ought but death part thee and me.” Ambedkar continues in a passage worth quoting in full:

    I know how, when we used to read the Bible together, you would be affected by the sweetness and pathos of this passage. I wonder if you remember the occasion when we fell into discussion about the value of Ruth’s statement “Thy people shall be my people, and thy God my God.” I have a clear memory of it and can well recall our difference of opinion. You maintained that its value lay in giving expression to the true sentiments appropriate to a perfect wife. I put forth the view that the passage had a sociological value and its true interpretation was the one given by Prof. Smith, namely, that it helped to distinguish modern society from ancient society. Ruth’s statement “Thy people shall be my people and thy God my god” defined ancient society by its most dominant characteristic namely that it was a society of man plus God while modern society is a society of men only (pray remember that in men I include women also). My view was not then acceptable to you. But you were interested enough to urge me to write a book on this theme. I promised to do so. For as an oriental I belong to a society which is still ancient and in which God is a much more important member than man is. The part of the conversation which is important to me at this stage is the promise I then made to dedicate the book to you if I succeeded in writing one. Prof. Smith’s interpretation had opened a new vista before me and I had every hope of carrying out my intention. The chances of developing the theme in a book form are now very remote. As you know, I am drawn in the vortex of politics which leaves no time for literary pursuits. I do not know when I shall be out of it. The feeling of failure to fulfill my promise has haunted me ever since the war started. Equally distressing was the fear that you might pass away as a war casualty and not be there to receive it if I were to have time to complete it. But the unexpected has happened. There you are, out of the throes of death. Here is a book ready awaiting dedication. This happy conjunction of two such events has suggested to me the idea that rather than postpone it indefinitely I might redeem my word, by dedicating this book which I have succeeded in bringing to completion. Though different in theme it is not an unworthy substitute. Will you accept it?

    There is an almost unbearable tenderness and commitment in this dedication. It also illustrates the state of our neglect of Ambedkar that we know little about the person called “F” to whom this extraordinary effusion is dedicated. Dhananjay Keer, Ambedkar’s first English biographer, tactfully avoided the reference by describing her simply as Ambedkar’s “bible teacher.” We now know her to be Frances Fitzgerald, an Irishwoman working in the House of Commons. Ambedkar maintained a correspondence with her from 1921, when they first met in London, to 1946. In his dedicatory epistle Ambedkar poignantly lays his cards on the table. The difference between the modern world and the ancient world is identified as the centrality of “man.” His striking sentence — “I belong to a society which is still ancient and in which God is a much more important member than man is” — is imbued with a deep sense of the tragedy of this condition, a modern living in a pre-modern reality. Ambedkar chose modernity. To claim that anything was higher than “man” carried the implication that “man” could be sacrificed. To liberate humanity, one had to slay the gods.

    But what ethic is possible after the gods have been slain? Ambedkar’s political philosophy was complicated, but there was one strand that ran through his work like a red thread. For him, the problem of ethics was not a problem of arguing for its foundations. Its task, rather, was to create the conditions for two human sensibilities. The first was reflection itself. We are apt to negate our own humanity and that of others by being prisoners of habit and the unexamined life, or by being governed by the authority of rules. This is a danger even for philosophical ideas of morality, which are apt to degenerate into a debate over rules. The psychic gratification of belonging to groups, and the fear of being ostracized by them, is an impediment to reflection. Even the most gifted of thinkers will limit their thinking to the boundaries of the group’s worldview. Ambedkar insisted that morality represented our ability to rouse ourselves from this doctrinaire and parochial condition.

    The second goal of ethics, for Ambedkar, was an enlarged sympathy. Although he referred to liberty, equality, and fraternity as a Holy Trinity — or in his Sanskrit formulation, Triguna, the three qualities — he came to believe that the interpretation that the French Revolution had placed on the relationship between the three emancipatory concepts was untenable. In his last book, The Buddha and His Dhamma, which he was writing at the time of his death in 1956, he turned to a reformulated Buddhism as an affirmation of our finitude, and as a project of ethics without metaphysics. For him, the most striking element of Buddhism was the concept of maitri, a form of compassionate friendship. As Ambedkar ruefully remarked, the legacy of the French Revolution went astray because it did not sufficiently recognize the proper relationship between the three concepts. It was fraternity that gave moral force to both liberty and equality. But fraternity, for Ambedkar, does not mean the construction of yet another group solidarity. His fraternity does not have any of the exuberance of unity evinced in most republican thought. It is rather the creation of a self that is deeply attuned to the individual suffering of others. The questions, What is liberty? What is equality?, are not the most important ones. Many dexterous philosophers could answer them. The most important question is, how do you get attuned to the idea that every person should be included in their ambit? This involved a reorienting of the soul, almost an act of conversion to morality as the highest religion. In the final analysis, according to Ambedkar, no concepts and no institutional contrivances can help without this attunement to fraternity.

    Ambedkar is so disconcerting because he challenges all illusions: about tradition, about power, about Hinduism, about democracy, about the moral evasions of religion and metaphysics. In our current climate it is also worth celebrating his skepticism about preferring nationalism over justice. As he stirringly states, “nationality is not such a sacrosanct and absolute principle as to give it the character of a categorical imperative, overriding every other consideration.” There is a profound pathos at the center of his thinking. How could he who had seen farther into human depravity and oppression than anyone else remain committed to reason and morality? It is to his credit that he did not rely on any of the customary intellectual crutches — the dialectics of history, the comforts of metaphysics, the consolations of religion, the certainties of scientism — to sustain hope against centuries of suffering.

    In his essay on the social reformer Ranade that I cited earlier, Ambedkar began with a beautiful summation of his standpoint:

    As experience proves, rights are protected not by law but by the social and moral conscience of society. If social conscience is such that it is prepared to recognize the rights which law chooses to enact, rights will be safe and secure. But if fundamental rights are opposed by the community, no Law, no Parliament, no judiciary can guarantee them in the real sense of the word. What is the use of fundamental rights to the Negroes in America, to the Jews in Germany and to the Untouchables in India? As Burke said, there is no method found for punishing the multitude. Law can punish a single, solitary recalcitrant criminal. It can never operate against a whole body of people who are determined to defy it. Social conscience, to use the language of Coleridge, that calm and incorruptible legislator of the soul without whom all other powers would “meet in mere oppugnancy,” is the only safeguard of all rights fundamental or non-fundamental.

    For Ambedkar, the core of the crisis of democracy is an ethical failure, a failure of fraternity that no amount of constitutional engineering, something he knew a thing or two about, can overcome. The idea that in the final analysis we have no resources other than a social conscience can leave us somewhat defenseless, particularly, as Ambedkar knew, when conscience can be so easily trumped by the comforts of group hierarchy. In an ironic way he ended up close to Gandhi, and later to Havel, in his insistence that the greatest disruptor of oppression was the call to conscience. Ambedkar does not have a full theory of political praxis. But what he had in ample measure was the power of his reason against a civilization that is mostly arrayed against justice, against the first virtue, whether in India or the United States.

    Losing Our Religion

    What Fiddler on the Roof is for most American Jews — an emotional bull’s-eye for any family whose saga began in a shtetl and wound up in the United States — The Lehman Trilogy is for me. My family, like the Lehmans, came here from Germany in the early nineteenth century. Both families left the old country as Lehmanns; we lost an “h” at the dock in New Orleans in 1836, they lost an “n” at the dock in New York in 1844. In both cases, the family saga began with a young single man from a small town — Rimpar, Bavaria in their case, Essenheim, Hesse-Darmstadt in ours — coming to America alone, starting out as a backpack peddler in the slaveholding antebellum South, and establishing a dry goods store. Theirs was in Montgomery, Alabama; ours was in Donaldsonville, Louisiana, and still exists there in altered form as Lemann’s Farm Supply, where I hope you’ll buy your next tractor.

    Distinctive German-Jewish culture, which is now as completely vanished as Lehman Brothers, had a triumphant and controversial reign for a century or so in the United States, between the mid-nineteenth and mid-twentieth centuries. German Jews, often with the same oddly specific Southern-peddler American beginnings, started many of the leading Wall Street financial firms. They developed and ran grand, and now disappearing, department stores — Macy’s, Gimbel’s, Neiman Marcus, and so on. They were book, magazine, and newspaper publishers — Viking, Knopf, Farrar Straus, The New York Times, The Washington Post, and The New Yorker all have a German-Jewish origin story. The German Jews built palatial houses of worship, such as Temple Emanu-El on Fifth Avenue in New York. They married within their tribe. They lived in distinct neighborhoods. They practiced their own religion, American Reform Judaism. They maintained business networks. It was a small world. My own family, back in the late nineteenth century, had multiple connections to the Lehmans, who maintained a branch of Lehman Brothers in New Orleans. They occasionally took out loans there; the local partner’s daughter was married to one of my relatives; another relative married a Lehman descendant.

    And then the German Jews disappeared — in Germany for obvious reasons, in America for less obvious ones. We know what happened there. What happened here?

    The reason that Stefano Massini did not choose to write a play called The Seligman Trilogy or The Guggenheim Trilogy is that it was Lehman Brothers that collapsed in the fall of 2008 — and this was, if not the cause of, at least the narrative hook for the immensely consequential financial crisis that followed. Even though Lehman Brothers hadn’t been run by the Lehman family for the forty years preceding the crisis, in The Lehman Trilogy, which ran for a few weeks on Broadway last fall, the family’s American saga is meant to function as a crisis origin story. From the moment Henry Lehman first steps onto American soil, the audience knows that things are headed inexorably in the direction of disaster. Massini, a Catholic from Milan, has said that his philo-Semitic father arranged for him to have a Jewish education, in addition to a Christian one, as a child; before The Lehman Trilogy, he directed a production of The Diary of Anne Frank and wrote a play called The End of Shavuot, which is about the friendship between Franz Kafka and the Yiddish theater star Yitzhak Lowy. He has a certain expertise in Jewish subjects, though of a kind that evinces book learning rather than lived experience. The Lehman Trilogy has been produced in Europe, England, and the United States in various versions at various lengths (up to five hours!) since 2013. There is also a seven-hundred-page hardcover version of the play, rendered in free verse, called The Lehman Trilogy: A Novel, which ends with a characteristically earnest and at the same time clueless ten-page “Glossary of Hebrew and Yiddish Words” in alphabetical order (Schmaltz, Schmuck, Schnorrer, Shabbat….). I saw the first New York production of The Lehman Trilogy at the Park Avenue Armory in 2019, but here I will be quoting from The Lehman Trilogy: A Novel.

    If you are in a mood to count your blessings, one of them might be how relatively rarely the financial crisis was blamed, at least publicly, on the Jews. Conversely, Massini’s project of telling the Lehman family’s story in a way that is deeply imbued with all forms of Jewishness and that also presages the financial crisis amounts to wandering into a danger zone. The playfulness and the imagination of Massini’s dramaturgy, and the skill of the acting, direction, and staging, have commanded so much admiring attention that Massini’s regular forays into certain ancient tropes have gone unnoticed. He repeatedly compares various Lehmans and other Jews to menacing animals (cobras, pythons, mastiffs, jaguars). He gives the Lehmans credit for inventing a wide array of financial techniques that have existed for many centuries and are not distinctively Jewish: promissory notes, factoring, installment buying, stocks, bonds, trading floors, and even the most basic one, lending money at interest. They are shown to be responsible for just about everything that happened in the nineteenth and twentieth centuries — including, by supplying money to pay for armaments, all the major wars. Cotton, sugar, oil, tobacco, liquor, railroads, radio, aviation, movies, television, computers — they all came to us thanks to the Lehmans. All of this goes in the direction of a topic that historically has made us uncomfortable, which is our domination of the world. Massini goes there repeatedly, as in:

    Each morning
    Sigmund Lehman
    walks smiling
    into the gray and white building
    from which Lehman controls America.

    or

    The truth is that now
    Bobbie Lehman
    has the world and the bank
    in the palm of his hand.

    or

    Lehman Brothers
    will dominate the earth.

    Massini’s attitude toward money is more New Testament than Old Testament: there is something peculiar, something unclean, about it, or at least about handling it as one’s vocation. The Lehmans and other Jews fetishize money and money-adjacent materials — several times Massini shows the Goldmans of Goldman Sachs being eponymously obsessed with gold. They can hardly restrain themselves from inventing new ways to manipulate it to gain more power — ways that usually disadvantage those whose work is more prosaic and morally straightforward. Throughout Massini’s account it is hard to figure out what Lehman Brothers actually does — is it a commodities broker, a bank, a broker-dealer that issues securities, or what? — because all the varieties of things that one can do with money are indistinguishable in their impurity, and engaging in each one in succession further pollutes the family.

    That Massini has accumulated a good deal of information about Jewish religious practices enables him to get himself off the hook you’d expect him to be on, though in a way that can seem wacky if you are actually, actively Jewish. (A Lehman dies and his body, in an ornate open coffin, is put on public display in the lobby of the Lehman Brothers building on Wall Street — really? Has anybody ever been to an open-casket Jewish funeral?) He presents Jewish observance and Jewish money-lust as opposing forces. The Lehman Trilogy is framed around the contrast between the Lehmans’ decreasing religious Jewishness over time and their increasing financialization. They abandon God for Mammon. Back in Alabama, when Henry Lehman dies,

    They observe all the rules, they have decided
    Shiva and sheloshim
    As they did over there in Germany
    All the rules as though we were in Rimpar, Bavaria.
    Not to go out for a week.
    Not to prepare food: to ask neighbors for it, receive it and that is all.
    They’ve torn a garment, as prescribed

    (Sheloshim is the ritual marking of the thirtieth day of mourning.)

    But by the 1940s, when a Lehman dies,

    The world has moved on.
    Nor have they
    let their beards grow
    the famous mourning beard
    of Shiva and sheloshim
    the uncut beard, as was the custom over there in
    Germany….
    According to ritual they shouldn’t go out for a week.
    No chance!

    And then the play ends with Lehman Brothers dead, and all the generations of the family, including the long-gone founders of the firm, gathering one last time to mourn. But now, finally unyoked from finance,

    They will grow their beards
    in the coming days
    as the ritual requires
    Shiva and sheloshim.
    They will respect the Law
    as it is prescribed
    in every duty.
    And morning and evening
    they will recite the Qaddish
    As it used to be done over there in Germany
    in Rimpar, Bavaria.

    There is something deeply un-Jewish about the idea that the purpose of religious practice is to de-orient you from money, and also about the idea that Jews have a special need for such de-orientation because otherwise the inexorable undertow of their Jewishness would tug them especially strongly in the wrong direction. Massini has one of the Lehmans, early in the family journey away from observance, devise a special secular Talmud with a hundred and twenty mitzvot, or commandments, each of which is about money. (“85. Everyone is for sale, Sigmund: at least sell yourself dearly.”) Another Lehman, later and even further down the wrong road, re-enacts the binding of Isaac by forcing his artistically inclined son “on the sacrificial altar” to go to work for Lehman Brothers — but in this case, unlike in the Biblical version,

    There is no angel hurtling down
    at the crucial moment to stop the killing of a son

    …so the child, Robert Lehman, goes on to run Lehman Brothers for decades.

    At “The Great Temple of New York” — Temple Emanu-El, obviously — the purpose of the secularizing Lehmans’ participation is to make so much money that they can be reassigned by steps from the twenty-first pew to the first. And the Temple — run, in The Lehman Trilogy, by a “Council of Elders” — is happy to govern itself by this standard. You would not know from the play that the dominant figure in American finance during this period was J.P. Morgan, a lifelong devout Episcopalian. Were Morgan’s career and his religion a contradiction that needed to be explained?

    Today, if you tracked down the surviving Lehmans, you would likely find that only a small minority are still synagogue-attending Jews, and that many would be either completely non-religious, or Christian. The same would be true for most German Jews with similar family histories, including my own family. Here’s a thought experiment meant to undercut Massini’s central supposition: Would you guess that the Lehmans who are no longer Jewish are much richer than the ones who are still Jewish, because they have been freed from Judaism’s restrictions on finance capitalism? Of course not. That is because in fact German Jews have been finance capitalists for more than fifteen hundred years, since they first arrived in Germany in the early Middle Ages. Owing to a combination of culturally rooted skills and strict legal barriers to most occupations, Jews usually made their living as small-scale traders, merchants, peddlers, and moneylenders. Henry Lehman, Massini reminds us, was the son of a cattle dealer — and not a poor one; the Lehman house in Rimpar, which still stands, is large and solid. My family came from a part of Germany that went back and forth between German and French control, and my forebears are listed in census records as handelsman or marchant des bestieux. The German Jews who came to America in the early nineteenth century were doing what they had always done, but (in the case of the Lehmans, at least) on an ever-grander scale. The logic of The Lehman Trilogy, its spiritual plot, is that with the advent of each succeeding phase in its business, the family loses another little portion of its soul. But if your very first venture in America is doing business with enslavers on cotton plantations, it is hard to get purchase for the idea of a rising curve of moral unacceptability.

    Could there have been another reason, besides the corrupting effects of their business practices, that the German Jews lost their religion? There was another reason, of course. And it could serve as the basis for a different saga, and one just as grandly tragic as The Lehman Trilogy. 

    Back in Germany, most of the early Jewish immigrants to the United States had not belonged to the small, sophisticated vanguard in Berlin or Hamburg who had begun the project of trying to join the mainstream of German life — those people were placing their bets on their long-term future in Germany. German-Jewish immigrants in the early nineteenth century were typically what we would understand as Orthodox (and they would have understood as simply Jewish). They lived on Judenstrassen in their villages, were educated in cheder, kept kosher, wore yarmulkes or hats, and were officially governed by Jewish legal authorities, whose jurisdiction in many spheres of law was recognized by the state. They were members of a separate tribe, not Germans. This was the way most Jews in Europe had always lived, and if they had wanted to live another way they could not have done so. One reason most German Jews lived in villages was that many German cities expelled all Jews at sunset and locked heavy iron gates behind them. Simply by emigrating, then, these Jews were making a consequential religious decision. At the end of Fiddler on the Roof, Tevye’s whole village heads to America in unison; Henry Lehman came to America alone. As a bachelor peddler in the rural South, you couldn’t assemble a minyan, and so you couldn’t pray. You couldn’t keep kosher. You couldn’t celebrate holidays.

    Take a moment to think about the immensity of the experience of coming to America in this way, and of the conflict it set in motion, which is a supercharged version of the standard dilemma of assimilation. From the moment a Jew landed on the dock in the United States, he was a citizen, something that for most Jews in the vast temporal and geographical expanse of the diaspora had been out of the question. (And also, from the moment a Jew arrived in the South, he was white.) All those endless centuries of expulsions, forced conversions, restrictions, conscriptions, special levies, disputations, the awareness of which was lodged deep within every Jewish soul — gone in an instant. But along with them, the structures, the glories, the enveloping comforts of Jewish life in the diaspora, just as venerable, were gone too. What to do?

    It did not take long before enough German Jews had arrived in the United States for it to be possible to launch congregations, within-the-faith courtship and marriage practices, and other adapted versions of traditional Jewish life. Through the nineteenth century, German Jews in America were in close contact with friends and relatives in Germany, so they were aware of the progress of Haskalah, the Jewish enlightenment movement. For the first time, at least some Jews in Germany were able to get secular education up to the university level, to join professions, to live in non-Jewish neighborhoods, and to participate influentially in the intellectual life of Germany; this process of apparent enfranchisement culminated in the full emancipation of German Jews in 1871. It is unbearably poignant to contemplate the optimism of the Haskalah today. In its time it provoked intense religious contention among Jews, and pogroms and other anti-Semitic reactions among non-Jews.

    Still, Reform Judaism, in both Germany and the United States, was well underway decades before Chaim (soon to become Henry) Lehman left Rimpar. The first Reform Temple was founded in Hamburg in 1810. Jews in Charleston, South Carolina, established the first American Reform congregation in 1824. Temple Emanu-El, Massini’s “Great Temple of New York,” was established in 1845. While the Lehmans were still (in Massini’s account) observant greenhorns, Temple Emanu-El had already gotten rid of a great many traditional Jewish practices. It had abolished the wearing of head coverings and prayer shawls, it had established family pews (not as a money-making device, but to replace separate seating for men and women), it had stopped reading the full weekly Torah portion with the participation of the congregation. Massini has Herbert Lehman, the future Governor and Senator from New York, being bar mitzvah’ed at the Great Temple in the early 1890s, but Temple Emanu-El had replaced bar mitzvahs with Christian-style confirmations back in 1868.

    Reform Judaism in its original form is the religion that everybody (at least everybody else who is Jewish) loves to hate. It comes across as a pathetic attempt to emulate the High-Church Protestant denominations. Some congregations switched the Sabbath from Saturday to Sunday; many, including Temple Sinai in New Orleans when I was growing up, had organs and choirs, and rabbis who presided bareheaded, wearing ornate robes, over services conducted entirely in English. At the notorious first graduation banquet for rabbis put on by Hebrew Union College in Cincinnati in 1883, shrimp was on the menu. Some of these changes were aimed at removing the myriad daily inconveniences that went along with being a Jew who lived and worked in a non-Jewish world, but at their heart was something far more momentous (and entirely missing in The Lehman Trilogy): a complete abandonment of Jewish particularism and an embrace of universalism. 

    The most obvious manifestation of this was a renunciation and rejection of Zionism. The first American Reform conference, in Philadelphia in 1869, took this stance, and so did the Reform Jews’ equivalent of the revelation at Sinai, the Pittsburgh Platform of 1885. One of its precepts was: “We consider ourselves no longer a nation, but a religious community, and therefore expect neither a return to Palestine, nor a sacrificial worship under the sons of Aaron, nor the restoration of any of the laws concerning the Jewish state.” In the early twentieth century, Temple Emanu-El fired its rabbi for advocating Zionism and a return to traditional religious practices. 

    What the Dreyfus affair was for Western European Jews, what the Kishinev pogrom was for Eastern European Jews, the humiliation of Joseph Seligman was for the German Jews in America. In 1877, Seligman, the German-born banker who had come here as a teenage immigrant in the late 1830s and started out operating dry goods stores in Alabama, arrived at the Grand Union Hotel in Saratoga, New York for his customary stay and was told that there was a new policy: no Jews allowed. This directly undercut the German Jews’ core convictions, which were the basis of the new form of Judaism that they were creating: that they were at the edge of being fully accepted as members of the American elite; that there was no consequential difference, in the mind of either group, between Jews and non-Jews; that universalism was becoming a social reality. But the Grand Union Hotel’s policy soon became almost universal in all the non-Jewish venues to which the German Jews aspired: other fancy hotels, private clubs, “restricted” neighborhoods, universities, prestigious employers. You can find the published versions of the emotions that went along with these policies in the anti-Semitic passages of such writers as Henry Adams and Edith Wharton — and whatever they were writing, what other members of their class were saying privately was surely much worse, a kind of deep, instinctive, almost physical revulsion that is certainly a familiar element in Jewish history but was not supposed to be part of American culture, especially among highly cultivated people whom we German Jews admired.

    One could choose to understand these setbacks — which had a high psychological impact but a negligible material one — as evidence of the eternal, and eternally unacceptable, presence of anti-Semitism, always deserving open, ardent challenge. But many German Jews chose not to understand them this way, which was the traditional way. Instead these setbacks seemed like painful trouble for us caused by other Jews — specifically, by the arrival en masse of Jews who had come to America, to Ellis Island, from somewhere east of the Elbe, who were poor, who lived in slums, who were left-wing, who were proudly and visibly religious and tribal. German Jews longed for a return to the Edenic period before the embarrassing Eastern European Jews had arrived, but that was unattainable, except partially by means of various immigration restriction and relocation schemes.

    Alternatively, we could attempt to persuade non-Jewish elites to understand us in the way that we understood ourselves: not as the brethren of the Eastern European Jewish immigrants, but as an entirely separate group, whose members shared many of the majority’s prejudices. You can see how exquisitely complicated, delicate, painful, and shameful this project was — it was far more stressful, actually, than negotiating any imagined conflict between banking practices and religious practices would have been. It entailed identifying various supposedly Jewish “traits,” as defined by anti-Semites, and then endeavoring to extirpate them from one’s behavior, even from one’s thoughts. And yet social acceptance kept receding, like the mirage of an oasis in the desert. (Remember that even when the United States was at war with Adolf Hitler, many fancy hotels still would not accept Jews as guests.) The political choices that our stance presented got worse and worse as the situation in Germany deteriorated. And the connection to our tradition grew ever fainter, to the point of vanishing.

    If you are an Eastern European-descended American Jew, you surely grew up on horror stories about the snobbish, standoffish, solidarity-averse yekkes (that is, us). Of many possible examples of our bad behavior, here is an especially excruciating one: Julius Rosenwald, the German Jew who as head of Sears Roebuck was the equivalent a hundred years ago of Jeff Bezos today, named his firstborn child after Gotthold Lessing, the first leading German intellectual to be ardently philo-Semitic, which made him an emblem of the German Jews’ dreamed-of future. In 1943, Lessing Rosenwald, having previously opposed American entry into the Second World War, became the president and chief funder of the American Council for Judaism, an organization of German Jews dedicated to preventing the creation of the State of Israel. By that time much of the Reform movement had softened its anti-Zionism, for obvious historical reasons, but the ACJ kept plugging away for decades after its founding mission had ended in defeat. It occasionally addressed the plight of non-Jewish residents of Palestine, but it never hid its main concern: that public and “aggressive” Jewish nationalism represented a threat to the hard-earned and fragile position of German Jews in the United States. As late as the early 1940s, many German Jews had a hard time believing that Germany, the seat of civilization, could be capable of mass murder on an unimaginable scale; and even after it became impossible to deny, it remained impossible to discuss. Sometime in the mid-1960s, my teacher at Temple Sinai’s Sunday school in New Orleans (who, like most Jewish educators, was more observant than her students) showed our class Night and Fog, Alain Resnais’ pioneering documentary about the Holocaust, and found that everybody in the class professed to have been unaware that the Holocaust had happened.

    Just as one can ask which of the many succeeding steps in the Lehman family’s financial progression was, by Stefano Massini’s lights, the fateful one that severed the link to the Jewish tradition, one can interrogate the actual, rather than the imagined, story of the German Jews in the same way. Once you have decided not to live inside an entirely Jewish world, there is an endless series of individual steps you can take to make life in the mainstream go more smoothly. Should you wear a kippah to work? Should you eschew non-kosher restaurants? Should you regard intermarriage as an example of commitment to inclusiveness and lack of ethnic prejudice? Should you drop the connection between the Hebrew language and Jewish prayer? Almost all American Jews have to ask themselves questions like these, which are never completely resolvable — but the German Jews answered them in ways that wound up making it impossible to participate in the tradition. The history of the Reform movement in recent decades has been one of step-by-step rejection of the commandments of the Pittsburgh Platform. Even Temple Emanu-El now asks that you lock in a date for your daughter’s bat mitzvah when she is in third grade, and proffers guidance to mourners on how to sit shiva properly. This is surely not because its members are no longer busy making money. It is because a religion can be watered down only so much before it stops being able to provide meaning for people. The recent history of Reform Judaism in America — its increasing hospitality to elements of the tradition, its acceptance of communal solidarity, the shift in its primary attention from people who don’t love us to people who do — can be understood as a response to the recognition of how much had been surrendered and lost.

    I live on the Upper West Side of New York, which must be one of the friendliest environments for Jews in the history of the diaspora. I am not aware of any institution here that formally excludes Jews, though they still exist in New Orleans. Secular institutions know not to schedule anything on Yom Kippur and sometimes even Rosh Hashanah. (Even so, on major Jewish holidays we pray under armed guard, and my children have occasionally been subjected to anti-Semitic taunts in school.) Being Jewish here is an easy identity, with so many possible variations, including being proudly ethnic but completely secular, that it is not at all obvious which part of it is the most crucial. Where exactly, at what point in their reforms and revisions, did the German Jews lose the thread? Almost everybody subtracts something from Jewish practice. Many congregations constantly make micro-adjustments, especially during a pandemic: for reasons of public safety or for ideological reasons, do we really need the repetition of the Amidah? (I exclude the ultra-Orthodox and Hasidic worlds, where one can even find additions.) Which subtraction, then, is the fatal one?

    The wrong way to think about this is from the outside in — that is, in terms of what would make Jewishness more acceptable, less strange, to non-Jews, or of what might remove all the conflicts inherent in the project of being a Jew who functions also as a citizen in a non-Jewish state. I live in two professional cultures that are, for the most part, militantly secular (though disproportionately Jewish): academia and journalism. In these, the standard critique of religion is that it is based on a denial of science and that it acts as a seedbed for hate and extremism. You cannot re-engineer Judaism, or any religion, in a way that would effectively remove every element that might reinforce these stereotypes, just as you cannot remove every cultural practice that might read as “Jewish” to people who do not like Jews. There would be nothing left. Nor is external enmity worthy of being dignified with internal self-erasure. Instead, the project should proceed from the inside out: what, from a Jewish point of view, is essential? And that question cannot be answered without straying outside the tight boundaries that the requirement of appearing innocuous to mainstream sensibilities necessarily imposes.

    For thirty-five years I have belonged to one or another Conservative Jewish congregation, which I suppose makes me an intramural convert. Of the myriad differences between the religious services I attend now and the ones I occasionally experienced growing up, the one I would put at the top of the list is the role of the Torah. Tobias Brinkmann, author of Sundays at Sinai, a book about the Chicago equivalent of Temple Emanu-El during the high-water mark of Reform Judaism, reports that the title character of his book, a house of worship named for the site of the giving of the Torah to the Jews, decided that it had no need of a Torah scroll on-site and put it into storage. That was extreme even for High-Church Reform congregations in their heyday, but most of them made gestures in the direction of decoupling Judaism from the Torah.

    Customarily, as just about all Jews who attend services will know, the Saturday morning service nears its climax with an elaborate veneration of the Torah as a physical object: we stand as doors of an ark are opened to reveal it, then it is paraded around the room so that we can touch and kiss it, then it is unwrapped and laid out on a platform and the weekly portion is read aloud directly from the parchment scroll, which is hand-lettered in a stylized calligraphic form of Hebrew that is difficult to read. Members of the congregation take turns going up to the stage to participate in this ritual. I still do not understand why, encountering all this for the first time deep into my thirties, I found myself overcome with emotion, as if a dam had burst — or why, seeing an awkwardly dressed-up little boy whom I had never met, staggering under the weight of the Torah scroll, recite the Shema, the most essential Jewish prayer, in a breaking voice, at his bar mitzvah, I would burst into tears.

    Just about everybody at such a service will have read the weekly portion many times before; still, after it is read aloud, someone offers a sermon or homily with a distinctive interpretation. The idea is that the Torah is a document of such richness and profundity that on every reading a new nugget of meaning can be found — and of course the entirety of the endlessly long and strange Talmud derives from the contemplation of linguistic and conceptual mysteries presented by the Torah. All fundamental Jewish rituals and holidays are rooted in the Torah. Confronted with the Torah’s most dull, baffling, or offensive passages — the apparent endorsements of slavery and murder, the condemnation of homosexuality, the precise instructions about the decoration of sacred spaces — one is supposed to look so deeply that something admirable and usable in the here and now can be discerned, rather than dismissing it as outdated, offensive, and best ignored. A Jewish version of the legal principle of stare decisis applies: you cannot just focus on the good parts of the Torah and drop the bad parts. The entire document demands the respect, the sense of its continuing vitality, that being required to argue with it based on what it says and what has been said about it entails — which is different from either blind obedience or the refusal to engage with anything that seems problematic. Non-Jews often perceive Judaism as being grounded in an elaborate, rather lifeless series of rules, as opposed to more Christian concepts such as “faith,” “belief,” and “grace.” That cold legalistic caricature of Judaism has a long history, and it appears in The Lehman Trilogy. But it misses the essential point: the rules are best understood as evidence of the veneration of the Torah.

    I am not prepared to say that I believe God gave the Torah to Moses at Mount Sinai. But it is not just another book, either. It is not even just another enduring work of literature, or an especially significant historical document. The combination of the text itself, and the distinctively Jewish way of interacting with the text, seems to address every possible aspect of life, from the mundane (whatever’s going on for you, good or bad, this week) to the empyrean (love, justice, history). And although the Torah is open to many interpretations and over time has shaped the consciousness of many people who are not Jewish, it is hard not to read it as a Jewish story. God creates a covenant with a particular tribe, lays down a strict and definite set of conditions for membership, and offers terrible punishments and great rewards (including the gift of land for a nation) that are linked to people’s degree of violation or compliance. These apply individually, but also collectively, to the Jewish people. The Torah is not a deracinated, universalist text. Being Jewish entails a degree of particularism, which in turn requires a degree of solidarity and self-advocacy — and also a constant, never fully resolved struggle, if you live in a non-Jewish environment in the diaspora, to find the right balance between the requirements of the two worlds that you inhabit.

    The fundamental mistake of the German Jews — if you regard our loss of religion as a tragedy, as The Lehman Trilogy evidently does — was in believing that we could scrub all the particularism out of being Jewish and still have something left. A determined campaign to make Judaism more like Unitarianism winds up not with a strengthened, modernized variety of Judaism, but with… actual Unitarianism, which became the religious destination of many German Jews. The German Jews’ version of universalism was a response to being in a tiny minority, from the moment of arrival in America, and so not having access to a thick Jewish culture. There is also an Eastern European Jewish version of universalism, at least in my neighborhood, which entails projecting one’s much-loved Jewish culture and values onto the rest of the world and assuming that they fit perfectly: Abraham Joshua Heschel and Martin Luther King, Jr. were practically the same person! And that often leads to bitter disappointment when it becomes clear that non-Jews and Jews do not actually share an identical consciousness. Why should they? You usually cannot persuade other people that you are like them, and you also cannot persuade other people that they are like you. Making one’s peace with Jewish distinctiveness is actually helpful in coming to terms with the distinctiveness of non-Jews.

    One can see the German Jews as a footnote in Jewish history, but I would argue that our story is far more resonant. Today there is no longer an enveloping Jewish neighborhood culture in the United States, unless you are Orthodox or even more traditionalist. Synagogues are not growing. We are all German Jews now. We all engage in balancing acts in which we may lose our balance and renounce too much or accept too little. Questions of what being Jewish means as a state of being and a set of activities, of whether it is plausible to imagine the disappearance of anti-Semitism, of how Jewishness can coexist with Americanness, of the nature of one’s obligation to the community — these are about as fundamental as anything life has to offer. Every American Jew who is self-aware has to wrestle with them. And they are a lot deeper and more interesting than any supposed contradiction between religion and money. If that contradiction exists at all, it is very far down on the list of what God gave us to struggle with as we try to make what we can of being human.

    How Long Could I Have Been Weightless?

    After the smooth up-pull the car dove fish-efficient
    in the tractor-trailer’s wake. By then the thick wheel

    cuts had tapered down the long, curved grade then vanished,
    leaving undulations in the drifts.

    All the way from Montreal through French-toned
    Vermont we’d held to mostly all alone

    through night-time Massachusetts, the Berkshires
    rhythmic now, the rise and fall of roadways

    lunglike, up and down, the black outside squelching
    with each splat. The snow fell lazy-seeming

    but the mass had force to it, a will thrust like those
    of sea currents, and in the down rush the car’s

    back end began to flex. The side-muscling
    came in series, ripples, quivers, pulse,

    and I was in it counter steering while
    the coffee spilled in the careening

    into, through, and out of, what the frost-dimmed
    lights could see: all murk then,

    the whole world untrustworthy, murk and
    splat, and splat and speed, and ridges:

    the helm backlit by dials,
    my fingers and their grips,

    the road itself a reef and I was skidding, skidding —
    tread and road unbonded into flight.

    How long could I have been weightless?
    Does it matter now?

    I reach now to recall what flew by me:
    trees in kelp shadow, gelid embankments

    snow shoals, formations of a world
    so much like ours, just under water,

    glimpse of where we’re headed
    by degree.

    Four wheels on the snow again,
    clutching, shifting, easing down

    compression bracing on
    momentum’s rush I saw it:

    deep snow swashed in fan pattern
    to the breadth of the road

    the white rig turned over,
    red stamp on the side of

    it: strike of harpoon. What fluke
    of luck had saved me? Which flake

    launched me to air/water,
    racing my breathing, slowing me down?

    Roots

    Then, the future was glaucomic, the bore through mangrove
    in the dugout slow. I recall the water in its color tannic.
    I see now an olive wake dissolving from the churn work
    of the screw. A time would come — it seems it has —
    to redecipher, understand again the meaning of the motor’s
    open vowels louding up a sacred space.

    Corporal Pitt, the bully, said something far beyond himself,
    “You see all what favor frame for madman basket?
    those are aerial roots.” He pointed and we took
    his reedy finger as command, us six good recruits —
    cadet acolytes joined for camping life — and paused
    eye-sweep for crocodiles.

    I plait time to those wetlands often. To be black where
    I live now is to bivouac. White is wilderness in all seasons.
    I carry bankras of one-one sorrows; gods in a haversack of joy.

    Out on long lug-sucking walks through marshes south
    of Boston,
    close-west fairly of the Cape, I wink “like” to the look
    of bulrushes,
    how they call to bible Moses, kinda favor sugarcane.
    Who resists the cat-tails saucery? — such flirts — but
    the names.

    Little Massachuck. Sachuest. Sapowet.
    Say them soft; no, shout these native names,
    names of the plowed near, and housed to,
    the made margin, the selvedged by road,
    the done to as America tends to do with indigenes,
    its what-it-failed-to-kills.

    At water’s edge a man in waders arcs a lure; snaps it
    out for bass. Tammed women with clam baskets hunch
    against a pushy breeze in group leverage. Seashells smaller
    than the ears of newborns crunch in the wake of boots.
    My dry-meniscus knees go skurch on pebble shoals.

    Sinuous chapel festooned-gaudy, by ibis candle-lit,
    I sight you. But how I coulda note full conscious
    your low-key frieze of halophytes, the mangroves’ gazing
    wall of afroed saints? I was just manyouth.

    Once, I pilgrimed to another coast of my island
    to be witnessed to in soulcase by the final two
    uncaught unkilled sea cows,
    figures so of there, but as their wakes were, evanescent.
    They’re gone now like the Arawaks as I too must go.

    I go. Home for now to Providence. Comb-somed,
    bearded, chukking old Bean boots — apparently adaptive.
    Every hair a root.

    Bedazzled

    Air an instrument of the tongue / The tongue an instrument / Of the body …

    — Robert Pinsky

    “Burro Banton a di only veteran artist that go Europe and open the festival and close the festival. Him get two pay.”

    — Peter Metro, dancehall reggae legend

    Hearing Burro trace the sky in couplet,
    the mic from Nicodemus arming
    boom re-arming hand to hand,

    I began to ribbi-bang, bong-widdly,
    find giddy in the sounds of dead books—
    so eftest and cock-a-hoop amused me,

    good nonsense like slang-dang; and
    every Dolby-short cassette respooled by
    pencil foxship made school-ordered scansion drum.

    That Elizabethans rode riddim, bedazzled,
    and the work turned pay itself,
    at night, parts assigned soft-said

    downstage uproof my warm flat white scheme-house,
    the almond tree backdropping, a streetlight key,
        slang-dang and foin pop out of me.