The Rise of the Barbarian Right

    It’s strange, how life can sometimes mimic literature. Consider the story of Jonathan Keeperman, which in crucial ways recalls American Pastoral. Like Philip Roth’s novel, it is a story of how mad ideas can take hold when history unsettles familiar normative coordinates, and when children confront a more dimly lit world than the one faced by their fathers. Even some of the basic details are reminiscent of American Pastoral. Jonathan Keeperman’s father, Fred, came into the world in 1948 at Brooklyn’s Maimonides Hospital and spent his early years in Brownsville. The family owned a candy store on Pitkin Avenue, and soon Fred was immersed in the “colorful cast of characters who inhabited the immigrant Jewish community into which he was born,” as his obituary put it. (He died two years ago.)

    The family moved to the eminently Rothian town of Metuchen, New Jersey. Fred joined his high school’s varsity wrestling team, and this in turn won him an athletic scholarship to Knox College in Galesburg, Illinois. There he met his future wife, Rita, a Galesburg local and Catholic-school grad “who taught Fred how to bale hay and put a cow back in the barn,” per the obit. After graduating college, Fred became first a special-ed teacher and then a junior-high vice principal. 

    In 1976, ambition beckoned him to the Bay Area. He went into business with his uncle and finished an evening law degree. Eventually, the family made its forever home in a cul-de-sac in Moraga, a lush, quiet suburb of San Francisco. Fred ran his own small law office, where Rita would serve as the business manager. On the side, he coached sports and led the local education foundation and baseball association, among other civic groups. Fred and Rita Keeperman, in short, enjoyed a full measure of the stability and social capital which were the boomers’ historical inheritance but which would elude later cohorts.

    Jonathan Keeperman was born in Moraga, the third of Fred’s four children. He earned a master’s degree in creative writing at the University of California, Irvine, and would teach as a non-tenured lecturer at the same institution for more than a decade, from 2009 until 2023. During his time at Irvine, Keeperman honorably defended the free-speech right of the campus Republicans to invite an obnoxious speaker, and helped to lead efforts to organize his fellow itinerant instructors under the auspices of the American Federation of Teachers. Writing that instructors “make up the highest percentage [of adjuncts] among all the disciplines in the system,” Keeperman told California Teacher, an AFT publication, in 2016 that “we wanted to look at the labor practices from campus to campus.” He complained of the low pay, the arbitrary power wielded by administrators, and the insecurity that defined the careers of adjuncts. In doing so, Keeperman channeled the anxieties of the educated precariat, which were to propel millennial socialism and the movements associated with Bernie Sanders and, a little later, the Squad.

    Yet Keeperman’s radicalization in those febrile years ran in a different direction than might have been expected from someone of his background. Unlike Merry, Seymour “The Swede” Levov’s daughter in American Pastoral, who swerves to the radical left — all the way to the Weather Underground — in opposition to the Vietnam War, Fred Keeperman’s son has emerged as one of the stars of the “dissident right”: a loose constellation of pseudonymous intellectuals and social scenesters who promote a combination of IQ-based eugenics, the worship of strength, and lifestyle self-help. 

    Roth’s Merry directs her (literally) explosive rage against America’s postwar military-industrial establishment — a discrete and familiar bogey for boomer progressives. But Keeperman and his cohort, the dissident right, identify a more fundamental force as the oppressive enemy: democratic egalitarianism, with its supposed denial of human difference, its general tendency to cut down the high to succor the low. They blame it for pervasive censorship and the H.R.-department quality of modern social life; for the snuffing out of excellence and the “disequilibrium afflicting the contemporary social imaginary,” as Keeperman has written. 

    The dissident right would bury the mildly egalitarian brand of conservatism that in the last century made its peace with equal human dignity, even seeking to extend it to subjects derogated by progressive egalitarianism, such as the unborn child. That conservatism was anchored in the “Judeo-Christian” consensus of the postwar era — a consensus that is fast slipping away, along with the shared moral memory of the horrors of the first half of the twentieth century. The dissident right would replace all that with a more heroic landscape trod by the aristocratic spirit: the one who designates value for himself, hindered neither by the demands of the dysgenic many, nor by popes and priests, nor yet by the oozing tyranny of the primordial feminine — the ultimate source of democratic egalitarianism. To hell with your sacred victims, bellows the master subject of history, the noble barbarian, the online Übermensch, as he smashes down the female-dominated egalitarian order. In more extreme versions — Keeperman himself stops well short of these — the noble barbarian might also proclaim: Total N——r Death! (one of their grotesque trademark chants).

    Shocking stuff. Except, as we will see, for all its capacity to create rhetorical disturbances, the dissident right is merely affirming — in vulgar rhetoric and hateful imagery — the IQ-obsessed, biopolitical future that is already being organized under existing market societies. 

    At some point in the 2010s, Keeperman adopted “L0m3z” as his nom de plume et de guerre, an online persona who could say things that a non-tenured academic could never get away with. Such as: 

    “Lamppost” — meaning, hang or lynch — “the journos.”

    My enemies are dysgenic freaks.
    The sheer tonnage of human filth is overwhelming. An assault on the senses, on the basic right to decency and peace of mind. How does one walk through the cities and not be constantly and involuntarily muttering under his breath: “Billions…billions…”? 

    That last bit is a reference to a social-media meme featuring a frowning, bespectacled figure who, aggrieved or put-upon, declares that “billions must die!” Keeperman/L0m3z would weave much of his output from the memetic threads that normally cocoon online subcultures. (The process should be called memesis.) He softened the material, making it more accessible to a wider audience of conservative “normies,” even as he added a dash of literary flair, as might have been expected from a veteran of one of the nation’s most prestigious writing programs.

    The Travis Bickle-style sentiment expressed by L0m3z — his desire to see a filthy human mass washed away from the face of the earth — is undercut by the fact that the “billions must die” meme-guy is supposed to be taken for a sad-sack creature. Such half-facetiousness is a central feature of the dissident right, a humanizing fig leaf. The movement’s political claims are rarely expressed in earnest or systematic fashion. Rather, arguments are advanced precisely via the joke and the adroit compiling, rejiggering, and interpolating of an existing set of highly mobile memes and symbols.

    The memetic joke serves multiple purposes. For one thing, it shoos away those too dull or too moralistic to get it: that “we” don’t really mean it when we say that billions must die, and also sort of do mean it — wink. More mundanely, the joke supplies a built-in defense mechanism against would-be cancelers and doxers, that is, those who would reveal the real figures lurking behind the pseudonymous avatars. L0m3z himself was recently unmasked as Keeperman in a Media Matters-style exposé in The Guardian, which denounced him for, among other things, reissuing Ernst Jünger’s The Storm of Steel under Passage Publishing, the imprint that he founded in 2021.

    Given the emphasis on edgy jokes, the dissident right’s voice can blend with that of other groups of online shitposters, all contributing their share of noise to the social-media cacophony. But Keeperman & Co. are a distinct group, in the business of articulating a distinct worldview. That worldview cannot be understood as highbrow Trumpism. For the dissident right has little to do with the populist upsurge that has engulfed most developed democracies since the mid-2010s, even if the same structural forces have provoked both. For one thing, the social base of the dissident right lies not in Trump country — not in, say, the Rust Belt or Appalachia — but among a segment of the bicoastal professional class. The core group of ideologues is composed of higher-education exiles, “independent scholars,” and non-tenured academics. Arrayed around them are concentric circles of artists and fashionistas, tech and finance bros, podcasters, fuckupnik heirs, and the like, most clustered in Lower Manhattan, Miami, and the Bay Area. Some of these characters could be described as financially stressed, but others are perfectly affluent.

    Sociologically, the dissident right has more in common with the urban left than either camp does with Trumpian America. Indeed, the movement and its hangers-on include not a few former Democratic Socialists of America types. There are, for example, Anna Khachiyan and Dasha Nekrasova, cohosts of the Red Scare podcast. Once a bastion of irreverent vocal-fry Bernie-ism, Red Scare now worshipfully covers the likes of Steve Sailer, the amateur race scientist.

    Sailer, who popularized the term “human biodiversity,” is the author of America’s Half-Blood Prince: Barack Obama’s “Story of Race and Inheritance,” which appeared in 2009. More recently, Keeperman’s imprint has published a collection of Sailer’s old columns and blog posts. The volume is studded with such hard and brilliant gems of racial pseudoscience as: “Barbados, despite an average IQ of 78, is one of the most pleasant countries in the 3rd World due to its commitment to maintaining a veddy, veddy English culture”; and “since there are so many unmarried Asian men and black women, they should find solace for their loneliness by marrying each other. Yet, when was the last time you saw an Asian man and a black woman together?”

    Many of the dissident right’s leading personalities, moreover, are what I have called “off-white ethnics”: Jews, Armenians, Romanians, Arabs and North Africans, even some Indians. This frequently puts them at odds with more straightforward white nationalists and anti-Semites — such as the “Groyper” movement led by the flamboyant video-caster Nick Fuentes — who write off even the Poles as “barely white,” let alone a “Mischling” like Keeperman. For their part, the dissident rightists consider unalloyed racial nationalism déclassé, an affront to good taste and a refuge for “low-IQ” white people, whom they hold in almost as much contempt as they do blacks. If you are an intelligent Jewish-American urbanite who wants to play around with certain Nietzschean and eugenic themes, you aren’t going to join tiki-torch-bearing marchers chanting that “the Jews will not replace us.” No, you turn to the dissident right.

    These tensions are not just a matter of ethnic- and class-based rivalry among different groups of haters. They bespeak serious ideological differences. The Groypers are uncomplicated racial-fascist goons: The Jews have orchestrated mass migration, porn addiction, and foreign wars to break our organic unity and weaken our people, etc. The dissident right is a much more complex beast, capable of entertaining sophisticated visions of the political order that might replace the current one, the better to serve the creativity of “natural” or IQ aristocrats.

    For a glimpse of these visions, consider After the War, an anthology of dissident-right flash fiction released this year by Keeperman’s Passage Publishing. In keeping with the dissident-right style, all but a couple of the forty-four contributors appear pseudonymously. This, combined with the short length of the stories, makes reading the book feel like scrolling down an especially freaky X feed. And that’s the point. As the writer “Zero HP Lovecraft” notes in the foreword, 

    a flash-fiction story is short enough that you can conceivably read one story, not only in a single sitting, but in a single interval of consciousness, with no momentary discontinuity. If you grit your teeth and muscle through it, you can read a whole two pages without even once switching contexts to check your social-media feeds.

    Zero HP Lovecraft — the pseudonym is a portmanteau of the name of the Rhode Island horror pioneer and the video-game notion of having zero “health points” — is one of the most virulently racist characters in the dissident-right sphere. “No shit, I’m racist,” he has confessed on the X app. “I have recorded entire podcasts about why blacks are dumber and more violent than whites. I have advocated for [the] mass deportation of anyone darker than cappuccino.” He complains of the “negroid warbling” — rap, hip-hop, and R&B — that supposedly permeates contemporary public spaces. He declares: “I don’t have DNA. I only have TND” (that is, Total N——r Death).

    He is also the author of a handful of strikingly inventive horror stories, written in the tradition of his literary namesake but updated for the age of surveillance capitalism. His gift lies in conveying the subjective experience of people in these milieus — cryptocurrency speculators, “fin-tech” specialists, unemployed online edgelords, and the like — in crackling prose: “We imagined ourselves as samurai-sword VR pirate pioneers, but it turns out we’re pointless argument vegetables growing in walled gardens, harvested for the benefit of robots that serve us ads.” In classic Lovecraftian fashion, he then pulls back the curtain to reveal the hidden, cosmic-scale monsters — technology and capital — that dwarf and menace his human subjects, draining their life essence until the human itself has been rendered superfluous. 

    One can’t but feel a certain awe for the absolute bleakness of his worldview — a bleakness that sometimes bears the awful ring of truth. “They say the bit of folk trivia about a goldfish having a memory of three seconds is just that,” he muses in the foreword to After the War. “But there’s still something so poignant about this image. Trapped in a glass bowl, watched on all sides, an attention span of three seconds: that’s me, that’s you.” Zero HP Lovecraft thus laments, in substance, the tech-driven social phenomena that the anthology reflects in flash-fiction form.

    The authors featured in After the War aren’t nearly as perceptive about technology or as fantastically imaginative as Zero HP Lovecraft. Many deploy hackneyed tropes that are the mirror image of the didactic woke-ism that mars much mainstream fiction: if cartoonishly repressive white males supply the grist for the Big Five’s moralizing mill, here it’s the fat, blue-haired, rainbow-pin-wearing, they/them-pronouns-using police officer and similarly left-coded authority figures who serve as easy foils for the protagonists. Other stories are too caught up in insider symbology and jokes to rise to any degree of universal literary merit. A handful are unquestionably clever, however, combining grim humor with memorable conceits made all the more discomfiting by the authors’ foul politics.

    As its subtitle, Stories From the Next Regime, suggests, the anthology invites the reader to envision what it would mean to overthrow our current political, economic, and cultural arrangements in favor of an order more conducive to the adventure and excellence for which the dissident right yearns. The mood is one of anticipation and triumph, even if in many cases, it is really the nihilistic sense of triumph felt by the one who burns everything down. 

    Most of the stories are in the science-fiction genre, with the very best of them managing to pass off their uncanny and disturbing future scenarios as humdrum reality for their characters. Philip K. Dick was probably the grandmaster at generating this effect, and his influence is felt heavily throughout. Indeed, the anthology as a whole could be described as Dickian — that is, if Dick had cheered for the Nazis in his novel The Man in the High Castle, from 1962, now better known as the basis for Amazon’s television series. The stories generally fit inside a limited thematic matrix — a sign that they arise from within a coherent and fairly well-developed ideological movement, rather than a literary scene or sensibility. I notched the recurring themes in the back of my copy as I was reading it, and was struck by how infrequently I needed to come up with new categories for my counting system. Taken together, the handful of categories can double as a guide to dissident-right ideology.

    It should come as no surprise that a significant plurality of the stories involve race, eugenics, “natural” or genetic aristocracy, physiognomy, and the centrality of bloodlines. “The Frowners,” by a writer who goes by “Degree Studies,” is typical. The story dully restages Steve Sailer’s notion of “noticing”: that hereditary differences among large human groups — delineated by race — should be obvious to anyone prepared to take off the lenses of egalitarian piety. The protagonist is a scientist from Earth who, in the distant future, visits Jupiter’s moon Ganymede to present the results of his research into the local xeno-culture. It turns out that “there were distinct physical characteristics of those engaged in violence” among the Ganymedians: namely, “a distinct downward tilt to their face,” absent among the peaceful. But the Ganymedians are not prepared to hear this hard teaching. One of their own scholars pipes up that “there is no reason to believe” that frowners “are innately more violent. In fact, we believe the frown is caused by exposure to violence and injustice.” Substitute skin color for the Ganymedian frown and . . . get it? Noticing!

    The yearning to expose the immutable hierarchy of human types likewise animates “The Pasture,” by “Meta Prime.” It envisions a future in which humans are socially categorized as Sheep, Camels, Lions, and Children. The latter three designate the three-stage metamorphoses of the soul in Thus Spoke Zarathustra. In Meta Prime’s story, only those born as Camels, Lions, or Children are afforded respect, while the Sheep are placed at an early age in a space known as The Pasture, where they are digitally surveilled and cared for. Parents of Sheep hope desperately that their children might graduate from The Pasture into a higher status, thus “transcend[ing] their lineage” and “proving [their] soul to be something more than that of [their] ancestors.” But this rarely happens. Usually the Sheep leave The Pasture and enter society as Sheep. “Even the families who were able to get their offspring placed in ideal positions in The Pasture had little hope of making a difference… They had to live with the fact that souls of their caliber had not built society… and only now had the opportunity to enjoy it by the good graces of the Children, Lions, and Camels who had sacrificed before them.” 

    The message is unmistakable: our actually existing society is maintained only thanks to the efforts of a hereditary or natural aristocracy, whose superiority in talent and responsibility is simply inaccessible to the many. Meta Prime only wishes we had the courage to admit this bare reality and thus furnish the aristocrats with the rewards of rule and respect naturally due to them, while disabusing the small-souled many of their grubby hopes of social ascent or equality. 

    “Blood Ties,” by the writer “Mythpilot,” adds an interesting geopolitical twist to these hereditarian concerns. The story pictures James, the aristocratic ruler of a Western country, retiring to a rare moment of intimacy with his wife, Anna, after the couple has publicly announced the impending marriage of their daughter to a son of the Russian nobility. Delighted about this union, James and Anna reflect fondly on their own, a “grand romance,” a match made in heaven — or more precisely, by “biopolitics,” as Anna reminds her husband. Now, a similar fusion of noble lines promises to seal the peace between their country and the Russians — “the people who once tried to bomb us,” Anna notes. James waxes philosophical: “It’s a new world, darling, and it belongs to us. We used to trust in pieces of paper for peace. Now we trust blood… Peace has a price. Blood for blood.” Anna concurs: “Despite my mother-sadness, I think it’s a marvelous thing. Blood is human, blood is warm, blood is us. This can be the beginning of a new age, a human age.” The “old world” wasted itself for the sake of abstract principles, when all it took to establish harmony between nations was to restore the premodern politics of “personality” and intermarriage. The countless victims of Europe’s monarchic wars could not be reached for comment.

    If Mythpilot offers a farcically gentle vision of a world order built on blood and genes, a contributor who goes by “P.C.M. Christ” — the nom de plume’s theological significance will become clear in a moment — harbors no such illusions. In “Sins of the Fathers,” at once the collection’s most impressive and repellent story, P.C.M. Christ takes the dissident right’s eugenic politics to their terrifying, and ferociously anti-Christian, terminus. 

    In the future, all children born with intellectual or physical disabilities are banished beyond the pale of civilization, so that neither the family nor the state has to bear the burden of “a life that could never recover from its handicaps.” Tyler is one such child. While his older brother, Harrison, is a genetic marvel of intelligence and athleticism, Tyler is “half-formed and hideous” and “crippled by various mental disorders.” Agents of the state are headed over to collect him for removal, and his mother and father must make a choice. Parents are permitted to accompany their children into exile, on the condition that they never return to civilization. Tyler’s mother, Mary, is already firmly resolved: she is staying with Harrison among the healthy. But the father, Chris, is torn. “I need some time, Mary,” he says, but he only has thirty minutes to decide. And then: “I’m going with Tyler.” It is a touching moment, all the more notable for the elliptical brevity with which the author describes it. Yet it soon becomes clear that, for P.C.M. Christ, the father is a contemptible character. 

    Transported with Tyler to a camp far outside the city, Chris finds himself forced to permanently undress and to submit to the egalitarian diktats of a figure called simply “Mother”: a cross between matriarch, corporate diversity manager, and female correctional officer. “Our way of living is communal,” Mother instructs Chris. “Care, sex, love; all are given freely and broadly…We do not allow anyone to be above another.” Again: “There is one sin in this community, and it is unforgivable — that you would place yourself above the group.” Since the community is founded upon a constitutive “weakness,” Mother adds, those who dare inject strength or excellence or hierarchy face “unrelenting punishment,” lest they jeopardize communal “safety.”

    Skeptical at first, Chris learns to accept this state of affairs. So much so that after six months, his male breasts begin to emit milk — “a sign, he was told, of his growing empathy.” Chris’s transformation from husband to milkmaid reaches its apotheosis when a family with a Down syndrome daughter approaches him, and he happily breastfeeds the lot of them, as their bodies and feelings melt into each other. Tyler, meanwhile, shows no improvement, “only growing more and more demanding” over time. But Chris counts himself blessed, feeling “righteousness through pain” and — P.C.M. Christ adds with all the Nietzschean venom he can muster — “gratitude for the hobbling weight of a cross to bear.”

    In addition to its eugenic dimension, P.C.M. Christ’s story introduces a second major theme of the anthology and, by extension, of the dissident right: namely, a fear of female power as the biological engine of social egalitarianism. The idea goes back to the movement’s leading thinker, Costin Alamariu, the Romanian-born political scientist behind the pseudonym-cum-online persona Bronze Age Pervert or BAP (complete with a broken faux-primitive syntax). Alamariu’s Yale dissertation from 2015 — published independently last year as Selective Breeding and the Birth of Philosophy — contends that philosophy at its classical origins was foremost concerned with “the problem of breeding.” (The book briefly topped the Amazon charts.) Eugenics, good breeding, “the standard of nature” as conveyed by Plato’s and Aristotle’s endless talk of excellences among horses and other animals — all this for Alamariu represents the rebellion of the noble barbarian against the female-led egalitarianism that is society’s default form. Alamariu, and Keeperman and dozens of lesser dissident-right figures after him, use the metaphor of the “longhouse” — the communal living space supposedly associated with sedentary agricultural civilization — to represent this matriarchal despotism.

    In “Sins of the Fathers,” the camp for disabled children ruled by a female disciplinarian — promoting equality even as she flexes her own power — is an obvious longhouse. The same complex of ideas appears throughout the anthology, with the authors variously equating femininity with collective organization, campus diversity nostrums, anti-racism and the removal of Confederate monuments, and the general sapping of vigor and vitality. Yet the contributors to After the War are seemingly divided over the prospects for resistance, with some staging misogynistic orgies of male triumph, while others merely assert a male right to pleasure as the condition of perpetual female domination.

    The opening story by “V.N. Ebert” — one of the more drearily on-the-nose pieces — is narrated by a spacecraft pilot in a future moon colony. “The Moon had been a libertarian thing” at first, he tells us. But the colony is now menaced by the same creeping bureaucratization that long ago suffocated adventure and heroism on Earth. A visiting female activist embodies this threat. “She didn’t acclimate well,” the pilot observes. “Wanted to organize, whatever that meant.” Her recruitment fliers “had a rainbow flag I remembered from down there.” The girl rails against “exploitation and violence,” but our pilot pays her no heed, and she finally abandons her mission and returns to Earth. Noble barbarian 1, female busybody 0.

    “Genesis Revelation,” by “Mencius Moldbugman,” transmutes the male triumph over female power into an insane Burroughs-esque mythscape. Here, the horrors of the longhouse are laid bare. It is a “dark prison” whose walls are smeared with “the blood of weak men, the blood of men cowering wild-eyed.” Overseeing this prison are the “doe-eyed but dangerous women” whose power has depended upon the “suffocation” of men down the ages. Our protagonist, identified as “the warrior,” enters the longhouse bent on ending this female despotism. “With purpose, the warrior strode toward the nearest woman and grabbed her.” Defenseless before such boldness, “the woman leaned back and opened her legs, offering herself to her new master. Her sex thanked him for his strength and moistened with relief that her reign had finally come to an end. The other women took heed and did the same.” The dissident-right warrior screws his way to freedom, winning over all womankind except for “one old crone too bitter and barren to bear the blessings of his fruit.” But the harridan, too, eventually succumbs to the warrior’s determination, her everlasting NO to life drowned out by his everlasting YES. Finally, “the warrior stepped out of the longhouse, loyal mothers to his future sons in tow.”

    “A Big Man on Campus,” by “Noble Red,” is equally heavy-handed. On her way to Drag Queen Story Hour at Ruth Bader Ginsburg College in upstate New York, the freshman Margaret spots the only boy on campus: tall, handsome, “the most beautiful young man she’d ever laid eyes on.” A friend informs her that his nickname is Shakespeare — “because everyone gets to shake his spear.” The friend adds without elaborating that “he’s the college rapist, of course.” An official at the registrar’s office unlocks the meaning of this mysterious statement for Margaret. “The number one reason why young women go to college is to get raped,” she explains. Parents don’t intervene, “because they realized that their daughters claiming to have been raped was a marker of high status. But more importantly, it was and is a marker of political affiliation. It means you’re one of the right people.” 

    Problem is, “demand very much exceeds supply.” Enter “Shakespeare”: “We employ a low-status male to rape all the students. He doesn’t really rape them, of course. They just go to his room for thirty minutes and then allege that he did. We log the complaint, inform the authorities, file all the paperwork…” The boy never faces legal jeopardy, because the girls all profess to be “too traumatized” to go through with the ordeal of a police investigation and trial. As she enters his room for her own turn, Margaret thinks she can “save” the boy-prisoner of female desire and female politics. Noble Red’s story, then, ends on a note of pessimism when it comes to overcoming the longhouse, at least among the college-attending classes. Yet for the author, the miserable fate of “Shakespeare” isn’t lacking for erotic possibility: the girls hold the power to ruin him, even as they also hold his “spear.”

    In his own story, “Vampire Island,” Alamariu/BAP proposes a similar pattern for resetting relations between the sexes: not the total defeat of the longhouse in the manner of Mencius Moldbugman’s warrior, but a renegotiation of the terms of female domination. In the wake of nuclear war, the five hundred or so remnants of the LC (“Lewis and Clark”) battalion have taken refuge on Guam, enjoying plentiful food and abandoned fuel and living as equals in generous leisure. A bust of the “Blond Beast” is enshrined at the center of their camp.

    This tropical idyll is interrupted when a few of the men vanish while on excursions. Someone or something is kidnapping the soldiers. More specifically, “the most conspicuously handsome and fit were being picked off.” A party reconnoitering the nearby jungle in search of the missing learns the truth: their comrades have been taken prisoner — for the purpose of breeding — by a band of semi-civilized amazons. “Pumping relentlessly into the frenzied gripping pussies of the ecstatic amazons,” the captive prisoners are now cheered and now flogged by the “violent vampiric cum huntresses.” The horror, the horror! But there are too many amazons, and the men of the battalion resolve to sue for peace. Under the settlement, the amazons are to perform “no more than three extractions per day, and this only a week at a time, with days of rest in between, fed shellfish, pineapple, and cured wild boar by the amazons’ dwarf-like servant class.” For BAP, at least when he is scribbling trashy post-apocalyptic erotica, female domination, rightly ordered, entails male pleasure. 

    Reconciling the contradictions in dissident-right sexuality might appear impossible. One mode of fantasy vents revulsion at female sexuality: as a mysterious and “natural” power in itself and as the source of egalitarian politics that must be vanquished by the noble barbarian. The other eroticizes the status quo of female domination: hence the recurrent figure of the lone and helpless male overpowered by groups of sexually dominant women. The through-line seems to be an inability to view sex through any lens but that of power and counterpower: it’s the brawny male brute or the dominatrix all the way down.

    Interestingly, BAP himself straddles the two modes of fantasy. As a thinker, he promotes the notion that matriarchy is the stifling default state of society, one that can only be resisted through heroic male exertion. As a storyteller, however, he imagines amazons lashing his Aryan heroes — like Keeperman, BAP is part-Jewish — and forcibly extracting their life-giving seed. Then again, BAP’s idol Nietzsche advised: “Going to a woman? Do not forget the whip!” — yet he was also photographed (with the philosopher Paul Rée) harnessed animal-like to a cart carrying his beloved Lou Andreas-Salomé, whip in hand, looking knowingly at the camera.

    The largest share of the stories in this sickening volume is devoted to imagining the processes of social breakdown, civil conflict, frontier settlement, and population transfer that open up new horizons of freedom for the noble barbarian — or at least, that bring the current order to a close.

    “Mog the Urbanite” contributes a tightly composed — if politically chilling — tale about a pair of boys examining their grandfather’s collection of strange trophies. The ancient objects carry labels such as “The Skin of Senator Molembek” and “Warhead From a Minuteman Missile.” But the one that most absorbs them has a faded label, and the boys can’t figure out what it is. It is a guitar, though they wonder whether it might be a weapon before giving up. Later, as they approach the “throne room” of “the Warlord,” they wonder why such an object would bear a sticker reading: “This Machine Kills Fascists.”

    In “A Whole World,” the writer “Golgi Apparatus” recounts the “informal invasion” and “second colonization of Africa” by enterprising privateers. The story, written in a breathless tone and cadence, describes the “CEO monarchs and eccentric pioneers” and “tech-bro caesars” who in the late twenty-first century manage to subdue the continent, once more, to the West’s undying Promethean impulse. The United States is in decay, but “in Conakry, a solitary genius known only as the Master rules through a network of undying mechanical servants — kept alive, some whisper, through a twisted Kabbalistic occultism optimized in a laboratory… Ethiopia is under Mormon control.” Mercantile hubs and “neo-Singapores” blossom across the Gold Coast.

    “Reconquista,” L0m3z’s own contribution, treads similar ground. More competently written than the others, it offers a faux-historical narrative about the future takeover of California from Mexican cartels by a militia amid the apparent breakup of the United States. After generations during which they could only dream of possessing “the birthplace of their grandfathers,” the militiamen can now claim any mansion they wish. But for the moment they are celebrating victory with a barbecue on the beach. I couldn’t help but recall that this fantasy of a Californian Reconquista — the recolonization of the former Golden State by its “indigenous” population — is the work of Jonathan Keeperman, son of Fred Keeperman of Pitkin Avenue, Brooklyn.

    Other writers foresee the present egalitarian regime staying in place, and its opponents either escaping to ungoverned spaces or else mounting special operations and a low-intensity insurgency. “Demeter,” a tightly crafted horror tale by “Detective Wolfman,” features a trucker who smuggles statues of forbidden historical figures such as Thomas Jefferson and Robert E. Lee to more tolerant places in South America and Eastern Europe. Interdicted by the villainous FBI, the smuggler reveals himself to be a vampire: the undying and undead Southern spirit, avenging the Lost Cause from beyond the grave.

    A related complex of themes — After the War is a rich document of contemporary political anthropology — has to do with male self-help and self-improvement. In “Under the Willow,” a writer who goes by “William Wheelwright” pompously describes a member of a future caste of warriors, also named William, as being of “epistolary persuasion.” The William of the story, we learn, “was maniacally focused on the perfection of himself. In the gymnasium, of his body. In the library, of his mind. And in the sacristy, of his soul.” Judging by his racist musings on the X app, however, William Wheelwright has a long way to go to reach basic human decency, let alone the spiritual perfection that he ascribes to his character: last year he responded to a poll asking, “Fellas you have to pick a [girlfriend], which one of these is least objectionable: former escort; sex with dogs; a black.” “Dogs are the least evil,” he wrote.

    Time and again, we encounter young men who have prepared themselves for the coming war-apocalypse-new world through strenuous exercise and the consumption of healthful food: “Raw milk. Berries. A few slices of 100 percent grass-fed organic steaks from cows that were kept in red-light vaults five hours a day,” as one story has it. You know, not like the bugs and zogslop on which the dysgenic masses gorge themselves. The prevailing emotion in these writings is disgust, and they provoke the same feeling in their readers. 

    Between the civil-war reveries, the fantasies of exacting violent revenge against the woke disciplinarians, and the vitalist commitment to the cultivation of minds and bodies fit for armed conflict, it is tempting to view the dissident right as a serious threat of radicalization. Online, no doubt, on various dark and darker webs, they have such an effect. But I think something else is afoot here: namely, the romanticization of social developments that are already unfolding in the United States and other advanced market systems. Put another way, dissident-right culture merely lends a heroic sheen to our actually existing realities and the ideological structures used to legitimate them.

    In actually existing advanced market societies, there is no need to set up camps on the outskirts of cities to house children with Down syndrome, because many fetuses diagnosed with the condition, and others like it, are terminated before birth; in Iceland and Denmark, the rate approaches a hundred percent. Other burdensome citizens — not just those facing terminal diseases, but increasingly the elderly and even young people with mental illness — are goaded into medically assisted suicide in Canada and the Benelux states, with advocates fighting tirelessly to expand such MAID regimes elsewhere in the West. Categorizing people from birth as Sheep or Lions or what have you, so as to ensure that everyone knows his place, is equally superfluous in today’s market societies. On both sides of the Atlantic, social mobility has largely ground to a halt. In the United States, it takes an average of six generations for the advantages associated with inherited family wealth to disappear, according to research from the Brookings Institution. Among the libertarian right, as well as some progressives, this reneging on the egalitarian promise of “meritocracy” is justified on the basis of the hereditary genius and virtue of the rich, Charles Murray-style. The “Pasture” that they recommend is universal basic income or forms of so-called negative income tax: handouts aimed at mollifying economically useless individuals, so as to obviate reforms that might alter the lopsided distribution of power generated by markets. Confronted with the growing rigidity of hierarchies in contemporary capitalist society, the dissident right, wildly and without any sense of irony, expresses its disappointment by inventing a new hierarchical thinking draped in a mystical vernacular: an inherited hierarchy uglier and even less mutable than the one it deplores.

    Likewise, the fusion of “noble” blood and genes is a feature of advanced market societies, as Murray pointed out more than a decade ago in Coming Apart. Indeed, sociology has been aware of it as a problem for meritocracy ever since Michael Young coined the term in his dystopian novel-essay hybrid The Rise of the Meritocracy, first published in 1958. Our jet-setting meritocrats are already apt to unite their bloodlines, with little to show for it by way of greater social progress or harmony between the nations. The dissident right’s fantasies of intermarried neo-aristocrats merely attach the prestige of the ancien régime to the love lives of tech bros and the consultant class, much as an earlier generation of eugenic ideologues did in the late nineteenth and early twentieth centuries. It was repulsive then and it is repulsive now.

    As for sexual relations defined solely on the basis of power, that, too, is characteristic of the world we inhabit. A sexual imaginary that can only alternate between the male brute who screws his way to freedom and the vampiric, dominant female — that’s about as radical as the ethics of contemporary pornography. As the critic Geoff Shullenberger has pointed out, moreover, there is nothing particularly novel about the notion that matriarchy is society’s default setting: the radical feminists of the 1970s got there first. Once again the creepy right inverts, and parodies, the radical left: the dissident rightists merely reverse the normative valuation of this state of sexual affairs — even as they also eroticize the matriarch who holds the whip.

    Even the new-frontier fantasies of the dissident right aren’t that far-fetched. China, Russia, and the North Atlantic powers are already mounting a second colonization of Africa. Only, the process isn’t taking place as Silicon Valley-adjacent writers besotted with the Californian Ideology might imagine. It is a project not of bitcoin-flush privateers, but of massive state-directed or state-backed enterprises that are marvels of social organization. On a less dramatic scale, numerous movements on the existing right encourage and organize the resettlement of likeminded families away from disordered urban cores of blue cities and states and toward safer and more socially cohesive red areas. It is their right to do so; but buying real estate is not a radical act.

    Much the same could be said for the dissident right’s obsession with bodybuilding. In their fiction and “nonfiction” (such as it is), the dissident rightists insist that to overcome the schemes of woke despots and h.r. henpeckers, you must train body and mind and lift yourself above the despicable masses. Social antagonism is not to be collectively resolved, but transcended by the exertions of the heroic individual perfecting himself, as the classical sculptor chiseled elegant form out of lowly matter. Yet such self-help programs, such aspirations to a salvific superiority, are nothing new. As the left historian Charles Sellers observed, as far back as the nineteenth century the striving middle classes often corralled surging social discontent into a frenzy for self-discipline and what today’s online right would call “clean eating.” As if perfection were ever the solution to a social problem.

    This is the dark transubstantiation that the barbarian right works upon market society: turning even its most humdrum offerings — going to the gym and counting your calories — into a means for defeating the woke matriarchy or any other real or imagined enemy. Much as, in American Pastoral, Merry’s fury, having erupted in violence, finally exhausts itself in the harmless pseudo-mysticism and lifestyle experimentation that were the endpoint of the New Left, so the dissident right, for all its countercultural energy and its self-congratulatory sense of its own radicalism, ratifies the deeper logic of the very society against which its adherents purport to rebel. It is only an ugly and fevered diversion from what really ails us.

    True radicalism and true dissent in contemporary America require a critical examination of the meritocratic ideal and the power relations it has served to disguise since the early twentieth century. They would mean rejecting the routine discarding of weak and vulnerable lives, and reaffirming moral and political universalism, whether rooted in the Bible or in social democracy, especially in our time, when these ideals are under assault from seemingly every quarter. Lending the present state of affairs a new legitimacy on the basis of IQ or racism is not radical. It is simply evil.

    A Poetry of Place

    When we first met, you said you hoped to write

    a place as yet unwritten, maybe here,

    the last of the café’s lunch crowd clearing out

    with a soft ceramic clink and spray of light 

    through glass to glaze your dark cascade of hair.

    It’s not Manhattan, after all: it’s not

    a place for public life, yet here we sit

    with much between us still unspoken,

    each unfamiliar blossom yet to bloom.

    One Saturday I lingered in the park

    not far from your apartment, the faint perfume

    of evening primrose floating through the dark 

    with petals cool as rain against the skin,

    the season still unchronicled, but you

    had packed your bags and flitted back to Brooklyn,

    from what, and to what end, I never knew.

    Somewhere Else

    The last time we ever spoke

    Missouri suburbs filled with snow

    and snowfall blotted out the oak

    beyond your buried patio.

    You’d never see another spring.

    Falls . . . confusion . . . vertigo . . . 

    Familiar landmarks vanishing,

    you stood up from your wheelchair.

    Where did you think you were going?

    Across the Firth of Forth to Fife,

    to a croft in Pittenweem—why there,

    a place that you would never see?

    Though stranded, at the end of life,

    you still had somewhere else to be.

    Memory Care

    Memory care makes final introductions

    to residents whose names have slipped away

    at the slightest pressure, evanescent

    syllables for those who will not be here long,

    mere bubbles in a froth of foam,

    more transient than resident, some

    transitioning, not to another gender,

    but to another state of being altogether.

    Are you in the bathtub? my father greeted me,

    perhaps replacing past with present, room

    with tub, the week since I last visited

    with a momentary absence. He glimpsed

    the cascade of errors, winced, then shrugged

    and raised his palms in mock surrender.

    Nights are worst, when Mr. D don’t act right,

    but crawls around unplugging clocks and lamps

    and grunting like an animal, until the staff

    step in to strap him down with soft restraints.

    Dreams subject him to a long examination,

    turning over his childhood to find the source of sadness,

    testing each failure, each scene of humiliation,

    as he turns over the exam to find a list of questions

    in an ancient, unfamiliar script

    with blanks that spread like spilt milk.

    Tomorrow — tomorrow he won’t remember

    where he lives, much less these ghostly neural flickers;

    but tonight the dreams remember him.

    What Comes After

    Reconstituted voices,

    scraps of cloud caught in branches,

    the morning campfire of Pu-erh tea

    or mown hay of white peony,

    an old man’s blazer hanging on its peg,

    the human funk of toasted cumin seeds,

    oak burnt to ashes, cinerulent fox fur,

    crepey grape leaves in late November,

    a shirred old pumpkin,

    the soap and pepper of walnut hulls,

    the must of summer clothes left out through fall,

    the shadow of a straw hat 

    hooked on a chair back,

    Parcheesi’s orphaned pawn,

    the clatter of thick French china,

    Bordeaux of old furniture,

    the frazzle of a bee,

    A above middle C,

    oaks in spring bright as lettuce,

    butter and apricot of chanterelles,

    brine on stone after a storm,

    the smell of lake water,

    and all night, the tentative knock 

    of a hull against the dock.

     

    Why College, or What Have We Done?

    Every fall I teach a first-year seminar called “Why College? Historical and Contemporary Perspectives.” On the first day of class I present a list of possible purposes for college and ask students to rank them. “Finding your passion” and “changing the world” are always the top vote-getters, because that is the story we tell about college. Welcoming the new students at convocation, the president declares that they can do whatever they want with their lives, so they should do something they love. And they are also reminded to live for others, not just for themselves. At the University of Pennsylvania, where I teach, that inevitably means trotting out the school’s favorite quote from its famous founder. “The noblest question in the world is, ‘What good may I do in it?’” Benjamin Franklin asked.

    I wish that was the real point of college, and so do the students. But we both know better. The point is to get ahead, and to win the game. That means giving the teacher (in this case, me) the answer that he wants to hear. And outside of class, it means competing for every trophy in sight. Indeed, the competition is what produces the value. A few years ago, a student told my seminar that she had “tried out” for the Alzheimer’s Buddies Club — which sends people to visit patients in nearby hospitals — but that she didn’t “get in.” When she applied, she said, she had to submit an essay explaining why she wanted to participate; then she had to undergo an interview with an officer in the organization, who quizzed her about her “motivations” and “qualifications” for the role. Her story saddened me. I told the class that I didn’t think Penn should sponsor a group that winnowed people so selectively for a volunteer opportunity. 

    It’s a free country, I said, and if students wanted to test and interview each other, that was their own business. But if they wanted Penn’s imprimatur — and its money — that was a different story. Everyone who wished to volunteer should be able to do so; and if more people applied than the hospital could accommodate, they should draw lots to decide who went. Students looked down at their notebooks, avoiding my gaze, and the room got quiet. Finally someone broke the silence. “If they did that,” a brave student explained, “nobody would apply.” Never mind the poignant essay about Grandma and her descent into dementia, or the resumé (the Alzheimer’s club required those, too) showing that you visited nursing homes in high school. The point, again, is to win. And if the game is too easy, there is no point at all.

    As you proceed through college, the stakes get higher. The next shiny object is the post-graduation job, ideally in finance, tech, or management consulting. At last count, sixty percent of Penn’s students entered one of these three fields. We tell them to find their passions and to change the world. But somehow, after four years here, over half of them choose the same thing. Many of them — probably most of them — are not passionate about it. And does anyone seriously believe that sending more people from Penn to Wall Street will make the world a better place? It’s not about that. We have socialized these young people for a Hobbesian war of all against all, where everyone battles for a scarce good. And we rarely — if ever — challenge them to reflect on whether it really is good, and for whom.

    That’s on us. The campus protests last spring over Israel and Palestine — and the related incidents of antisemitism — have occasioned another bout of handwringing about the moral state of our students. This is as old as America itself; from the start, adults have worried about whether the kids are alright. But today’s anxieties have the wrong focus. The big problem at college is not political correctness, or wokeness, or racism, or antisemitism. The big problem is cynicism, spawned by an institution that tells young people one thing and does the opposite. If we truly believed our rhetoric about individual exploration and collective uplift, we would structure college in a very different fashion. But we don’t believe it and the students know it. They have found us out.

    The war of all against all starts well before these people get to college, of course. The first big prize is getting in. Despite all their blather about diversity, the elite universities still draw most of their students from the upper rungs of the economic ladder. And that presents a puzzle for ambitious high schoolers and their worried parents. In a situation where everyone is pretty much the same, how do you stand out? The answer is to cultivate a self with a distinct passion — that word again! — and to compile a set of corresponding experiences, all designed to show that you deserve admission more than the next person. When, in my seminar, we address the history of selective colleges, I show the students John F. Kennedy’s application to Harvard in 1935. Asked why he wants to go there, Kennedy replies that he would like to attend the same school as his father. He also writes that he has always wanted to be a “Harvard man,” which doesn’t explain why he went to Princeton first. (He dropped out, for health reasons.) His application file also includes a remarkable letter from Joseph P. Kennedy, who admits that his son was “careless” in high school; hence the mediocre grades at Choate, which are on full display in the file. (Shout-out to the Kennedy Library in Boston, which deserves its own Profile in Courage award for posting all this material on the Web.) My students get indignant about Kennedy’s application, pointing out — correctly — that he was a rich kid who got in solely because of his name. True enough, I reply, and Kennedy would have agreed. He never imagined that he had earned his way into Harvard. And in the tradition of noblesse oblige, this meant that he also had a duty to serve others.

    Not so for today’s meritocrats. We seduce them into believing that they are special — more special, of course, than the many thousands of kids we reject. Forget about the battalions of tutors, counselors, and consultants that assisted them along the way. Not to mention the myriad other privileges that they received, simply from the circumstances of their birth. To drive home the point, I tell my students — tongue firmly in cheek — that I was a very good fetus. I didn’t just lie there in the amniotic fluid, sucking on the teat of the maternal nanny state. I pulled myself up by my umbilical cord! I made sure that I was born in a rich country, where I had plenty to eat. And I also chose educated and curious parents, who took me around the world and exposed me to its infinite complexity. The students all smile, sheepishly, because they understand what I’m saying. But we’ve got our story, and we’re sticking to it. Everyone here earned their way in, which makes them better than the people who didn’t.

    That sets them up for misery down the road, when they get turned down by a club or a fraternity. Or, later, by an employer. Or a graduate school. If your entire fate rests in your own hands, you bear all the responsibility for your failures. You will interpret them as existential judgments about your very being. And when you succeed, conversely, you will be less charitable to the people who fail. I won this race on my own merit! If you lost, you must not have enough merit. You should try harder, or maybe just accept your fate. But don’t expect anything from me, because — remember — I deserve to be here. I don’t owe you a thing. Jack Kennedy didn’t believe that for a second, which is why he devoted his life to public service: from whom much was given, much was expected. But many of my students — perhaps most of them — do believe it. This is what we have taught them. Their job is not to serve others; it is to stand out from them.

    But standing out becomes harder when you arrive at a college full of standouts. And it is still harder when almost all of them get A’s. Last year, seventy-nine percent of the grades given to Harvard undergraduates were in the A range (A+, A, or A-), compared to sixty percent a decade earlier. The same fraction of grades at Yale were A’s, up from sixty-seven percent in 2010-2011. “When we act as though virtually everything that gets turned in is some kind of A — where A is supposedly meaning ‘excellent work’ — we are simply being dishonest to our students,” the Yale philosophy professor Shelly Kagan observed. He’s right, and — again — the students know it. In a refreshingly candid essay published last spring, a Harvard undergraduate named Aden Barton admitted that students could succeed in classes without breaking a sweat — sometimes even without showing up. 

    For the past forty years, I have endured a recurring nightmare in which I arrive in a classroom — as a student, not a teacher — and realize that I am a month late. At Harvard, Barton reports, one of his friends didn’t attend any classes at all for the first month of the term. The mere thought of doing that — even for one class — still wakes me up in a panic. But it was no problem for Barton’s buddy, who blew off his entire course schedule. He still had to submit the assigned work. But there is not much of it, Barton writes, and almost anything you turn in will get you an A. The safest move — especially if, like Barton, you have not done the reading — is to mimic the professor’s opinions, which invariably fall somewhere on the left. Most students aren’t like the keffiyeh-clad protesters we saw on TV this spring. They are more like Aden Barton, who echoes a few political platitudes on his way to an easy A.

    That leaves lots of time for the realm where you can stand out: extra-curricular activities. Quoting a dean at NYU, Barton notes that students “feel the need to distinguish themselves outside the classroom” because they are “essentially indistinguishable” inside of it. The dense network of student organizations provides just the ticket, because so many of them are selective. (See: Alzheimer’s Buddies Club.) “At Harvard, one cannot simply ‘join a club,’” the Unofficial Guide to Harvard notes. “Instead, you must prove your worth.” Enter “comping,” which is Harvard-speak for competitive tryouts. “My roommate was wait-listed to volunteer at a homeless shelter,” the Unofficial Guide continues. “Some girl on my floor got cut from a Zumba class. It’s brutal.” At the University of Virginia, one parent recalled, their son was rejected by a pep group that supports the basketball team. He started a rival organization, but almost nobody signed up for it. Then he got wise to the system and required people to submit applications, which immediately quadrupled the number of students who wanted to join. Anything that is good must be competitive. And if it’s not competitive, it can’t be much good.

    To its credit, Penn has prohibited student groups from collecting resumes from first-year students or from requiring “specific attire” during early-round interviews. But even these good-faith efforts to tone down the war of all against all demonstrate its enduring power. If you cannot demand that first-round candidates wear nice clothes, that means it’s fine to make them dress up for the call-backs. And whereas resumes cannot be solicited from freshmen, Penn decreed, “a list of activities may be requested on a written application.” That sounds a lot like a resume to me, and I doubt that the students see much difference between the two. It’s still about building a personal brand, and — most of all — about besting your peers. “Not all of you will make it in,” the Unofficial Guide to Harvard warns, in its chapter about comping. “Let the Hunger Games begin!”

    Unsurprisingly, the most competitive organizations are often the ones devoted to the prize jobs that students will seek after graduation. A search for “consulting” on the Penn Clubs website yields thirty-one different hits, including Global Research and Consulting, Penn International Impact Consulting, and Consult for America. Type in “finance” and you will get twenty-five results, including Business Brilliance, Penn Impact Investing, and M & A at Penn. (You read that right: we have a student group devoted to mergers and acquisitions.) You can find similar business-themed organizations on campuses around the country. They generate their own revenue on top of what they get from the universities, which have also found creative ways to monetize them via Corporate Partnership Programs (CPPs). In exchange for a fee from a company, the university provides it with member lists of relevant student clubs. That saves money for companies in recruiting new talent, of course, and it also allows them to target hard-to-reach populations. Noting that corporations often have a difficult time “balancing their diversity ratios,” one University of California campus promised them the email addresses of students in the Society of Women Engineers and the Society of Black Engineers.

    The CPPs also link companies to university career services departments, which have become headhunting agencies for the big finance and consulting firms. As the sociologist Amy Binder has observed, we used to think of career services as an office that helps students discover what they want to do. Now it delivers students to prospective employers, especially those who can afford to pay for prime real estate at the career-services center. Harvard even renamed its own career center after a prominent investment banker, who — not incidentally — gave the university a hefty donation. As always, the students get the message. 

    At the Harvard office, one student told Binder, there is an “entire section” devoted to finance and another to consulting. “And then they have the not-for-profits as a general clump [laughs] and then they have ‘other’ [laughs harder],” the student added. Students at Stanford likewise mocked the “gold, silver, platinum” arrangement at the university’s career center: obviously, the companies that put down the most cash got the most prominent billing. As their nervous jibes illustrated, students are uncomfortable with the nakedly transactional nature of this arrangement. But they also acknowledge that it works, in its own grim way. Asked to define a “good job,” one Harvard senior pointed to the firms that dominated the career-services center as well as on-campus recruiting events, where companies shell out big bucks to host information sessions and cocktail parties. “I guess a good job means consulting or finance, because, well, look, that’s what the Office of Career Services has,” the student said.

    Throw in a healthy dose of peer pressure, and it becomes next-to-impossible to resist the siren calls of finance, tech, and consulting. “There was, like, this stampede to start applying, and it wasn’t my conscious decision,” a Harvard graduate recalled. “It was more, I guess, I mean, I hate to use the term ‘fear of missing out.’” I hate it, too, but it is real. Nobody—well, almost nobody — goes to college thinking they want to become a management consultant. Then you watch your friends get dressed up, get interviewed, and land a high-salaried job. And all of a sudden you want the same thing. “There’s definitely a herd mentality,” a Harvard student told the New York Times this spring. “If you’re not doing finance or tech, it can feel like you’re doing something wrong.” It’s here, it’s available, and — most of all — people you admire are competing for it. Why not throw your own hat in the ring, and see where it takes you?

     Because the jobs stink, that’s why. You won’t hear that at the career-services office, which is paid to propagandize for the big firms. If you listen to the students, however, it comes through loud and clear. They know — in their bones — they will not find their passion or create a better planet at Bain Capital or Boston Consulting Group. “Everyone has this ‘change-the-world’ mentality when they come to Stanford,” one student said. “You come wanting to change the world and then you leave wanting to work at McKinsey.” And much of that work falls into the category of “bullshit jobs,” a term coined by the late anthropologist David Graeber. A bullshit job is one that you do not believe in, but that you do anyway; it is “a form of paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence,” wrote Graeber. 

    You don’t have to buy Graeber’s neo-Marxist explanation for this phenomenon — that a population kept busy with make-work will not revolt against capitalism — to see that he was onto something. We have created vast bureaucratic armies of managers, analysts, and assistants who despise what they do, and whom nobody would miss if they disappeared tomorrow. “Could there be anything more demoralizing,” Graeber asked, “than having to wake up in the morning five out of seven days in one’s adult life to perform a task that one secretly believed did not need to be performed — that was simply a waste of time or resources, or that even made the world worse?” The question answers itself. 

    Every month brings another report about burgeoning rates of anxiety, depression, and loneliness among teenagers and young adults. In a recent book, Jonathan Haidt attributes the youth mental-health crisis to the ubiquity of smartphones and especially of social media. The argument makes intuitive sense, especially to oldsters who see the kids scrolling all day — and well into the night — on their phones. If you are bombarded 24/7 with curated content from your peers, you will inevitably conclude that they are hotter and happier than you are. What could be more demoralizing — or more depressing — than that? But a monocausal explanation for a complicated phenomenon always fails, and social science has failed to document a clear causal relationship between social media use and mental illness. (In fact, one prominent study found that joining Facebook can enhance well-being.) Nobody seriously questions whether the mental health of young people has plummeted; the big question is why. “I keep asking for alternatives,” Haidt remarked recently, responding to his scholarly critics. “You don’t think it’s the smartphones and social media — what is it?” 

    Perhaps one of the causes is the scourge of bullshit jobs, and the sad sense of inevitability that surrounds them. I do believe that these jobs represent “a scar across our collective soul,” as Graeber memorably declared. But I don’t blame my students for taking them. Students often tell me it would be stupid or foolhardy not to take a bullshit job, even when they know they will hate it. When I graduated from college, I didn’t consider the possibility that I might be unemployable and a burden on my parents. And if you had suggested that I move in with them — something millions of college graduates do today — I would have laughed you out of the room. I grew up in a completely different world, where the continued prosperity and security of the American middle class was taken for granted. My students do not have the luxury of that presumption. They also come from a wider swath of the socio-economic spectrum: while rich kids are still the largest cohort, they are not the only one. And we all know that families are incurring enormous debt to pay for college, which surely increases students’ incentive to find high-paying jobs. Significantly, though, a recent survey at Harvard showed that first-generation and working-class students were no more (or less) likely than their well-heeled peers to take positions in finance or consulting. This isn’t just about the money, or the broader economic anxieties in our society. It is about the culture we have created inside the university itself.

    What can we do to change it?

    One common answer focuses on reviving the humanities, which require students to deliberate on the meaning of life — and, we might hope, make them reconsider their own. In several pieces in the New York Times, my colleague Ezekiel Emanuel has warned, like other commentators, against reducing college to a job-training program that downplays — or ignores — the ethical dimensions of education. His first concern was the students who praised Hamas after its October 7 attack on Israel, a response that demonstrated their “moral obliviousness”: if they had received stronger instruction in the humanities, he argued, they wouldn’t have fallen for that kind of claptrap. (Whether their professors are reinforcing the claptrap is a different issue.) More recently, Emanuel has worried that vocationalism is crowding out the liberal arts tradition and its attendant values of inquiry, exploration, and civic understanding. “Ambitious students eager to land a prestigious consulting, finance or tech job . . . find it too easy to brush aside courses in the arts, humanities and social and natural sciences,” Emanuel and his co-author warned. We need “more Socrates and Plato,” they declared, and less data sciences and accounting.

    The trends they describe are unmistakable. Humanities departments have withered over the past two decades: smaller classes, fewer majors, and a shrinking faculty. A debacle. In part, this is due to students voting with their feet: they think the jobs are elsewhere, so they gravitate to majors in business and other “practical” fields. But it also reflects conscious decision-making by the universities, which behave like businesses in their own right. In a bracing article earlier this year, the University of Chicago classicist Clifford Ando showed how “revenue-centered management” — one of several practices borrowed from the corporate realm — spurred the university’s business and public policy schools to offer undergraduate majors, so that they could siphon off more tuition dollars. (Willie Sutton said he robbed banks because that’s where the money is; at universities, it’s in the undergrads.)

    But that means less revenue for the Humanities Division, which must continue to staff the much-vaunted core curriculum classes. It has been forced to hire adjunct instructors, who are themselves so strapped financially that they cannot devote sufficient attention to their classes. Nor do most of them possess the job protections that allow tenure-track faculty to write and to teach without fear or constraint. This says all you need to know about the real priorities at the University of Chicago, which continues to tout its liberal arts bona fides even as it strips the liberal arts of resources. “The only fields that matter are ones that are essentially isomorphic with particular occupations,” Ando darkly concludes.

    A possible solution to the problem — and an increasingly popular one among university administrators — is to show how humanistic disciplines can prepare students for jobs, too. There is plenty of evidence that employers prefer someone with an English degree to a candidate who studied business, because the English major is more likely to have the skills you need in the workplace: written and oral communication, critical thinking, teamwork, and so on. Shakespeare as the road to McKinsey. In the past several decades, moreover, humanities graduates have kept pace with other fields in average annual earnings. But most of our students don’t know that. “The humanities have a marketing problem,” noted a dean at Arizona State University, using another metaphor from the business world.

    Surveys of Arizona State students have shown that they associate “humanities” with careers in teaching and (bizarrely) human resources, but nothing beyond that. So the school brought in notable humanities alumni from different professions to spread the message: philosophy, history, and literature can help you succeed in the world. It also created internships that allow students in these fields to explore a variety of occupational paths. “Students want lucrative careers, but they don’t realize that the skills they need for those careers … are all skills that they could be acquiring in the humanities disciplines,” one English professor told the National Humanities Alliance, for its recent report on “best practices” in recruiting students. We need to “catch students early,” she added, and to “challenge the narrative they’ve been fed about the humanities.”

    But we also need to challenge the narrative about bullshit jobs and to expose our complicity — to borrow the activist term du jour — in promoting them. I want young people to study history so they can think critically about what Aristotle called “the good life,” the one that is worth living. But if we continue to sell our students’ futures to the finance and consulting industries, we answer the question for them. What is the good life? The one with lots of money and status, of course. The administrators are right: the humanities can and should prepare students for different types of work — many of which we cannot imagine yet — but we won’t do that well, or honestly, if we are simultaneously channeling them into a tiny band of jobs. That constricts their imaginations of their own lives and of others’. And it makes a mockery of the liberal arts, which are supposed to liberate us from our dogmas and our preconceptions. As I tell my students, there is nothing wrong with choosing to become a management consultant, financial analyst, or tech start-up assistant, but there is something enormously wrong with an institution that advertises limitless opportunities and then funnels more than half of its population into consulting, finance, and tech.

    On the last day of my first-year seminar, we discuss William Deresiewicz’s Excellent Sheep. The book is ten years old, but it remains the best single critique of elite American universities that I have ever read. With just a semester under their belts, the students can already recognize their institution — and themselves — in Deresiewicz’s pages: the stressful admissions process, the check-the-box classes, the grim “comps” for extracurricular activities, and so on. The real purpose of a place such as Penn, Deresiewicz insists, is to enrich Penn: we produce rich alumni who give us money, which then produces more rich alumni. To get an actual education — the kind that opens your mind rather than just lines your pockets — you must resist the institution. It wants to make you into the sort of person who becomes a business consultant. If that’s not what you want, Deresiewicz warns, you will have to put up a fight.

    Which is where my students start to push back. Yes, they acknowledge, there are students who stand apart from the dominant pressures of Penn. But they are often just as miserable as the stressed-out social climbers, albeit for a different reason: they feel like they do not belong. Most of all, the students say, it’s not fair — or realistic — to expect individual eighteen-year-olds to stand against the crowd. My students can already see the tensions between what we say — find your passion, change the world — and what we do. And they can tell how it produces cynics, who mouth the cliches of personal and social transformation while they trundle off to Bain and McKinsey. Yet it is equally cynical to suggest that there is nothing we can do, as an institution, to alter this reality. It is a collective problem, my students argue, so it also requires collective solutions.

    They are right. And if enough people spoke up — students, faculty, even administrators — we could tweak and alter things for the better. We could institute a weighted lottery for admissions, whereby Penn would set a bar and randomize among all the applicants who land above it: we would still be highly selective (our bar would be high), but we would dispense with the fiction that everyone who is accepted is better than everyone who is rejected. We could require student groups to take all comers, with obvious carve-outs for varsity athletic teams and the like; you would be free to establish your own Hunger Games, of course, but not on our dime. We could put an end to on-campus recruiting, the endless parade of students in fancy clothes lining up for fancy jobs; anyone should be able to apply to the big firms, of course, but there is no good reason that we should subsidize them. We could bar the firms from buying pride of place at the career services office. And we could stop selling them the names of our students — and with that, their souls — which might be the biggest scandal of all.

    What about classwork? Aden Barton, the Harvard undergraduate, reports that it is an “afterthought” for most students. But we could change that, too. We could require students to attend class (imagine!); we could establish minimum reading and writing expectations; we could institute grading curves. The call to revive humanistic study — more Plato, please! — assumes that students would actually read Plato if Plato was assigned. And we know that many of them will not, unless we take specific measures to make sure that they do. None of this would be easy; real change rarely is. But we shouldn’t allow our fraught present-day moment — including the great technological war on reading and more generally on attention — to block us from imagining different futures, which is the whole point of studying the liberal arts in the first place.

    Why college? I still believe in the answers that we hear every year at convocation: to find your place in the world, and to leave it a better place than it was when you entered it. I recognize that we have not made good on that high and right promise. The cynics will tell you that it’s impossible, and sometimes the cynics are right. But if you think otherwise, come join me in the humanistic underground. Come to class on time. Readings are mandatory. Visitors are welcome.

    The Problem of “Popular” Sovereignty

    “In America, the people govern, the people rule, and the people are sovereign.” So said President Donald Trump in his inaugural speech to the United Nations in September 2017. “In foreign affairs, we are renewing this founding principle of sovereignty. Our government’s first duty is to its people, to our citizens… As President of the United States, I will always put America first.” Trump used the terms “sovereign” and “sovereignty” some twenty-one times in his U.N. address. As this brief quote suggests, the meaning of these terms shifted throughout his remarks: first Trump said that the people govern, then he said that those who govern must protect the people, and finally he said that the nation would act in its self-interest. 

    Sovereignty is a concept as much used as it is little understood in contemporary political discourse. In purely secular terms, sovereignty is the right to rule and to make the rules. Even if we can offer secular accounts of the concept of sovereignty today, the concept’s origins, at least in Western thought, are hardly secular. The sovereign of all sovereigns, of course, is a monotheistic God, at least within the Abrahamic religions of Judaism, Christianity, and Islam. And the paradigmatic assertion of (and submission to) sovereignty is God’s inexplicable command to Abraham to slay his son Isaac. Kierkegaard called this the “teleological suspension of the ethical,” under which justice and reason must give way to the duty to obey the absolute sovereignty of God.

    Earthly sovereignty is either a pale copy of Divine sovereignty or takes its authority from Divine delegation or approbation. It is no accident that Romans 13:1 was a favorite prooftext for magistrates and would-be sovereigns: “Let every person be subject to the governing authorities, for there is no authority except from God, and those authorities that exist have been instituted by God.” Relying on such theological arguments, European monarchs insisted on the Divine Right of Kings: on their right to rule and to make the rules, and indeed on the absoluteness, indivisibility, and non-accountability of their power, at least while on earth. After all, God himself ordained their status.

    This example suggests that one of the most frequent and important features of sovereignty is its ideological function. Claims of sovereignty mystify claims of authority. They disguise what is really going on in a political situation. Assertions of sovereignty are often designed to give the impression that one has the right to rule and to make the rules, and that others cannot and should not interfere with that right. As an ideological concept, sovereignty’s purpose is to generate a false belief in justified subordination to those claiming sovereign authority. The confusions and mystifications of sovereignty emerge from its totalizing rhetoric. 

    In practice, however, sovereignty is always partial and incomplete, hemmed in and limited by other forms of power. No one and no thing — with the exception of the Almighty, of course — is fully and truly sovereign. (And as Milton vividly portrayed in Paradise Lost, even God can face rebellion from his disgruntled angels.) Everyone, to the extent that it even makes sense to call them sovereign, is sovereign only to a certain degree, whether conceptual or empirical. Claims of sovereignty are always bumping up against competing claims by others, a bit like the three mental patients in Milton Rokeach’s The Three Christs of Ypsilanti, each of whom claimed to be Jesus.

    Countries within the international system often complain about unjustified intrusions on their sovereignty, insisting that they be left alone or that they can take justified retaliation against those intrusions. Needless to say, those countries in conflict with them make similar claims. Whether or not these claims make theoretical sense, they are central to contemporary international relations. Both American conservatives and American progressives sometimes argue that it is important to protect our national sovereignty from foreign encroachments and from international law, though, of course, they may offer very different examples. At the same time, people assert the importance of “state sovereignty” within the United States to limit the very same federal government that is supposed to be sovereign with respect to international institutions. It’s all very confusing.

    Scholars have often noticed the ideological functions of sovereignty and criticized them. In his book on popular sovereignty, Edmund Morgan pronounced the idea to be nothing more than a fiction, used by power-grabbing politicians seeking to legitimate their claims to power. Many years ago, Stephen Krasner pleaded with political scientists to drop the term because it had no empirical validity. More recently, Don Herzog argued that we should stop talking about “sovereignty” altogether. It is not a meaningful concept, he explains, because almost no one in the contemporary world still believes in the normative desirability of an absolute, indivisible, and unaccountable power attached to any earthly authority. The concept was invented and theoretically elaborated in the sixteenth and seventeenth centuries as a device for bringing the savage European wars of religion to an end. As part of that arrangement, sovereign princes henceforth would get to determine the one true religion that would be accepted within their domains. Cuius regio, eius religio, as the saying went. Yet even as European monarchs were proclaiming their Divine right to rule, thinkers such as Thomas Hobbes were undermining religious claims to sovereignty and attacking them as dangerous. This led to the rise of a new conception of sovereignty located not in individual monarchs but in an imagined collectivity called the people.

    After the collapse of the Divine Right of Kings, a new ideological formation arose to take its place. This idea is popular sovereignty. The early medieval phrase Vox populi, vox Dei — the voice of the people is the voice of God — might suggest that, like the sovereignty of kings, popular sovereignty can similarly be justified on theological grounds. In fact, the phrase was originally used to ridicule the pretensions of popular sovereignty: before it came to be used in an anti-monarchical spirit, it was deployed to suggest that it is both blasphemous and ridiculous to analogize the collective will of ordinary human beings to the voice of God. Today the idea of popular sovereignty is usually articulated in purely secular terms. In his Leviathan, Hobbes — who ridiculed theological explanations — argued that the true sovereign authority is that of “the people” themselves. Desperate to escape an awful state of nature, the people, using their own reason, band together to establish and authorize an all-powerful government — at least as long as it provides security, which, for Hobbes, is the greatest of all political goods.

    As Richard Tuck emphasizes, with the rise of popular sovereignty there emerged a crucial distinction between “sovereignty” and “government.” An absolute monarch citing Romans 13:1 in early modern Europe claimed to be the sovereign and the head of government. There were obvious exceptions — for example, during a regency when the monarch was under age — but for the most part an absolute monarch was both the governor and the sovereign. Louis XIV bluntly proclaimed “L’état, c’est moi.” Charles I in England had similar pretensions. But he, of course, lost his head, as did Louis’s successor a century later in France. Both succumbed, as it were, to popular sovereignty.

    Systems of popular sovereignty made the twin concepts of sovereignty and government come apart, especially as the size and the complexity of the state grew. In a system of popular sovereignty, the people, who are the sovereign, create a government, which can take many forms. But depending on how the framework of government is organized, the people may or may not directly participate in governance. In the Constitution of the United States, for example, there is no provision for direct democracy, although there is in many of the individual States. At the national level within the United States, the distinction between sovereignty and governance means that although the people in theory retain the right to rule and make the rules, they may not actually exercise that power in practice. Rather, that power is exercised on their behalf by those elected to represent them. 

    For James Madison, the lack of direct popular governance in the U.S. Constitution was a positive feature. In The Federalist No. 63, he explained that the difference between American government and the governments of ancient democracies “lies IN THE TOTAL EXCLUSION OF THE PEOPLE, IN THEIR COLLECTIVE CAPACITY, from any share” in actual decision-making. The capitalization is Madison’s own, in case we are tempted to overlook the importance of his statement. Madison does not mention Hobbes in The Federalist, no doubt for political reasons, but one can be confident that Madison fully embraced Hobbes’s notion that once a people establish and authorize their government, they should promptly become what Hobbes called the “sleeping sovereign.” Since the sovereign is asleep, it plays no role in everyday government.

    Much political and legal discourse blurs the distinction between the sovereign and government, even though the distinction is actually central to popular sovereignty. In 1819, for example, in McCulloch v. Maryland, Chief Justice John Marshall referred to Maryland as a “sovereign state.” Of course, Marshall’s point in McCulloch was that even if Maryland called itself “sovereign,” it was hemmed in by any number of features of the national Constitution, including restrictions on its power to tax. Perhaps when Marshall called Maryland a “sovereign state,” he was being ironic. (Perhaps one should read this passage today with a hint of sarcasm.) But in 1819, many people thought otherwise, as some advocates of states’ rights do even to this day.

    The separation of sovereignty and government has two aspects. The first is that the government is not the sovereign, but merely its agent or servant. The second is that the sovereign, at least while sleeping, is not the government. The first point — that the government is not the sovereign — was articulated in 1793 in one of the earliest important cases of the United States Supreme Court, Chisholm v. Georgia. Chisholm involved a suit against the State of Georgia brought by the executor of the estate of a South Carolinian who had contracted with Georgia during the Revolutionary War. Georgia, claiming that it was a sovereign state, argued that it was immune from suit in the federal courts. Justice James Wilson, one of the most important figures at the Philadelphia Convention, emphasized that Georgia was “NOT a sovereign state” (capitalization in the original), while John Jay, the nation’s first Chief Justice (and a co-author of The Federalist), emphasized that in the United States only “the people” are sovereign. Indeed, Jay continued, not only could the states be dragged before federal courts, but the national government itself, being subordinate to “the people,” might also lack the “sovereign immunity” enjoyed by the British monarch. If the federal government enjoyed any immunity, Jay suggested, it was for wholly prudential reasons: if the government refused to pay damages there was no one to make it comply. This implied that, by contrast, the national government could (and would) enforce judgments against a recalcitrant state.

    The second aspect of the distinction between sovereignty and government — that the sovereign is not the government — emerges from the rise of popular governments that displace older monarchical forms. Yet in the very act of displacing monarchies, the division of sovereignty from government created repeated theoretical problems that have not been solved to this day. The gap between sovereignty and government creates the possibility of a similar gap between popular sovereignty and popular governance — in other words, it creates the possibility that those who actually govern can simultaneously claim the authority of “the people” while effectively shutting the people out of governance. If democracies involve sovereignty (that is, rule) by the people, but the sovereign is perpetually asleep, in what sense is popular sovereignty at work, and in what sense is it actually democratic? In other words, popular sovereignty creates the possibility of governmental arrangements that undermine democracy in the name of democracy, and that mystify the actual exercise of power by governing elites. As with the Divine Right of Kings, the concept of popular sovereignty might easily become an ideological weapon in the hands of ruling elites who wish to solidify their claims to power.

    To be sure, there may be no gap between sovereignty and governance, and therefore no democratic deficit, if the people are properly represented. If elections produce a perfect representation of the people — what John Adams idealistically called “mimetic” representation — and if the people’s elected representatives speak truly and faithfully for the people’s genuine interests, the problem of the gap between sovereignty and governance does not arise so urgently. Indeed, assume proper representation of the people and the problem of liberty is also solved. Under the older republican theory, people are slaves when they are subject to the arbitrary command of others, and they are free when they make laws for themselves. It follows that in a well-functioning system of representative government, the people are both sovereign and free. The laws they live under are also the laws that they — or, at least, their genuine representatives — make for themselves. 

    There are, of course, many theoretical problems with the theory of representative government, as thinkers from Rousseau to the present have explored. What constitutes proper, much less perfect, representation is often a matter of serious theoretical dispute. Other problems are practical and empirical. Representation is never close to being mimetic. To adopt Bill Clinton’s famous phrase, we do not have a government that “looks like America,” and people may disagree about which groups in the American mosaic would have to be added, and in which institutions, in order to achieve the necessary resemblance. Historically speaking, many people have not been represented at all in government, and defenders of the status quo have sometimes relied on implausible theories of virtual representation to keep it that way. 

    Even when voting rights have been extended to include more and more of the population, modern representative systems retain deep flaws. The wealthy and the powerful will inevitably corrupt the system, and self-dealing is a perpetual problem. Even if the problem of the corrupting influence of money on politics were fully and finally solved, all systems of representing popular values and preferences involve difficult tradeoffs. In an America with a population of over three hundred thirty million inhabitants, wouldn’t “mimesis” require a House of Representatives with thousands of representatives? That is far too many to allow genuine deliberation and debate. In sum, there is little reason to believe that the political systems we have today adequately address the gap between democratic sovereignty and democratic governance, even if some are assuredly better than others.

    There is yet another problem, which becomes ever more urgent as time goes on. One of the ironies of the modern period is that the theory of popular sovereignty caught on just as states were getting larger and more populous, and their systems of governance more elaborate. One could no longer assume the kind of small homogeneous polities of which Athens was the model. In the twenty-first century, for example, Texas has seven times the population of the entire United States in 1790; the largest state, California, is ten times larger. Moreover, the population (and the citizens who participate in governance) are far more diverse than could possibly have been imagined in 1787. That is one reason why Montesquieu’s The Spirit of the Laws was so important to the American Founders, cited more often than any other single work in The Federalist. But the citations were almost always critical. Montesquieu had argued against the possibility of a successful extended (and heterogeneous) republic, and it was crucial for thinkers like Madison to prove him wrong, and explain why popular sovereignty made sense in an extended republic.

    Precisely because popular sovereignty rests on a distinction between sovereignty and governance, it creates a series of practical and theoretical problems, each of which troubles democracies to this day.

    1. How do “the people” change the form of government? Once a government exists, how, if at all, does the popular sovereign alter it? Hobbes, like Locke, imagined that the people would come together in the state of nature to form a government, but human beings rarely create new governing institutions in the total state of anarchy imagined in Leviathan. Rather, in most cases human beings act in medias res, with already constituted authorities and powers in place. The powers that be may resist any potential threats to their own authority. So they will usually oppose awakening the popular sovereign if it means that they will lose power — and they will insist that calls for constitutional reform do not reflect the people’s will, because the sovereign is still snoring — or at best sleepwalking.

    This is the problem of recognizing and organizing “constituent power,” or, to use the fancier French, pouvoir constituant. As a leading contemporary constitutional theorist, Martin Loughlin, has explained, “constituent power articulates the power of the multitude: constituent power is the juristic expression of the democratic impetus.” The problem of constituent power emerges from the separation of governance and sovereignty. To speak as sovereigns, the people must be organized in some way, but the most obvious way for them to be organized — and thus to speak — is through legal forms created by the existing government. This creates the possibility that the people’s voice will be muffled or distorted by these forms, or, possibly, that the voice of the people will be that of a ventriloquist’s dummy. More generally, the problem of constituent power creates the puzzle that the voice that underwrites the legitimacy of government must itself be shaped and constituted by the existing government. (The only exceptions are the rare instances in which catastrophic defeats in war or political revolutions wipe out any existing form of government.) One way around this is to create special conventions that exist solely for the purpose of enacting basic law, but these conventions, too, cannot arise spontaneously in a large country but must be organized by law.

    2. Who are “the people”? Constituent power is a power of the people as sovereign. But who are “the people” who are sovereign? How are they defined, and who has the power to define them? Here again the existing authorities are likely to shape how and which people are heard. Do “the people” consist of every human being living within the geographical confines of a state, or are only some of these human beings part of “the people”? Are all members of the community equal in their peoplehood, or are there gradations of the same? Thomas Jefferson notably spoke in the name of one “people” at the very beginning of the Declaration of Independence. But surely that did not refer to each and every individual person inhabiting the territory of the British colonies (including, for example, visiting sojourners and, more importantly, enslaved persons and members of Indigenous Nations). A similar problem arises for the “We the People” that opens the preamble to the U.S. Constitution and “ordains” what follows. To hear the people speak, one must decide who is allowed to speak in the first place.

    It is an uncomfortable truth that the major twentieth-century theorist of pouvoir constituant was the odious Carl Schmitt, who, in his masterpiece Constitutional Theory, posited that ein Volk stood behind all constitutions at all times. This Volk retained the Hobbesian capacity to awaken from its slumbers and transform any existing constitutional order as it wished, without limits on its power. But to theorize in this way already poses the danger that some segment of the population will suppress or ignore other parts in order to assert its primacy. This presents, once again, the ideological mystification of the concept of sovereignty: a dominant group presents itself as the singular Volk that purports to speak authoritatively on behalf of all the various communities and individuals that live within the borders of the state, and thereby suppresses those who do not conform homogeneously to its imagined model.

    3. The populist temptation. The gap between sovereignty and governance naturally anticipates the rise of populism. Populism rests on the claim that “the people” are being unjustly shut out of government by elites, who, according to populist ideology, are opposed to the interests of the people and whose power is therefore illegitimate. One recurrent pathology of populism is its tendency to turn politics into a clash between the virtuous true members of “the people” and the “others,” consisting of a hated ruling class (and other internal enemies of the people). In older days, that might have referred to the capitalist owners of the means of production, though today it is more likely to apply to “meritocratic” elites, intellectuals in general, or members of minority groups whose exclusion helps define “the people.” (One of the ironies of populist politics is that populists who come to power and run important aspects of the government may still imagine that they are not members of the elite.) That is to say, populism can simply be a claim about the (unjustified) gap between sovereignty and governance, and a claim that the people’s agents should pay more attention to the needs and wants of the public — or, more darkly, in the name of that gap, it can be an excuse to engage in status competition and the social subordination of particular groups within a nation.

    Populism, we might add, is not necessarily the same thing as popular constitutionalism. Popular constitutionalism is a claim that the right to interpret the Constitution rests with ordinary people as much as with legal and juridical elites. It asserts that all individuals may practice constitutional interpretation — by analogy to the Protestant idea of a priesthood of all believers directly interpreting Scripture themselves. To be sure, such constitutional protestantism may be combined with an attack on elite culture and institutions. Yet claims of popular constitutionalism can be, and have been, made by elites and by leaders of social movements as well as ordinary citizens, and by members of the political branches as well as members of the general public.

    4. Ensuring faithful representation. The distinction between sovereignty and governance leads to still another problem. If the government is the agent of the people, what ensures faithful agency by the people’s servants? This question lies at the heart of constitutional design. The point of regular elections, fixed terms of office, separation of powers, and checks and balances is to ensure that the “servants” do not entrench themselves and oppress their erstwhile masters.

    But the very need for these structures presupposes that representatives may not actually speak for the people they represent. That is why they need to be “checked and balanced.” So the question remains: Who, whether inside or outside government, speaks for “the people,” and how do we know that they speak on the people’s behalf? Moreover, what if the people do not speak in a single voice? Who is it, precisely, that can legitimately claim to speak for “the people,” as distinguished from the multiple “peoples” that make up a contemporary diverse and pluralistic society? Politicians regularly speak of what the American people want. But if there is not a single “people,” then any particular claim to speak on its behalf is fictional. At best one must stipulate that the vector sum of all of the cacophonous voices represented in political institutions somehow magically reflects the actual will of the people. But this is a stipulation rather than a plausible conclusion.

    Perhaps, if one is thinking of the people of ancient Athens, one can imagine all of them — or, more to the point, all of the free enrolled male citizens — coming together in collective deliberation leading to a decision by majority vote in lieu of unanimous consent. The idea of a Quaker meeting, where everyone strives to reach consensus, may be an attractive proposition, but it is hard to imagine a sizable organization, let alone a modern nation-state, run on such principles. In any case, the Athenian model quickly gave way to much larger entities, the Roman Empire being the most obvious successor. This created yet another version of Montesquieu’s challenge: the larger the population, the greater the problem of making popular sovereignty a reality. In 1790, the total population of the United States was roughly four million, which is less than that of contemporary Oklahoma, currently the twenty-eighth largest state in the Union. A follower of Montesquieu might well doubt whether the concept of popular sovereignty makes any sense in such a world.

    The question becomes especially important when one thinks either of constitutional formation or significant constitutional change. In both cases, almost by necessity, very small groups claim a mandate to speak for “the people.” They might be bourgeois revolutionaries like the American Founders, or Leninists who style themselves as the vanguard of the working class. How does one justify treating a part as the whole — one political faction as instantiating “the people” even while denouncing others as illegitimate radicals (or reactionaries)? As critics of “populism” such as Jan-Werner Müller note, all such groups posit not only a potentially dangerous unitary notion of peoplehood, but also arrogantly assert that they have the unique ability to “represent” this far larger “people” and to discern the common good, even in the absence of formal elections or any other plausible modes of designation.

    One might be tempted to solve the problem by turning to plebiscitary notions of government or other forms of direct democracy. But plebiscites, a favorite of the egregious Schmitt, or even issue-based referenda, almost always involve up-or-down votes on proposals made by others, who may manipulate the wording of the proposals to ensure their desired outcome. Even if one is generally sympathetic to leavening representative democracy with direct forms of popular participation, the record of initiatives and referenda in the contemporary United States is decidedly mixed. At the end of the exercise, someone in our society is going to feel manipulated and unrepresented. The voice of the people is never simply given; it must be constructed, and the question is always by whom.

    This fact brings us to one of the unintended but important consequences of the rise of popular sovereignty. Since popular sovereignty must always be expressed in some representational form, the age of popular sovereignty also turns out to be the age of constitutions and constitutionalism. Modern democracies, to the extent that they are able to remain democracies, depend on well-functioning constitutions. Conversely, poorly functioning constitutional systems tend to create cumulative problems that, over time, threaten the stability of democracies and lead to their decay and demise. To the extent that American democracy has been successful, it is because its constitutional system has proved surprisingly adaptable in spite of its many flaws and its many seriously undemocratic features. (We note that the system did break down during the Civil War and had to be reconstructed through new amendments that made it somewhat more democratic.)

    The many flaws and undemocratic features of the American constitutional system have started to catch up with us. That is one reason why our political system seems so dysfunctional and the future of American democracy seems so fraught. Behind the popular anger and frustration that we see all around us is a veritable chasm between the ideology of popular sovereignty and the reality of unresponsive governance. We suffer from a deeply deficient system of popular representation, of which the malapportioned Senate and the broken system of campaign finance are only the most egregious examples. 

    If American democracy is to survive, Americans will once again have to engage in serious constitutional reforms that repair their broken system of representation. The good news is that there is a long history of such reforms in the United States, both at the state and federal levels, and both through formal constitutional amendment and through ordinary legislation (such as the Voting Rights Act of 1965). In fact, nothing is more characteristic of the long American experiment with popular sovereignty. During a seven-year period from 1913 to 1920, the United States adopted four constitutional amendments, and in 1933 alone the country adopted two more (one of which repealed the earlier Prohibition Amendment). This says nothing of the explosion of constitutional creativity at the state level during the Progressive Era. 

    The United States once had many able politicians — Woodrow Wilson and Theodore Roosevelt among them — who pushed for constitutional reform. Although serious discussions of changing our representational system have been off the table for many years, especially on the left, we think that there is now increasing interest. (For example, Steven Levitsky and Daniel Ziblatt, whose earlier book How Democracies Die largely ignored the role of constitutional design in undermining democracy, more recently published Tyranny of the Minority: Why American Democracy Reached the Breaking Point, which places the question of reform at its center.)

    It is not too late for the United States to repair its creaky and increasingly antiquated system of popular representation. For this to happen, however, the American popular sovereign will have to find a way once again to awake from its slumbers and, in the words of the Declaration of Independence, “alter or … abolish” the old “destructive” forms and “institute new government.” As at the Founding and during the Progressive Era, that will require a genuine political conversation between an aroused public and a new generation of enlightened political leaders willing to think audaciously.

    We have noted that the shift from the sovereignty of monarchs to the sovereignty of the people creates a series of problems that reappear in various guises throughout the history of democracies. Because they continually reappear, we doubt that there are final answers to these problems, only a series of evolutionary accommodations that will change over time.

    Don Herzog has argued that the theoretical problems with sovereignty are so great that we should stop using the word. Yet he draws back from suggesting that we must stop talking about popular sovereignty as well, which might be understood as a rejection of democracy itself. One might doubt the real possibility of junking a concept so central to American self-identity and the American constitutional project. Yet the issue cannot be resolved by mere nomenclature. Rather, we believe that the problems of popular sovereignty, which have been present from the beginning of the modern era, mean that modern democrats must decide what degree of misalignment between sovereignty and government they are willing to live with, and how they can best adapt their constitutional systems to the changing needs of changing populations. 

    At the same time, we think that modern states face increasing problems of justifying themselves in terms of popular sovereignty, as tensions within the paradigm become more pronounced. We do not believe that a naïve version of popular sovereignty can adequately explain the governance of a large transcontinental state of three hundred and thirty million people like the United States. We need a new way of imagining popular sovereignty if the concept is to remain workable. 

    We might make an analogy to Thomas Kuhn’s model of scientific revolutions, which explains how widely accepted scientific theories — such as Newtonian mechanics — generate anomalies that eventually lead to new conceptions, like the theory of relativity or quantum mechanics. For many years, popular sovereignty has been the standard model of how to justify and legitimate political power. Most political and legal theorists work within the “normal science” — to use Kuhn’s terminology — of liberal democratic theory. But various problems, or in Kuhn’s terms, anomalies, have always existed within the model. We either ignore them or we construct the equivalent of Ptolemaic epicycles to explain them. As states get larger and more elaborate, and as technological change proceeds, the problems of popular sovereignty that have existed since the beginning start to look even more problematic. The difficulties now seem more urgent. At some point the anomalies in the theory of popular sovereignty might call for a deep rethinking of the concept and produce a new system of political authority with a new justifying ideology.

    The sovereignty of monarchs eventually gave way to popular sovereignty. Perhaps, as society changes, popular sovereignty will someday give way to a new political conception of the right to rule, and humanity will find itself in a post-popular sovereignty world. Following the fall of the Berlin Wall, Francis Fukuyama famously argued that liberal democracy had won out over its rivals and that it would dominate political thinking for the rest of history. The decades that followed his bold claim produced many historical rejoinders to it, in the form of democratic backsliding and the rise of new forms of authoritarianism. But there may be still another objection to Fukuyama’s argument. It is possible that theories of popular sovereignty only make sense under certain conditions, and once those conditions disappear, new theories of sovereignty will displace them in whole or in part, just as democracies and authoritarian states displaced monarchies.

    What would a successor to popular sovereignty look like? We don’t know yet. Not only has the Owl of Minerva not spread her wings; she still seems to be soundly asleep on her perch. If we look around us, the most obvious candidate for a new political system would be technocracy, in which the right to rule passes to unelected experts. As the state becomes increasingly complex, and technology companies assume more and more practical governing power, perhaps the fiction of popular sovereignty will give way to a new form of technocratic aristocracy. Yet technocracy has serious problems, which should already seem obvious. The very features of technological advancement that bring technocracy to the fore also tend to undermine its authority. It is true, for example, that the digital age has given companies such as Facebook and OpenAI increasing powers of influence and governance. What it has not done is bestow the sort of philosophical and sociological legitimacy that should accompany this increased power. Quite the contrary; so far the digital age seems to have created stiff headwinds to the political authority of large technology companies.

    The rise of the printing press undermined the authority of the Church and paved the way for the creation of the public sphere, which, as Jürgen Habermas contended, created a political space for the rise of representative democracies. Similarly, as Benedict Anderson argued, the rise of newspapers helped make possible the belief in nations of large populations spread over vast territories. One cannot tell quite the same story about digital technologies. The Internet makes experts ever more important to successful governance, but it also facilitates populist upheavals. The ability of everyone to be a broadcaster has greatly advanced the growth and spread of knowledge, but also the growth and spread of falsehoods, propaganda, and shared hallucinations. It has also helped generate increasing distrust of technocratic expertise, in much the same way that the mass distribution of religious texts and multiple versions of the Bible encouraged distrust in the expertise of the Catholic clergy and a desire for lay individuals to “do their own research” into what God wanted. Judging by the early years of the digital revolution, new communications technologies seem to have undermined the authority of all experts and made technocrats a despised group among much of the public. Perhaps the tide will turn as the age of artificial intelligence proceeds. Perhaps a new political formation will emerge out of technocracy, encompassing rule by experts assisted by artificial intelligence and algorithmic decision making. Perhaps the form of politics that the digital age facilitates is authoritarian. Or, more hopefully, perhaps the ideal of popular sovereignty still has life in it, suitably transformed in a digital economy.

    We can see the future only dimly at present. But we can draw one important lesson from the past. Democracy is a moving target, and it tends to evolve together with the dominant communications technologies of the time. The democracy of newspapers and books that characterized the political culture of the eighteenth and nineteenth centuries gave way to a democracy shaped by radio, television, and other mass media in the twentieth century. Twenty-first-century democracy, if it is to survive, will have to find a way to adapt to digital technologies, pervasive data collection, and artificial intelligence.

    Lawyers, in their role as the servants of political power, have always had a hand in theorizing and legitimating political change within an overarching status quo. Currently, however, they are mired in ancient discourses of sovereignty — like those promulgated by the U.S. Supreme Court — that ignore its many problems. Realizing popular sovereignty in the twenty-first century will take more imagination. To respond to the political challenges of the future, lawyers will have to put aside their lawbooks and broaden their vision. 

    And those who profess to be political leaders must learn to take a page from our Founders, who were not afraid to experiment with a broken system — the Articles of Confederation. Alexander Hamilton’s initial essay in The Federalist begins with the question whether nations “are forever destined to depend for their political constitutions on accident and force,” or “are really capable… of establishing good government from reflection and choice.” Hamilton argued for the latter, and his answer should be our answer today.

    Respect, or The Missing Relation

    I contemplate a bird. In fact it is a photo of a bird, many times larger than life, hanging on the wall of a café. I’ve never had a chance to scrutinize a bird so carefully before. After I finish admiring its beauty, I turn my attention to its claws, which are pointed and hard, its beak, which is open in a cry, its eyes, which are empty of pity or warmth. I think: this creature is intensely alien to me. It is not a cute little bird, a sweet little bird, look at the pretty little bird. It is not a bird in a children’s book. And it comes to me that I have never understood an animal this way before. That whether in a zoo, on a farm, in my yard — still more with a photo or video clip of the kind that are forever being passed around online — my response to animals has always been to anthropomorphize them, to project my subjectivity onto them, to slobber over them with my emotions, with my needs. To place them in relation to myself. And it comes to me as well that to refrain from doing so, to let the bird, the goat, the possum be exactly what it is, in itself and for itself, without reference to me, to accept it in its otherness, would be to treat it with profound respect.

    I am talking with a former professor of mine. She is telling me that she believes that part of our job as teachers of undergraduates is to help our students, as she puts it, “instrumentalize” the things they learn from us — instrumentalize them, she means, for the sake of social change. I’m skeptical. What do academics know about instrumentalizing anything? More to the point, what business do we have telling students what they ought to do with what we teach them? “Fine,” I say at last (this is some years ago), “as long as you would be okay with one of your students instrumentalizing what they learn from you to try to overturn Roe v. Wade.” She is stunned. The possibility has clearly never crossed her mind — the possibility, that is, that students might have goals that conflict with hers. That they possess an otherness that we as educators must respect.

    A few years later, I come across an essay by this same professor in The Chronicle of Higher Education, the principal organ of news and opinion about the academy. Titled “In Praise of the Academic Cliché,” it champions buzzwords such as “performativity,” “intersectionality,” and “heteronormativity” as agents of transformative social potential, especially once “they quietly wriggle through discourse, swimming from theory to classrooms” and thence, beyond the college walls, to essays, podcasts, Twitter, “mainstream journalism and popular entertainment.” The student’s function in the process is to carry them, the way that a deer might carry a tick. “Not all of our students will be original thinkers,” she writes, “nor should they all be. A world of original thinkers, all thinking wholly inimitable thoughts, could never get anything done. For that, we need unoriginal thinkers, hordes of them, cloning ideas by the score and broadcasting them to every corner of our virtual world. What better device for idea-cloning than the cliché?” Instead of teaching undergraduates to avoid clichés, as generations of instructors have done, “we should instead strive to send our students forth — and ourselves too — armed with clichés for political change.”

    My professor had progressed from wanting to teach her students to instrumentalize ideas to wanting to instrumentalize her students: to recruit, enlist, train, mobilize, and deploy them — “armed,” in “hordes” — for the purpose (her purpose) of “getting things done.” Or rather, she had shown me that the second impulse was implicit in the first. Forget teaching people to think; forget uniqueness, individuality, the soul. The ideas are cloned, and so are the students. Nor is she alone in her desire, as anyone familiar with contemporary academia will know. Quite the opposite, in fact. Some years ago, to take one data point, I spent a couple of weeks at a moderately selective Catholic university: not an elite institution, not one you would think of as a redoubt of progressive ideology. Across the board, across the disciplines, the dean informed me, younger faculty believe their job to be indoctrination. Which means they think their mission is to serve the cause, not the students. The students are tools.

    This is the antithesis of what I am calling respect, be it for a creature or a student: the recognition of another as other and the willingness to let them be such. Call this antithesis projection, intolerance, the will to power: it is surely a persistent part of being human, and it is also surely getting worse. Academics are not the only professionals who have decided that their mission is to save the world and that their clients must be proselytized and propagandized, their personhood be damned, in order to do so. So have teachers, librarians, and social workers. So, perhaps worst of all, have counselors and psychotherapists, who practice the one line of work in which it is even more important than in education to treat one’s charges as individuals, people with their own particular histories, qualities, needs. All around us we are witnessing the loss of this thing that I’m calling respect.

    The problem is bipartisan. The left speaks constantly of “difference,” but it cannot abide it. This is, again, an old phenomenon freshly intensified. “Deviation” from the party line was a cardinal sin in twentieth-century communism. Leftist groups, accordingly, were notoriously fissiparous, splintering into ever-smaller factions of doctrinally pristine believers. As class politics became self-expressive lifestyleism, the purism seamlessly transferred. A friend from redneck small-town southwest Georgia lived for many years on a commune in central Virginia largely populated by Northeast liberals. They were, she told me, the most intolerant people she’d ever met: accepting and open to all, as long as you were exactly like them. (Those acquainted with other progressive bastions — Berkeley, Cambridge, Brooklyn, et al. — will know what she was talking about.) “Do your own thing,” went the countercultural slogan, with the tacit addendum, “as long as I approve of it.” You could wear beads or berets or dashikis (or later, mohawks or dreadlocks or flannel), but never khakis or a suit, let alone a cross. And now that politics and self-expression have become coterminous (the personal is political, the political is personal), it is all Stalinism all the time. To be my “ally” means that you agree with me, not on a specific issue, as it once would have done, but on all of them, unquestioningly. There’s no more ordering à la carte; you have to swallow the entire menu.

    But the right is no better these days, having likewise largely extirpated its liberal commitments in the name of an epochal moral crusade. Epistemic modesty, à la Edmund Burke, is gone, as is libertarian toleration. Red America, as David French and others have reported, is as heavily policed as Blue. MAGA admits no dissent; its idol is a jealous god; Never-Trumpers are “human scum.” Progressive social power is answered by state censorship, Kendi and Butler by DeSantis and Abbott. Christian nationalism, including in its juridical manifestation as “common good constitutionalism,” promises to make us do what’s best for us, whether we like it or not. In his Dobbs concurrence, Clarence Thomas started to prepare the ground for the repeal of rights to contraception, gay marriage, and gay sex, and thus to their legislative suppression. The Libertarian Party, as Cato’s Andy Craig remarked not long ago, having “experienced a hostile takeover by far-right culture warriors,” has embraced “a program of openly bigoted authoritarianism.” In this it is aligned with conservatism’s Orbanists and Putinists, their retreat from democratic pluralism to a fantasy of church and Volk.

    This is politics, but beneath it, it is narcissism. Or rather, politics has become a mode of narcissism, which can be defined as the need to make the whole world over in one’s image, to fill it with the self. For its hypertrophy, which has gone beyond the darkest dreams of Christopher Lasch, there are many things to thank, but above all is the internet. We now have the ability not only to create our own reality, but to live uninterruptedly within it. The phrase “my truth” originated as an assertion of the validity of subjective experience, of feelings as real and important. Now it signifies the triumph of subjectivity, its abolition of the objective, the external, the empirical. “The Bible was written by Africans,” I overheard a fellow author confide at a booksellers convention. “I know some people disagree,” she added, having clocked the nearby Semite, “but that’s my truth.” Her right-wing counterparts include the individuals who, dying of Covid, continued to insist that the pandemic was a hoax.

    Narcissism governs the contemporary stance toward art as well. Instead of going to it in the fearful hope that art will trouble us out of ourselves — confront us with genuine difference, and therefore make us different — we insist that it affirm us. Women will aver that they prefer to read books by female authors; sometimes that that’s the only kind they read. Pedagogical authorities concur that children should be given stories about people who “look like them,” that anything else is an injury. When a work remains refractory to our desire for validation — often because it belongs to the past, that foreign country — we rewrite it. Shakespeare is “queered,” Austen is revealed to be a radical feminist, and so forth. Once again, this is an oldish story — it dates to the rights revolutions and the reading practices they spawned — that has in our century become immensely worse. For with its two-way social traffic, the internet has given rise to the phenomenon of fandom, with its enormous powers of insistence. Not just fans — “fandom,” like “kingdom.” Now the audience is able not only to project its desires onto its idols (devotees of Elvis or the Beatles could do that as well), but to make those figures answerable to its projections. Now artists and audience mirror each other, the ego duplicated in an infinite regression.

    I have struggled with these things myself (and not just with regard to birds): with intolerance, with projection, with the impulse to convert. When I started teaching in my late twenties, still in my militant-atheist phase, I had a student, fresh from Catholic school, named John Luke. I really gave that kid a time — not quite explicitly, which was maybe worse, because I never said anything that he could argue with directly. It was all insinuation, a subtle sort of intellectual bullying. I remember bringing in some Nietzsche once (this was freshman composition; other graduate instructors — we were a bunch of smug little bastards — were sneaking in swatches of Marx or Foucault). This will give my students something challenging to chew on, I thought, and if I can win one away from the pale Galilean, then so much the better. I feel a kind of psychic nausea when I think of this today. One definition of evil, I later discovered, understands it as the effort to impose one’s will on others. 

    Slowly, however, I managed to learn. Many years later, I had another avowedly Catholic student. One day, in office hours, she mentioned that she belonged to a campus organization called CLAY, or Choose Life at Yale. I inwardly recoiled. Holy crap, I thought, she’s one of them. (I also thought, good name.) But I managed to keep my mouth shut. She’s got a right to her belief, I reflected, and what’s more, I respect her for standing up for it under what are surely challenging conditions. (It was she whom I was thinking of, in fact, when I reminded my professor that there might be people in her classes who want to overturn Roe v. Wade.) I had gotten to know a lot of students over the years, and I cherished those connections, but there was something special about this one — something cleaner or purer, and precisely because of our differences. It is pleasant to have disciples, but it can also be corrupting. The moment I accepted her for who she was, she got a little realer — became more of an actual person, not an idea of myself echoed back to me — and so, I think, did I. I was over here; she was over there. I didn’t like her for being like me, and she didn’t like me for being like her. We eventually grew to be friends, and some twenty years later we still are.

    Friends. I have an old one, someone who has perpetually disappointed the expectations that people have had of her. We met in a Zionist youth movement, but she later stepped away from any form of Jewish practice or affiliation. She went to a leading professional school, but she abjured the prestigious career paths that her classmates pursued. Raised in an affluent suburban environment, she went off to live in a working-class rural community. “The worst thing you can do to your friends,” she once remarked, “is not be the person they want you to be.” I thought of that when I was having dinner with another friend, another former student, a young man who was taking his time about getting his life on track, in a way I was getting impatient about. He had just broken up with his girlfriend, he told me. Again? I thought. “I’m sorry to hear that,” I said, though he didn’t seem sorry at all. And then I caught myself. “That was a stupid thing to say,” I said. “Why should I care if you have a girlfriend?” Why indeed? Who was I to be “impatient”? I was too identified with him. I needed to let him be who he was going to be, whatever he was going to be.

    If this is difficult to do with friends, it is virtually impossible to do with children. Virtually, but not completely. A parent was telling me that she couldn’t wait for her teenage daughter to go off to college so that she, the parent, could finally get some distance from her. I thought, for years I’ve been advising young adults that they need to separate from their parents. It hadn’t even crossed my mind that parents ought to try to separate from their children, because I hadn’t imagined that such a thing was possible. Later, I read this in Louise Glück, a writer who knew about separateness:

    I’ll never understand
    the claim of a mother
    on a child’s soul.

    So many times
    I made that mistake
    in love, taking
    some wild sound to be
    the soul exposing itself…

    The soul is silent.
    If it speaks at all
    it speaks in dreams. 

    A child’s being is their own. Mothers and fathers, it’s not about you.

    Respect, as I am calling it, shows up in the political realm as tolerance. I used to hate that concept, back in my days as an angry young Jew. Who are you, I thought, to merely tolerate me? Am I supposed to be grateful for that? But our politics of mutual negation has made me wiser. Tolerance, compared to what we have, would be tremendous, would be a terrific achievement. Tolerance, in a democracy, signifies the recognition that the people whom you hate the most — Nazis, let us say, to put it in the starkest terms — have a right to share the political community with you: to speak, vote, advocate, educate, organize, assemble, just like you do. That they are your equals as citizens — I would add, as human beings. Being a Nazi is a civil right; being a Nazi is a human right. To grasp that is to understand the stony way of tolerance.

    My own instruction in this virtue came courtesy of Dave Chappelle. It was one of his Jew jokes, about the Jews controlling Hollywood or some such. My first thought was, fuck this guy. My second was, I’m never going to watch a thing he does again. My third was, idiot, this is what tolerance means, spiritually if not literally: not having to approve of everything another person does, and not disengaging from them even when they anger you, even when they offend you. Being okay with not being okay. What you are tolerating, ultimately, is your own discomfort.

    It’s hard. It’s definitely hard. And it goes the other way as well. I am white, middle-aged, middle-class. When I encounter someone from the other side of one of those divides, my self-consciousness kicks in. It isn’t guilt; it’s a feeling of inauthenticity, like I can suddenly see through my act (I’m so white, so stiff, so deeply uncool) because I imagine that they can. My instinct is to pander, to assimilate myself to them: to fall in with their way of speaking, standing, holding themselves, with their point of view. (Anyone who’s watched a grown-up try to talk to a bunch of teenagers will understand what I mean.) But in time I’ve learned to check that impulse. When faced with difference now, I don’t reject it and I don’t surrender to it. (Keep your back straight, I sometimes literally tell myself.) I’ve also learned that people will respect you more (in the familiar sense) if you just be yourself. And it’s the only way, of course, to build a genuine relationship.

    So what am I saying here? What exactly is this “respect” that I’ve been mulling over ever since I saw that picture of a bird? To help me consider the matter a little more rigorously, I turned to Martin Buber, with his famous I-Thou and I-It. Where does respect fall in relation to that distinction? I-It is instrumental: you use the thing, the creature, the person, for your own ends. I-Thou is relational, being to being. “I contemplate a tree,” Buber writes. “I become bound up in relation to it. The tree is now no longer It. I have been seized by the power of exclusiveness.” In the moment of I-Thou, in other words, the Thou is all there is. I apprehend it in its wholeness, its unity, its being. “The tree is no impression,” no bundle of separate sensations, nor is my relation with some kind of indwelling spirit. “I encounter no soul or dryad of the tree, but the tree itself.”

    There is much to admire in I and Thou, but also much, I find, to question. Buber tells us that to meet the Thou — which finally means the divine — to enter into what he calls a genuine relation, one needs a “full acceptance of the present,” of reality (“The Word of revelation is I am that I am.…That which is is, and nothing more”). One needs to practice not a “seeking” but a “waiting.” Very good. What Buber is describing is a mystical experience, the suspension of time and space and ego, such as we learn about also in other traditions. It is rare; it cannot be achieved by will alone; it is a gift of grace. But it doesn’t, for that very reason, help us much in ordinary life. And the only alternative, he says, is I-It. All is relation (“There is no I taken in itself”), and there are only two relations.

    This will not do. Conversations with friends, acts of love and care, the connection of teacher to student — all these are instrumental? No. There must be something in between his two extremes. We do not need to “Thou” the other in order to refrain from instrumentalizing them. The essence of respect, in fact, is non-identification. It is a refusal of projection. For that, I think, is what the I-Thou ultimately is. It is a strangely non-relational relation. He’s vibing with the tree, as the kids would say, but is the tree vibing with him? It’s just a tree, after all. He may feel a reciprocity (the tree “has to do with me”), but a feeling does not tell you anything except that you are having a feeling. 

    Buber gives the game away when he turns from trees to human beings. “Even if the man to whom I say Thou is not aware of it,” he writes, “yet relation may exist.” But to call this a relation is to strain the term beyond its breaking point. It seems, instead, a private experience, however exalted, one in which the other person functions as a kind of spiritual trampoline. As for creatures, wild animals, Buber has the hardest time with them of all, perhaps because, unlike a tree, they visibly respond to us. Contemplate an animal, be it a backyard bird or a deer in a forest, meet its eyes, and what you are likely to register is not “relation” but a sense of threat, as in, what is this ape and why is it staring at me?

    For Buber, I-Thou is the ground of morality. Its essence is love, the “responsibility of an I for a Thou.” But if we need to love the other in order to treat them correctly, then we’re all in a great deal of trouble. We should not have to empathize or sympathize or understand or “leap the chasm of otherness” or “be in relation.” We only have — but this is not an easy thing — to see the other in itself and for itself. I think of an acquaintance who lives in western Massachusetts and has spent some time in California. He much prefers the Yankees, he has told me, as dour and unfriendly as they often are, because when push comes to shove, you can count on them. They don’t have to like you to help you. The Californians appall him, precisely because their morality is based on feeling, on spurts of universal love. No love today? No help, no recognition, no concern — go soak your head.

    No doubt this is partly a matter of temperament — Buber is terribly moist — but I am for a dry morality. I am for detachment, even alienation, as a hedge against over-identification. I am for letting the other be other (including the universe, which some ventriloquize as the divine). You can love the other, but you can equally leave them alone. Buber speaks of community, the form that “relation” assumes in collective life, but this is not to be confused with actual communities. The latter means that everyone is in each other’s grill, whereas my ideal is the city, where people mind their business unless otherwise requested. Many years ago, I spent some months on a kibbutz. “People here,” a resident told me, “will let you into their living room, but they won’t let you into their bedroom.” He wasn’t talking about polyamory; he was talking about the fact that close quarters can militate against intimacy, because they force you to defend your boundaries. But cities, with their ethic of noninterference, can make not only for strong individuals, but also, in my experience, for strong attachments. No distance, no crossing. No separation, no connection. If I am over here and you are over there, then at least we can say that we know where we are.

    I had been trying to come up with an alternative to Thou and It, a third term for a third dyad: I-That? I-Them? Then I realized the real problem is that pesky “I,” with its knack of getting in the way. We can’t be rid of it, and so we must constrain it. I think of the concept of tzimtzum — the act, in Kabbalah, whereby an infinite God creates the world by contracting himself to make room for it. The ego also tends to fill immensity. Self-contraction is a decent rule of conduct, and a useful prayer would be, Lord help me to make myself small. 

    The Prophetic Environmentalism of Rabindranath Tagore

    The great British historian E. P. Thompson once remarked that “India is not an important country, but perhaps the most important country for the future of the world. Here is a country that merits no one’s condescension. All the convergent influences of the world run through this society: Hindu, Moslem, Christian, secular; Stalinist, liberal, Maoist, democratic socialist, Gandhian. There is not a thought that is being thought in the West or East which is not active in some Indian mind.” Some may cavil at his assertion that India is (or ever was) the most important country for the future of the world. But I want rather to endorse Thompson’s other claim — namely, that there has been an astonishing diversity of intellectual opinion in India. This is a product of the country’s size, its cultural heterogeneity, and its daring (if admittedly imperfect) attempt to construct a democratic political system in a deeply hierarchical society. Indeed, among the countries of the so-called Global South, India is notable for the vigor, sophistication, and self-confidence of its intellectual traditions. In this respect it stands out even compared to its larger neighbor China, where the scholarly legacy of the past has been brutally crushed by a totalitarian state.

    For too long a significant strand of the Indian intellectual tradition has been neglected: its rich speculations about the past, present, and possible future of human relations with the natural world. The burgeoning contemporary literature on the history of environmentalism is also guilty of this omission, owing to its narrow geographical focus. The challenge to American intellectual hegemony in this field first came from Europe, and what took place there was noticed in America. The traffic of ideas across the Atlantic was intense. Yet the conversation has been conducted as if environmental movements and environmental thinkers could not exist outside Europe and North America.

    I hasten to add that this bias did not originate in any sort of colonialist condescension or feeling of racial superiority. Rather, it most likely had its roots in conventional social science wisdom, which stubbornly held that environmentalism was a “full stomach” phenomenon, possible only in societies where a certain level of material prosperity had been reached. By the canons of orthodox social science, countries such as India are not supposed to have an environmental consciousness. They are, as it were, too poor to be green. As the economist Lester Thurow notoriously remarked in 1980, “If you look at the countries that are interested in environmentalism, or at the individuals who support environmentalism within each country, one is struck by the extent to which environmentalism is an interest of the upper middle class. Poor countries and poor individuals simply aren’t interested.”

    This haughty dismissal of any possibility of poor countries being interested in the fate of the natural environment was, at least with regard to India, years out of date. In the spring of 1973, a popular peasant movement in the Himalaya, known as Chipko, threatened to hug the hill forests to stop them from being felled by commercial loggers. Many of the participants were unlettered, but the movement’s leaders, though themselves from peasant backgrounds, were informed and articulate about the wider issues. They wrote essays and tracts (usually in Hindi) tracing the direct link between industrial forestry, soil erosion, landslides, and floods. These showed that what at one level was an economic conflict — between the subsistence demands of peasants for fuel, fodder, and so on, and the commercial motivations of paper and plywood companies — had deeper ecological implications as well.

    Still, the incomprehension persisted. Consider these remarks, from 1994, by Eric Hobsbawm: “It is no accident that the main support for ecological policies comes from the rich countries and from the comfortable rich and middle classes (except for businessmen, who hope to make money by polluting activity). The poor, multiplying and under-employed, wanted more ‘development’, not less.” Unlike Thurow, Hobsbawm was a historian — a great historian, and therefore more attentive to the messiness of social life, more interested in exploring hidden details than in postulating grand generalizations. And unlike Thurow, whose life was lived largely within the North American academy, Hobsbawm had a keen interest in Latin America, a continent he traveled widely in. He kept himself abreast of current events in Africa and Asia. One would have thought a scholar as learned and well-informed as Hobsbawm would have heard of the Chipko movement in the Himalaya or the Green Belt movement in Kenya. Or at least, given his strong connections with Latin America, of the movement of Chico Mendes and the rubber tappers in Brazil — another example of the environmentalism of the poor. Perhaps Hobsbawm’s Marxist faith did not allow him to see environmentalism as anything other than a bourgeois deviation from the class struggle.

    Chipko was followed by a series of other grassroots initiatives around community access to forests, pasture, and water. They likewise posited subsistence versus commerce, the village versus the city, the peasant versus the state, the subaltern versus the elite. Studying and reflecting on these conflicts, some scholars argued that they showed the way to reconfiguring India’s development path. Given the country’s population densities, and the fragility of tropical ecologies, India had erred in following the energy-intensive, capital-intensive, resource-intensive model of economic development pioneered by the West. When the country got its freedom from British rule in 1947, it should have instead adopted a more bottom-up, community-oriented, and environmentally prudent pattern of development. And yet, the argument further proceeded, it was not too late to make amends.

    The environmental debate in India was at its most vigorous in the 1980s. Scientists, social scientists, journalists, and activists all contributed to it. The debate operated at many levels: philosophical, political, social, technological. It touched on the moral and cultural aspects of humanity’s relations with nature; on the changes required in the distribution of power to promote environmental sustainability; on the design of appropriate technologies that could simultaneously meet economic as well as ecological objectives. The debate embraced all resource sectors — forests, water, soil, transport, energy, biodiversity, pollution, and industrial safety.

    The post-Chipko environmental upsurge led to some institutional changes within India. New laws seeking to conserve forests, protect wildlife, and control pollution were enacted. In 1980 the government of India started a new Department of Environment, upgraded later into a full-fledged Ministry of Environment and Forests. New centers of ecological research were set up in Indian universities. Terms such as “ecological history,” “environmental sociology,” and “ecological economics” began entering the teaching curricula and research agendas of the academy. A new breed of “environmental journalists” came into existence, and their reports on forests, pollution, biodiversity, and grassroots struggles featured in newspapers and magazines.

    In those years, thinkers and activists in India played a profound role in shaping global conversations about humanity’s relationship with nature as well. Indian scholars proposed the idea of “livelihood environmentalism” in contrast to the “full-stomach environmentalism” of the affluent world. Some of the most pungent criticisms of excessive consumption in the West came from Indian writers. Scientists such as Madhav Gadgil and A. K. N. Reddy, journalists such as Anil Agarwal and Sunita Narain, and activists such as Medha Patkar and Ashish Kothari acquired international reputations. Ideas first developed in India were discussed and debated in other countries and continents.

    And then there was the early and original example of Rabindranath Tagore — the poet, novelist, playwright, and essayist; the man who transformed the Bengali language through his prose; the composer who wrote hundreds of songs and set them to music, many of which are still sung decades after his death, among them the national anthems of Bangladesh and of India; the first Asian to win a Nobel Prize and the founder of a major university; the friend of Mahatma Gandhi and the mentor of Jawaharlal Nehru; the painter who took up his brush in his late sixties; the restless traveler who made three trips to Japan and five trips to the United States, and spent time in Europe, Latin America, China, Indonesia and Iran, winning friends, admirers and the occasional critic in all these places. But he was also one other thing. He was a precocious environmentalist — an unacknowledged founder of the modern environmental movement. 

    It is past time to recover Tagore’s thoughts on how nature shapes human life and how humans shape nature. His writings on the use and abuse of nature were more important to his worldview and literary achievement than has generally been recognized. (There are a few notable exceptions: in the 1960s, Niharranjan Ray and G. D. Khanolkar wrote insightfully on the subject.) The rehabilitation of Tagore’s environmental thought is not merely of academic interest; his words and warnings speak directly to the environmental challenges that confront India and the world today. 

    Rabindranath Tagore was born in 1861, into a family of wealth and privilege. In his memoirs, he says of the Bengali children’s primer which gave him the first elements of an education that the “two literary delights that still linger” in his memory were both images of nature: “the rain patters, the leaf quivers” and “the rain falls pit-a-pat, the tide comes up the river.”

     

    Tagore grew up in the family home in the north Calcutta locality of Jorasanko. This was a three-storied mass of buildings built around several courtyards, in which lived several generations of family members as well as their maids, cooks, and bearers. A room that Rabindranath frequented as a little boy had a window, from which he saw “a tank with a flight of masonry steps leading down into the water; on the west bank, along the garden wall, an immense banyan tree; to the south a fringe of cocoa-nut plants. Ringed round as I was near this window, I would spend the whole day peering through the drawn venetian shutters, gazing and gazing on this scene as on a picture-book.”

    In this pool adjacent to the Tagores’ house, their poorer neighbors came to bathe. The child watched them with fascination, noting their idiosyncrasies — one man “who would never step into the water himself but be content with only squeezing his wet towel repeatedly over his head,” another “who jumped in from the top steps without any preliminaries at all,” a third who “would walk slowly in, step by step, muttering his morning prayers the while.” As the morning progressed the line of bathers grew thinner, until the “bathing-place would be deserted and become silent. Only the ducks remained, paddling about after water snails, or busy preening their feathers…” 

    Once the humans had departed, the boy’s attention wandered to the birds, and then to a large tree that stood at the tank’s edge. He was fascinated by the “dark complication of coils at its base.” It was of this tree that many years later the poet wrote:

    With tangled roots hanging down from your branches, O ancient banyan tree,

    You stand still day and night, like an ascetic at his penances,

    Do you ever remember the child whose fancy played with your shadows?

    Of his childhood encounters with nature Tagore was to remark: “How intimately did the life of the world throb for us in those days! Earth, water, foliage and sky, they all spoke to us and would not be disregarded.”

    Tagore’s memoirs were written when he was approaching the age of fifty. Towards the end of the book, he reflected on what nature had meant to him over the course of his life:

    From my earliest years I enjoyed a simple and intimate communion with Nature. Each one of the cocoa-nut trees in our garden had for me a distinct personality. When, on coming home from [school], I saw behind the sky-line of our roof-terrace blue-gray, water-laden clouds thickly banked up, the immense depth of gladness which filled me, all in a moment, I can recall clearly even now. On opening my eyes every morning, the blithely awakening world used to call me to join it like a playmate; the perfervid noonday sky, during the long silent watches of the siesta hours, would spirit me away from the workaday world into the recesses of its hermit cell; and the darkness of night would open the door to its phantom paths, and take me over all the seven seas and thirteen rivers, past all possibilities and impossibilities, right into its wonderland.

    Tagore’s family owned vast estates in the eastern part of the Bengal Presidency. When he was in his twenties, his father commanded him to oversee the management of their holdings. He went reluctantly, loath to leave his family and the city, but once there he fell in love with the landscape of deltaic Bengal. His letters to his family are redolent with natural imagery, as he delicately describes the interplay of land, water, plants and animals. Here is a typical example, from a letter written by Tagore to his niece Indira Debi sometime in the 1890s, from an unnamed place in eastern Bengal:

    Our boat is moored off a lonely grass-covered island in the river. The world is at rest. What a glorious day it is, today! Such loveliness all around! After many days I am really meeting Mother Earth again, and it is as if she says “Here he is.” and I reply “Here is she.” We sit side by side without stir or speech. The water gurgles, the sunlight sparkles, the sand crunches. Tiny wild shrubs crane their heads to watch. A stray bird gets up calling chik-chik. It’s all like a dream, and I feel like writing on and on, just about that and nothing else — the gurgle of the water, the glitter and shimmer of the sunshine, and all the dreaminess of this island. I want to wander day after day along these sandy banks, and write about nothing but this — oh, how badly I want to!

    And this, from another letter by Tagore to his niece Indira, written in 1895 from the family estate in Shelidah in eastern Bengal:

    We can draw a deep and secret joy from nature only because we feel a profound kinship with it. These green, fresh, ever-renewing trees, creepers, grasses and lichens, these flowing streams, these winds, the ceaseless play of light and shade, the cycle of the seasons, the stream of heavenly bodies filling the limitless sky, the countless orders of life — we are related to all this through the blood-beat in our pulse — we are bound by the same rhythm as the entire universe.

    These letters, written in Bengali, bear comparison to the writings of American naturalists in the nineteenth century, who explored new landscapes with wonder and excitement, capturing their diversities of species and habitats in their prose. The “profound kinship” that Tagore felt with nature makes him akin to John Muir. We know Muir as a pioneering American environmentalist, but this label has thus far been denied the Indian poet, perhaps because his attention to nature, though impressive enough, was merely one part of his extraordinarily various and multi-faceted achievement.

     

    When he was about thirteen, Rabindranath was taken by his father for a long holiday in the Himalayas. Above the town of Dalhousie (in present-day Himachal Pradesh) the family rented a bungalow. As they climbed up from the plains — father and son being carried on a palanquin by bearers — the terraced hillsides “were all aflame with the beauty of the flowering spring crops.” The boy’s “eyes had no rest the livelong day, so great was my fear lest anything should escape them. Wherever, at a turn of the road into a gorge, the great forest trees were found clustering closer, and from underneath their shade a waterfall trickling out, like a little daughter of the hermitage playing at the feet of hoary sages rapt in meditation, babbling its way over the black moss-covered rocks, there the jhampan bearers would put down their burden, and take a rest. Why, oh why, had we to leave such spots behind, cried my thirsting heart, why could we not stay on there for ever?”

    When he was sixteen, the young man was sent to England for a spell. He first lived in London, where he tried, unsuccessfully, to learn Latin from a tutor. He went for an excursion to the Devon countryside, and was charmed by what he saw. “I cannot tell you how happy I was,” he wrote later, “with the hills there, the sea, the flower-covered meadows, the shade of the pine woods…” One day he walked down to the coast, where he found a “flat bit of overhanging rock reaching out as with a perpetual eagerness over the waters; rocked on the foam-flecked waves of the liquid blue in front, the sunny sky slept smilingly to its lullaby; behind, the shade of the pines lay spread like the slipped-off garment of some languorous wood-nymph.” His sensitivity to the natural world was preternatural.

    Tagore had been traveling to Europe from the time he was a young boy. It was in 1916, when he was in his mid-fifties, that he visited Japan for the first time. It was a long and leisurely journey by ship, from Calcutta to the capital of British Burma, Rangoon, and from there to the town of Penang in British Malaya, and from there on to the Chinese city of Hong Kong, also a British possession, and from Hong Kong to his final destination, the island nation of Japan, a country that had never been ruled by Europeans (and which thereby added to Tagore’s fascination for it). En route the poet kept a diary, where he recorded his impressions of the ever-changing human and natural world that he and his traveling companions were to encounter.

    Tagore’s first impressions of the premier port of Malaya were altogether pleasant. 

    Our ship reached the port of Penang just as the sun was setting. Seeing the water and land clasp each other in a bond of love, I had a deep sense of the earth’s beauty. The earth, stretching both its arms, was embracing the sea. The faint rays of light that pierced the clouds and fell on the bluish mountains were like the thin vein of gold that covers the face of a bride without completely hiding her features. Water, land and sky played together a divine tune from the gates of heaven as the evening approached. 

    But as the great steamship he was on prepared to dock, Tagore’s mood grew darker. 

    As our ship slowly drew near the wharves, the full horror of the great effort of man to overcome nature became conspicuous: the machine was cutting with its sharp, angular claws into the soft curves of nature. What ugliness the enemies of man within him can create! On every beach, in every port, the greed of man is making grotesque faces at the sky — and thereby banishing itself from the kingdom of heaven.

    In 1927, a decade after his journey to Japan via various British-ruled ports, Tagore chose to visit the Dutch East Indies. This time the poet does not appear to have kept a detailed diary on board. Fortunately, we have a letter he wrote to his niece Pratima describing his first impressions of the little and predominantly Hindu island of Bali. 

    When we crossed over to Bali we saw the Earth in all the freshness of its eternal youth. The old centuries here have their ever new incarnation. The habitations of its people nestle in the lap of shaded woodlands, lulled in a limpid leisure — a leisure decorated with preparations for frequent festivity. In this secluded little island there are no railroads. The railway train is the vehicle of modernity. The modern age is miserly, and reluctant to make provision for any kind of surplus; time is money, says the modern man and, in order to avoid any waste of it, the panting locomotive perspires smokily as it thunders on from country to country. But in this island of Bali the modern time has spread itself over the past centuries and become one with them. It has no need to shorten time, for everything belongs here to all time, as much to the past as to the present. Just as its seasons flow along, opening out flowers of many a color, ripening fruits of many a flavor, so also do its people live on from generation to generation, sustaining the superfluity of their traditional ceremonials, rich in form and color, song and dance.

    Tagore’s evocation of the rural-ecological idyll of Bali continues:

    But if railways are not, there is the modern globe-trotter, and for him there are motor cars. What if this child of a constricted age has come into the land of unbounded leisure — he must all the same get through his sight-seeing and his enjoyment within the minimum time. For myself, as I was being whirled along by hills and woods and villages, raising clouds of dust, I felt all the while that this was above all the place where one should walk. There is not much of a loss if one’s eyes are raced over rows of buildings lining a street, but where, on either side of the road, feasts of beauty offer their regalement, this steed of emergency should be kept interned in its garage.

    In the 1920s, the motor car was widely admired as a sign of progress and human achievement, though from our own perspective a hundred years later one cannot but see it as having contributed rather substantially to global warming. Tagore was not prophetic enough to recognize this, of course. Yet it would be a mistake to portray him, as some of his contemporaries did, as an anti-modern Luddite. Rather, he was working his way towards a vision where technological innovation would serve humans and harmonize with nature, rather than dominate them both utterly. He appreciated Bali because its inhabitants did not seek to conquer time or space in the manner of the residents of New York or London. In this little island, the fields, the houses, the modes of transport, all reflected a way of life that sought to blend and merge culture with nature.

    Tagore prized technological innovations on the human scale, where man was a partner rather than a servant of the machine. This passage, written as his ship was entering Penang, is suggestive:

    In the harbor we saw many small boats. There are few things created by man as beautiful as these small sailing boats that skim over the surface of the water to the rhythm of the wind. Indeed, when men have to move in tune with nature their creations cannot be anything but beautiful. The boat has to make friends with the winds and the waves, and so it comes to partake of their beauty; whereas a machine, pretending to look down upon nature from the pinnacle of its power, only displays by this vanity its own ugliness. A steamship has many advantages over a sailing vessel, but the beauty has been lost.

    Decent English translations of Tagore’s early nature poetry are hard to come by. Among the exceptions is a sonnet by Tagore entitled “Sabhyatar Prati” (“To Civilization”) and originally published in 1896 in his collection Chaitali. The poem sharply contrasts the soulless, denaturalized, and concretized city of Calcutta with the verdant beauties of the rural landscape of eastern Bengal. As translated by the Bangladeshi scholar Fakrul Alam, the sonnet reads:

    Give back the wilderness; take back the city —
    Embrace if you will your steel, brick and stone walls
    O newfangled civilization! Cruel all-consuming one,
    Return all sylvan, secluded, shaded and sacred spots
    And traditions of innocence. Come back evenings
    When herds returned suffused in evening light,
    Serene hymns were sung, paddy accepted as alms
    And bark-clothes worn. Rapt in devotion,
    One meditated on eternal truths then single-mindedly.
    No more stone-hearted security or food fit for kings —
    We’d rather breathe freely and discourse openly!
    We’d rather get back the strength that we had,
    Burst through all barriers that hem us in and feel
    This boundless universe’s pulsating heartbeat.

    Tagore’s most famous poem is the verse sequence Gitanjali, which won him the Nobel Prize in 1913. The previous year he was in London, where, at the home of the painter William Rothenstein, he read some of his early poems in his own English translations. In attendance was his friend C. F. Andrews, who later provided a report of the soirée for readers back in India. Andrews remarked: “At every verse the Bengal scenery — the Monsoon storm clouds, the surging seas, the pure white mountains, the flowers and fields, the lotus on the lake, the village children at play, the market throng, the pilgrim shrine—came before the eyes, molded into melodies of exquisite sweetness.” And twenty years after the publication of Gitanjali, Tagore composed a volume of poems called Banabani (The Voice of the Forest), which approached trees and the forest in a mystical and religious spirit. The opening poem of the collection “Vrikshavandana” (“A Prayer to the Tree”) reads, in translation:

    O Tree, you are the adi-prana [first or original breath], you were the first to hear the call of the sun and to liberate life from the prison-house of the rock. You represent the first awakening of consciousness. You brought to the earth beauty and peace. Before you the earth was speechless; you filled her breath with music.

    This was published when Tagore was in his seventies. His poems thus display a lifelong engagement with nature, with what plants, trees, birds, animals, as well as land, sky and water, meant to him and the human world to which he belonged, and also with the human responsibility for not damaging nature’s blessings and wonders.

    In 1914, Tagore published his first extended piece of writing in English. It provided an outline of his thinking on morality, aesthetics, and faith. The tract, called Sadhana, began by speaking of the role of forests in Indian culture. It “was in the forests that our civilization had its birth,” he declared, “and it took a distinct character from this origin and environment. It was surrounded by the vast life of nature, was fed and clothed by her, and had the closest and most constant intercourse with her varying aspects.” 

    “To realize this great harmony between man’s spirit and the spirit of the world,” continued Tagore, 

    was the endeavour of the forest-dwelling sages of ancient India. Even after the forests had given way to cultivated fields, and cities and kingdoms had emerged, Indians continued to look back with adoration upon the early ideal of strenuous self-realization, and the dignity of the simple life of the forest hermitage, and drew its best inspiration from the wisdom stored there.

    Tagore contrasted this attitude with that of the West, which “seems to take a pride in thinking that it is subduing nature; as if we are living in a hostile world where we have to wrest everything we want from an unwilling and alien arrangement of things.” Tagore argued that “in the west the prevalent feeling is that nature belongs exclusively to inanimate things and to beasts, that there is a sudden unaccountable break” between humans and nature. He firmly rejected such a view. “The Indian mind,” he claimed, “never has any hesitation in acknowledging its kinship with nature, its unbroken relation with all.”

    In another essay, published five years later, Tagore called the forests “the one great inheritance” of India and Indians. He offered an intriguing contrast between how forests shaped Indian history and how the sea had shaped the history of northern Europe. “In the sea,” he wrote, “Nature presented itself to these [European] men in her aspect of a danger, of a barrier, which seemed to be at constant war with the land and its children. The sea was the challenge of untamed Nature to the indomitable human soul. And man did not flinch; he fought and won…” Tagore contrasted the European conquest of the sea with the level tracts of peninsular India, where “men found no barrier between their lives and the Grand Life that permeates the Universe. The forest gave them shelter and shade, fruit and flower, fodder and fuel; it entered into a close living relation with their work and leisure and necessity, and in this way made it easy for them to know their own lives as associated with the larger life.” 

    This essay of 1919 was called “The Message of the Forest,” the title mirroring that of his early volume of poems. Three years later Tagore published a sequel, which he titled “The Religion of the Forest.” Here he spoke of how in ancient India, “the forest entered into a close living relationship” with the work and leisure of humans. They did not therefore think of their natural surroundings as “separate or inimical.” So “the view of truth, which these men found, did not manifest the difference, but the unity of all things.” 

    This view of a primordial attachment of Indians to forests was perhaps somewhat rose-tinted. The texts and the scriptures of ancient India by no means speak in one voice on this matter. While the Upanishads do talk of the unity of all creation, and Sanskrit drama does contain moving evocations of nature, one must not overlook the episode in the Mahabharata where the burning of the Khandava forests and the killing of its animals is celebrated as proof of the advance of civilization, the necessary and even mandatory conquest of primitive hunters and gatherers by a sophisticated agrarian civilization. And surely Tagore saw that many, perhaps most, Indians of his day treated forests in severely utilitarian terms, as a source of raw materials rather than of pleasure or spiritual upliftment. Perhaps the writer wanted to believe that his own love for nature, and for forests in particular, was not an idiosyncratic individual taste but a deep and enduring civilizational inheritance.

    Tagore was a builder of institutions. Most significantly, he founded Santiniketan, which began as a school for boys in 1901 and grew into a full-fledged university, and Sriniketan, an accompanying experiment in the renewal of village life, which was started in the 1920s. Both were based in rural Bengal, in what is now the district of Birbhum, about three hours by train from Calcutta. They were located originally on a property owned by the Tagore family, with more lands being acquired over the years, as the institutions expanded in size and grew in numbers.

    Krishna Kripalani, a scholar who worked closely with Tagore, summarized his educational ideals in terms of ten maxims, of which the first is: “The child should be brought up in such environments as would provide him with opportunities of direct and close contact with Nature. Civilized existence in society imposes, in any case, such severe restraints on the first, fresh and vital impulses of life that human nature tends to be perverted unless its impulses are renewed and revitalized with constant reference to Nature.” Other maxims include learning through the mother tongue, an equal emphasis on individual initiative and group action, and an appreciation of cultural heritage. Nature makes a reappearance in the sixth maxim, which Kripalani glosses as: “When the child’s senses have been trained to a proper awareness of his surroundings and he has learnt to observe and love Nature, his experiences should then be made intelligible to him, at a later stage, in terms of scientific categories.”

    In July 1927, by which time the school he started, Patha Bhavan, had been in existence for a quarter of a century, Tagore found himself speaking to the Indian Association in Singapore, to an audience of parents whose own children were educated in a resolutely metropolitan environment. He told them of his own school, where “boys are taught amidst natural surroundings. They grow up in the midst of the sights and sounds of Nature, among trees, birds, in the open air. This school seeks to enable my boys to realize their bond of unity with Nature.” In another speech in Singapore, to a gathering of children and teachers in the city’s Victoria Theatre, Tagore expanded on his method of learning in and with nature. He told his audience about how and where the children in his school had their lessons:

    We have a mango grove. It is full of shade, and in summer, full of the beautiful perfume of the mango blossoms and there are innumerable birds and moths and all kinds of insects living on them. This you may think might distract their attention. But that is not so. I allow them sometimes to have their lessons and to look more closely at some of the things which attract their eyes. Very often they call my attention to some strange birds that have come and perched on the bough — “Sir, look at the bird! What bird is that?” — right in the middle of their lesson. And then I talk to them about that bird… They should observe that bird. It would have been wrong were their minds absolutely dull to these impressions, and I would much rather be interrupted in my lessons than force them to keep their minds only on what has been placed before them. Often, again, they would speak to me of their admiration for something unusual — such as an especially fine bunch of mango leaves. I find that helps them, and that this constant movement of their mind is necessary for them. It is the method which nature has adopted in her own school for the young.

    In 1921, Santiniketan (the Abode of Peace) became home to a new and more advanced educational experiment, a university which carried the name Visva-Bharati, indicating its ambition to bring the world (visva) to India (bharat), as well as to take India to the world. The university went on to have departments dedicated to the study of Japan and China, to the study of classical and contemporary Indian languages, as well as a celebrated art school.

    The land acquired for the university’s construction was dry and bare. To make the place more appealing to the eye as well as more conducive to the sort of learning he desired to impart, Tagore inaugurated in 1928 what was to become an annual festival. The Briksharopan (tree-planting) ceremony was held in July, shortly after the onset of the monsoon. In a play staged on the occasion, the five basic elements of nature — earth, water, sunlight, air and sky — were represented by five students playing these roles. Saplings of carefully chosen (and mostly indigenous) species were planted by boys and girls with loving care, the ceremony accompanied by music and poetry. Over the decades, these saplings, now full-grown, helped transform a barren landscape into one dotted with trees and groves.

    In a lecture to the Santiniketan community, Tagore explained his idea behind Briksharopan:

    Man’s greed grew as he received Mother Earth’s bounty. … Men cut down trees to meet their endless needs and stripped the Earth of shade. As a result, the air became increasingly hotter, while the fertility of the soil increasingly diminished. That is how northern India, deprived of its shelter of forests, now lies scorched by the harsh rays of the sun. With all this in our minds, we initiated a tree-planting ceremony to teach the children to replenish the plundered stores of Mother Earth.

    The tree-planting ceremony was one of several festivals — Spring Festival, Welcoming the Monsoon, Autumn Festival, Ploughing Ceremony, Harvest Festival — begun in Santiniketan by Tagore, with a view to nurturing among students an affectionate and caring relationship with nature, so that they could seek to harmonize their own lives with its rhythms and variations.

    Tagore took great care in choosing the shrubs and trees that surrounded his homes in Santiniketan, making sure that there were flowering plants throughout the year. In the campus as a whole, there were groves dedicated to specific species: one for the stately sal trees; another for the trees bearing the most delicious of all fruits, the mango; and so on. When he was away from Santiniketan, Tagore’s letters home often asked about the plants and trees he had left behind or hoped would flourish in his absence. In the summer of 1933, he wrote to his daughter Mira: “Ask them [the staff] to plant neem, shirish, and other trees on the street that leads to my room this monsoon. It’s not a bad idea to plant a few jackfruit trees either.” 

    In these efforts to plant up Santiniketan with trees and flowers, Tagore was surely inspired by the verdant landscape of eastern Bengal in which he had spent so much time in his youth. With an arid, sandy soil, and with far less water available, the place where the university was located could never remotely parallel the natural beauty of the Padma river and its surroundings, but it could still be made green and pleasant and welcomingly habitable. And so, under the poet’s guidance and instruction, it became.

    Tagore grew up in the city, but became increasingly disenchanted with urban lifestyles. As the writer Aseem Shrivastava observes, Tagore believed that “the ecological alienation of metropolitan life profoundly cripples our sensibility, leaving humanity in a self-destructive state of spiritual destitution.” The poet was thus encouraged to locate his educational experiment, Santiniketan, deep in the countryside rather than anywhere near the city of Calcutta. For Tagore, “open skies, planted fields, and swaying palms [were] more essential to untrammeled learning and the formation of the mind than the hectic cultural exchanges a modern metropolis affords (and a village denies).”

    A century before Tagore, English writers had responded in a similar fashion to the radical alterations in the natural landscape that the expansion of cities such as London represented. In his classic book on the subject, Raymond Williams explains why, for poets in particular, the country conveyed a more appealing ecological aesthetic than the city: “The means of agricultural production – the fields, the woods, the growing crops, the animals — are attractive to the observer and in many ways and in the good seasons, to the men working in and among them. They can then be effectively contrasted with the exchanges and counting-houses of mercantilism, or with the mines, quarries, mills and manufactories of industrial production.” 

    Like his English forebears, Tagore saw modern cities as being parasitic on the natural resources of the countryside. At the same time, he was not unduly romantic about the village life that he had witnessed at first hand. His family owned large tracts of agricultural land in eastern Bengal. He and his brothers were sent by turn to manage them. Rabindranath was assigned this responsibility in the early 1890s, by which time he was an established poet, admired and much feted in Calcutta. In a lecture given many years later, Tagore wrote of how in this first extended experience of the countryside, “gradually the sorrow and poverty of the villagers became clear to me, and I began to grow restless to do something about it. It seemed to me a very shameful thing that I should spend my days as a landlord, concerned only with money-making and engrossed with my own profit and loss.” Over the next decade, as Tagore spent more time on his estates, these feelings of guilt intensified. He wished to ameliorate the poverty of the peasants through constructive social work. In 1906, he sent his son, his son-in-law, and a friend’s son to the University of Illinois at Urbana-Champaign to study modern methods of agriculture and dairying, with a view to implementing them in India.

    Tagore’s philosophy was anti-industrial but not anti-modern. He wished to renew village life with the principles and techniques of modern science. That is why he sent his son to study agricultural technology in Illinois. But the son did not prove entirely worthy of his father, so Tagore went looking for someone else who could scientifically supervise programmes of rural uplift in the villages around Santiniketan. He found him in the person of an idealistic young Englishman named Leonard Elmhirst, whom he met in New York in 1920.

    Born in 1893, the son of a Yorkshire curate, Elmhirst had studied history at Cambridge before enlisting in the Army during the First World War. He fell sick in Mesopotamia, and came to India to recuperate. There he became interested in agriculture, through meeting the British missionary Sam Higginbottom, who ran an experimental farm outside the northern Indian city of Allahabad. This encouraged Elmhirst to go to Cornell University in upstate New York to study agricultural science. In November 1920, when Tagore was in New York, he heard of the young Englishman and arranged to meet him. This is how, years later, Elmhirst recalled Tagore’s words to him at that meeting:

    I have started an educational enterprise in India which is almost wholly academic. It is situated well out in the countryside of West Bengal at Santiniketan. We are surrounded by villages, Hindu, Muslim, Santali. Except that we employ a number of these village folk for various menial tasks in my school, we have no intimate contact with them at all outside their own communities. For some reason these villages appear to be in a state of steady decline. … Some years ago I bought from the Sinha family a farm just outside the village of Surul, a little over a mile from my school. I hear that you might be interested in going to live and work on such a farm in order to find out more clearly the causes of this decay.

    Elmhirst came out to Santiniketan in November 1921, a year after meeting Tagore in New York. The experiment in Surul originally was called the Institute for Rural Reconstruction, before Tagore came up with the crisper and more elegant “Sriniketan.” Tagore asked Elmhirst to find better methods for villagers to grow their crops and vegetables, to help them gain access to credit and get a fair price for their produce. He also hoped to augment their farm income with cottage industries such as rice milling and umbrella making.

    In Sriniketan, Elmhirst began taking Bengali lessons. In January 1922, Tagore told him that ten Santiniketan students had come to him and were keen to do work in villages after graduating. Since they knew both Bengali and English, they could assist the foreign-born expert in his activities. Tagore now instructed Elmhirst: 

    Stop your Bengali lessons. If you learn too much Bengali yourself you’ll want to go on your own to the village to ask questions. You will then make the great mistake of trying to become indispensable to this enterprise like any foreign missionary. I want you never to go alone to any village but always to take with you either a student or a member of your staff to act as interpreter. Only in this way will they learn what kind of questions you ask and just how the farmers and villagers frame their answers. These answers they will then have to interpret back to you. In this way they will never forget the experience.

    After Elmhirst had been in Sriniketan for a couple of years, Tagore told him that it was time to move on, so as “to give the Indian staff of the young Institute a chance to find their own feet.” Elmhirst traveled with Tagore to China and Japan in 1924, and from there to Latin America. The following year, with the poet’s blessings, Elmhirst married the American heiress Dorothy Straight, and the couple now set up home in rural England in a medieval manor called Dartington Hall, which they refurbished and made the center of an experimental farm. Straight helped to support Sriniketan financially, while her husband remained in close touch with Tagore.

    Tagore was a keen observer of the natural world, yet what we might call his “nature aesthetic” was merely one element of a wider ecological consciousness. He was sharply critical of the environmental devastation caused by unbridled industrialization. When the poet was growing up, the Hooghly river had homes and farms all along it, but then the banks became dotted with factories. Writing in 1916, he observed that he was fortunate in having been born before the iron flood of ugliness began clinging to the river banks near Calcutta. 

    At that time the embankments of the Ganges, like the arms of the villages on the banks, embraced their people and kept them close to their bosom. In the evenings people would go for boat-rides on the river. The current of the people’s hearts and the flow of the river — there was no hard and ugly demarcation between them. The beauty of the Bengali countryside could be seen even in the immediate vicinity of Calcutta. As commercial civilization began to spread, however, the beauty of the countryside was slowly and steadily obscured, to the point where now Calcutta has segregated all of Bengal from the lands surrounding it. The vernal beauty of the country has succumbed to the hideous form of Time, showing its iron teeth, belching smoke and fire.

    The following year, in a lecture in America, Tagore warned thus against the dehumanizing and destructive cult of the machine: “Take man from his natural surroundings, from the fulness of his communal life, with all its living associations of beauty and love and social obligations, and you will be able to turn him into so many fragments of a machine for the production of wealth on a gigantic scale. Turn a tree into a log and it will burn for you, but it will never bear living flowers and fruit.”

    During the course of Tagore’s life, the city of his birth became a bustling industrial powerhouse. In the second half of the nineteenth century, jute mills proliferated in and around Calcutta, processing the raw fiber grown in eastern Bengal into packing material sent all around the world. The first jute mill was established on the Hooghly in 1855, six years before Tagore was born. In 1869, when he was a little boy, there were a mere five mills with nine hundred and fifty looms operating. By 1910, when Tagore was approaching the age of fifty, there were a staggering thirty thousand looms in operation, exporting more than a billion yards of cloth. The waterways around Tagore’s native city, whose banks once featured little hamlets and fishing boats, were now lined with the large chimneys of the ever-proliferating jute factories, emitting tons of smoke. This transformation repelled him. 

    In 1880, when he was in his late teens, Tagore went for a boat ride up the Hooghly, from Calcutta to the French enclave of Chandannagar, where his brother Jyotindra had a riverside home. Tagore had just returned from a spell in England, and this re-immersion in the Bengal countryside was for him a joyous experience. Of the boat journey and the stay on the river’s banks in Chandannagar, he wrote:

    The Ganges again! Again those ineffable days and nights, languid with joy, sad with longing, attuned to the plaintive babbling of the river along the cool shade of the wooded banks. This Bengal sky full of light, this south breeze, this flow of the river, this right royal laziness, this broad leisure stretching from horizon to horizon and from green earth to blue sky, all these were to me as food and drink to the hungry and thirsty. Here it felt indeed like home, and in these I recognised the ministrations of a Mother.

    That was back in 1880. By the time Tagore came to pen his reminiscences thirty years later, much had changed. 

    That was not so very long ago, and yet time has wrought many changes. Our little river-side nests, clustering under their surrounding greenery, have been replaced by mills which now, dragon-like, everywhere rear their hissing heads, belching forth black smoke. In the midday glare of modern life even our hours of mental siesta have been narrowed down to the lowest limit, and hydra-headed unrest has invaded every department of life. Maybe this is for the better, but I, for one, cannot account it wholly to the good.

    Such passages inevitably recall the great British poets of the eighteenth and nineteenth centuries, who were likewise repelled by the outrage done to nature by the expansion of cities and factories, and who wrote so movingly (and despairingly) about it. Blake (“dark Satanic mills”), Wordsworth, and John Clare come immediately to mind, though perhaps most akin to Tagore’s thinking was William Morris, who, while by no means in the same league as the other three as a poet, had, like Tagore, many interests in life outside his poetry, being an activist and a builder of institutions. Consider this passage from Morris’s long narrative poem “The Earthly Paradise,” from 1868–1870, which begins by asking the reader to —

    Forget six counties overhung with smoke,
    Forget the snorting steam and piston stroke
    Forget the spreading of the hideous town;
    Think rather of the pack-horse on the down,
    And dream of London, small, and white, and clean,
    The clear Thames bordered by its garden green…

    Morris wished for a harmonious relationship between the city and the countryside, and between humanity and nature. And so did Tagore. The parallels between these writers are owed to their shared experience continents apart. They lived through a similar historical process — the radical transformation of landscapes and social relations that modern industrialization brought with it. 

    Unlike those British poets, whose thoughts and experiences were confined to their island nation or at most to a few culturally (and ecologically) akin countries of the Continent, Tagore had a global vision. He had travelled all over the world, encountering many different landscapes, cultures, religions, and ways of life other than those of his native Bengal. As an Indian living under British rule, moreover, he had an understanding of what Britain had wrought in its colonies, something denied to those (otherwise so gifted and acutely sensitive writers) who lived in Britain itself.

    This wider understanding of the modern world is strikingly manifest in some passages of Tagore’s famous tract, Nationalism, from 1917, which includes a profound ecological message that has escaped most commentators. Since the book was originally written in English, it has been far more widely read than his poems, plays, and stories, which first appeared in Bengali. Its readers have focused on its warnings against xenophobia and nationalist hubris while ignoring its powerful environmentalist critique of industrialism and imperialism. Here, for example, is the poet-turned-prophet analyzing the environmental consequences of European imperialism, while speaking of the devastation caused by the rampant greed and new technologies of the new industrial age:

    The political civilization which has sprung up from the soil of Europe [and] is overrunning the whole world, like some prolific weed, is based on exclusiveness. It is always watchful to keep at bay the aliens or to exterminate them. It is carnivorous and cannibalistic in its tendencies, it feeds upon the resources of other peoples and tries to swallow their whole future. It is always afraid of other races achieving eminence, naming it as a peril, and tries to thwart all symptoms of greatness outside its own boundaries, forcing down races of men who are weaker, to be eternally fixed in their weakness. Before this political civilisation came to its power and opened its hungry jaws wide enough to gulp down great continents of the earth, we had wars, pillages, changes of monarchy and consequent miseries, but never such a sight of fearful and hopeless voracity, such wholesale feeding of nation upon nation, such huge machines for turning great portions of the earth into mince-meat, never such terrible jealousies with all their ugly teeth and claws ready for tearing into each other’s vitals.

    Those words were spoken at a public event in Japan in 1916. That Asian nation was far more advanced, economically and industrially, than Tagore’s native India, yet he nonetheless hoped that Japan would restrain itself from going all the way down the route mapped by Europe. He reminded his hosts that too eagerly embracing the urban-industrial way of life would be a denial, even a repudiation, of their own culture, of “the spiritual bond of love she [Japan] has established with the hills of her country, with the sea and the streams, with the forests in all their flowery moods and varied physiognomy of branches…” Tagore urged Japan to offer the world a vision of humanity’s relations with nature rather different from that being envisioned and put into practice in modern Europe. The visiting poet reminded the Japanese that “the ideal of maitri [friendship] is at the bottom of your culture — maitri with men and maitri with Nature.” That ideal had to be renewed and reaffirmed, even if it might seem like “an anachronism, when the sound that drowns all voices is the noise of the market-place.” He defiantly stated his own belief “that the sky and the earth and the lyrics of the dawn and the dayfall are with the poets and the idealists, and not with the marketmen robustly contemptuous of all sentiment — that, after all the forgetfulness of his divinity, man will remember again that heaven is always in touch with his world, which can never be abandoned for good to the hounding wolves of the modern era, scenting human blood and howling to the skies.”

    Six years later, in 1922, Leonard Elmhirst, the newly appointed Director of the Institute of Rural Reconstruction in Sriniketan, gave a lecture on the renewal of village life. Elmhirst’s talk was prefaced by some introductory remarks by his mentor and employer, Rabindranath Tagore. Tagore offered a parable of environmental destruction, imagining that on the moon a new race of beings was born, “that began greedily to devour its own surroundings.” 

    Through machinery of tremendous power this race made such an addition to their natural capacity that their career of plunder entirely outstripped nature’s power for recuperation. Their profit makers dug big holes in the stored capital of the planet. They created wants which were unnatural and provision for these wants was forcibly extracted from nature. When they had reduced the limited store of material in their immediate surroundings they proceeded to wage furious wars among their different sections, each wanting his own special allotment of the lion’s share. In their scramble for the right of self-indulgence they laughed at moral law and took it as a sign of superiority to be ruthless in the satisfaction each of his own desire. They exhausted the water, cut down the trees, reduced the surface of the planet to a desert, riddled with enormous pits, and made its interior a rifled pocket, emptied of its valuables.

    This parable of what might happen on the moon resonated with what Tagore was witnessing on earth in his own time, where the age of industrialism and colonialism had led to an unprecedented assault on the earth and its resources. This imaginary race of rapacious beings on the moon, he continued, “behaved exactly in the way human beings of today are behaving upon this earth, fast exhausting their store of sustenance, not because they must live their normal life, but because they wish to live at a pitch of monstrous excess. Mother Earth had enough for the healthy appetite of her children and something extra for rare cases of abnormality. But she has not nearly sufficient for the sudden growth of a whole world of spoiled and pampered children.”

    I have chosen to cite Tagore’s own words at length not only to give the reader an experience of their prodigious beauty, but also to establish the depth and the prescience of his environmentalist thinking. He grasped in all its enormity the devastating environmental consequences of industrialism and imperialism, and anticipated by many decades the now influential idea of the ecological footprint, the impact of the vast and unsustainable demands that the production and consumption patterns of a particular nation or social class make upon the earth. Even though the term “environmentalist” had not acquired its present meaning in his lifetime, Tagore was indeed a pioneering environmentalist. In his lecture at Sriniketan, Tagore offered in passing an aphorism that can serve, a century and more later, as a maxim of environmental responsibility for our times. “When our wants are moderate, the rations we each claim do not exhaust the common store of nature and the pace of their restoration does not fall hopelessly behind that of our consumption.” 

    Blackbirds

    “She is brown,”

    I said to you, 

    less in annoyance

    than wonder 

    when she flew 

    past us with a certain flamboyance

    not over but under

    our gate

    to settle down

    into the tree beside her mate.

    “But he is black,”

    you replied,

    “and the name is his.”

    “As it always is,”

    I poked.

    “I was your bride

    and took your name,

    yet we are not the same.”

    You’d have joked

    back

    but couldn’t deny it.

    We grew quiet

    when we heard the blackbirds

    sharing words

    between them.

    Whose song 

    it was we would 

    never know, not having seen them

    sing. But it would be wrong

    to say, even if we could.

    Needlefish

    In that instant, 

    dear daughter,

    when they flashed

    like cupid’s arrow

    through the current 

    of saltwater

    where you splashed,

    more narrow

    and more terse

    than any gleam,

    I thought I felt 

    within my gut

    love’s old curse

    entering the dream —

    that through no fault

    of yours but

    beauty, fresh 

    as it is fierce, 

    you should become

    as bait

    to any fish

    whose point would pierce,

    as if from

    nowhere, while I wait.

    Ladybirds

    A ladybird, or ladybug (call it 

    what you will) has crept 

    onto my pillowcase —

    this one so small it 

    can hardly be seen. Except 

    I do see it; it is marking the place 

    where I slept

    like a bloodstain. 

    You shrug, tell me it’s good luck, 

    give our duvet a perfunctory sweep.

    But I cannot possibly sleep

    here: on the windowpane

    a new brood crawls and keeps

    watch — we are sitting ducks.

    It happens almost every night:

    cloaked in red,

    one marauder, or two, takes flight,

    infiltrates our bed

    with dishevelled wings — something thin and black 

    always trailing sideways from its back,

    which it eventually pulls tight

    as if tucking its own covers. 

    1. Semi-annual, this infestation

    nevertheless surprises us, like the changing of the clocks.

    The first few we discover

    have the charm of snowflakes

    (no two the same!), but soon a whole nation

    of scarlet flocks

    to the house: a British 

    invasion. Once adored

    but now persona-non-grata:

    isn’t it always the way? Two on the headboard

    are making me skittish —

    one in bloodred, poured

    with black, the other flecked terracotta.