Historians Killing History
In the aftermath of the Hamas attacks of October 7, the subsequent congressional hearings with university presidents, and the encampments that followed, academia has once again found itself at the center of the culture wars, from which it rarely strays far. On one side, critics denounce universities for “wokeness,” while on the other side, defenders of universities accuse the anti-woke critics of reactionary politics and bad faith. These battle lines are tediously familiar to anyone who paid attention to the history wars, which had until October 7 formed the principal theater in the academic culture wars. The combatants are occupying the same lines of trenches from which they fought over the 1619 Project, the Florida AP African-American Studies standards, and so on. Behind the coils of rusting barbed wire, the front has scarcely budged.
It is striking to me, as a somewhat rare creature — a military historian in civilian academia who has studied the work not only of Clausewitz but also of Foucault — how incomplete and self-serving are the arguments of both sides. But one need not know German military theory to appreciate that the narrow slits of pillboxes offer less than comprehensive views of reality. Nor need one know French critical theory to grasp that the tales people tell about themselves and their opponents do psychological work for them. Analyzing these tales requires attention to what they exclude as well as what they include. In all the hue and cry about the university, there has been virtual silence about the ostensibly unexciting subject of scholarly standards — those quaint things supposed to ensure that academics generate and communicate knowledge more rather than less rigorously. Instead, we have had a great deal of talk about ideology, or rather ideological corruption, by and about both academics and their critics. The driving question is not, “what is the evidence for your argument, and is it sufficient?” but “whose side are you on?”
Accusing each other of ideological corruption enables both sides to avoid reckoning with the collapse of scholarly standards in their own ranks. In effect, they have colluded to misdiagnose — or at best incompletely diagnose — the nature of the problem, and in a way that serves both their interests. To paraphrase the description of the Sirius Cybernetics Corporation’s products in the Hitchhiker’s Guide to the Galaxy, the superficial design flaw of ideology hides the fundamental design flaw of declining standards.
The indifference to scholarly standards should be evident to even casual observers of the academic culture wars. The late Harvard law professor Charles Fried justified his refusal to consider the accusations of plagiarism against Claudine Gay on the merits because of who was making them: they were part of an “extreme right-wing attack on elite institutions.” In strikingly similar language, the hedge-fund manager Bill Ackman refused to consider the accusations of plagiarism against his wife on the merits because they were part of “attacks on my family.” This is not the language of standards. This is the language of the bunker.
Likely less evident to the casual observer is the indifference to scholarly standards that has characterized the history wars in recent years, or the way in which shared silence about standards between ostensible opponents in the history wars anticipated the same shared silence in the academic culture wars more broadly. Too many historians, of varying ideological stripes, mimic the forms of scholarship without reproducing its substance. They make trips to archives, consult the secondary literature, and cite sources in footnotes, but their research lacks rigor and integrity. It is not scholarship, it is pseudo-scholarship. The intellectual incompetence — or dishonesty — of many critics of the historical profession simply mirrors that of a great deal of the profession itself. No wonder neither wants to look in the mirror: they would find their enemy, themselves, staring back out at them.
Reframing the problem in terms of standards rather than ideology is important for three reasons. First, it provides the basis for a vital center in academia, which is needed there just as much as in politics. This center, if it is to be real and not merely a band-aid over differences, cannot be defined by a priori ideological commitments; it must be defined by reinforcing commitments to process — be it scholarly or liberal-democratic — and to human dignity. A call for “process” may not sound like an inspiring blast from a trumpet, but (as I have argued previously in these pages) process secures moral substance, to which the heart should thrill. It channels historians towards humanizing and away from dehumanizing the people they study; it encourages them to treat those who lived in the past as subjects to be understood as fully as possible, not as mere objects to be over-simplified and manipulated according to historians’ whims. Thus standards, which distinguish between more and less rigorous process, also distinguish between more and less humanistic substance. Not coincidentally, liberal democracy requires the same humanism.
Second, thinking in terms of standards rather than ideology has implications for how we understand the directionality and the substance of the relationship between universities and politics in seeking to explain the dysfunctionality of both. One formulation has it that (left-wing) politics have corrupted universities; spaces that were once about the pursuit of objective truth have succumbed to the lure of pursuing ideological victory. Another, expressed by the statement that “we all live on campus now,” has it that universities have corrupted politics by leaking out dangerous (identitarian) ideologies. While these two formulations reverse the directionality, both see the substance of the transmission as ideological in character. Hence the transmission of low scholarly standards — the decline of the quality of evidence and argument — has also largely escaped notice and analysis.
Doubtless this transmission in and out of the academic bubble goes in both directions. But the transmission from universities to the public deserves special scrutiny, because the public is not responsible for upholding scholarly standards the way universities are. In effect, the American people have delegated to universities the responsibility for generating reliable and methodologically sound knowledge, and for educating the public (especially its young) in the rigors of the scholarly process through publication and teaching. As the price for accepting that responsibility, academics have demanded certain rights and privileges — most importantly, academic freedom and tenure — which amount to sovereign exemptions from regular political accountability and market forces. Simply put, tenured academics enjoy a degree of economic security available to few others. By accepting payment in the form of these special rights and privileges, academics have accepted special responsibility for upholding scholarly standards in trust for society. If and when they betray this trust, they must take special responsibility for the collapse of scholarly standards in society at large. The rough comes with the smooth — and whether or not academics accept it in principle, it comes in practice. What happens on campus does not stay on campus. When academic disciplines debase themselves through an aversion to rigor, precision, complexity, and non-conformism, when they behave indifferently and even adversarially toward the apparatus of evidence and logic, we should expect the same debasement of public discourse; and we find it everywhere now. This is social accountability more pervasive than legislatures can conjure.
Finally, reframing the problem in terms of standards rather than ideology makes visible the similarities between what has happened in academia and what has happened in other socially important institutions across the ideological spectrum. The police come to mind as a parallel. In return for promising to provide a public good — security — they too have claimed exemptions — “qualified immunity” — from normal political and economic accountability, while insisting on the right to hold everyone else to it. Rather than keeping their promise, they have too often failed to uphold standards internally.
My purpose in indicting this institutional degradation is not to endorse knee-jerk anti-institutionalism. We need institutions. But we also need those institutions to have the honesty and the self-confidence to look in the mirror and admit when they have behaved badly, no matter how badly their critics may be behaving. The resort instead to pointing the finger at outsiders is a terrible evasion, which goes far towards explaining America’s social and political dysfunction today.

I do not propose to attempt a historical account of why and how pseudo-scholarship conquered the historical profession, its critics, and its defenders. Instead I want to try to offer a snapshot view of the historical profession as it exists today. No such view can be comprehensive; there are too many people to know and too many things to read. Mine is based on my experience over a dozen years since being inducted into the profession. Conversations with colleagues have made me believe that my observations are shared by others. I offer them on that basis.
The single most important aspect of the collapse of scholarly standards in the historical profession is the inability to distinguish between rigorous primary-source research and superficial primary-source research, especially archival research. Primary sources refer to sources produced by the people historians are studying (as distinct from secondary sources, which, as a rule of thumb, refer to sources produced by other historians). Primary-source research is supposed to be the heart of what historians do: it is the very basis of original knowledge about the past. Yet far too many dissertations and monographs contain far too little original research, instead offering clever ideas and new interpretations resting on slender and shallow primary-source bases. This is no surprise, since most doctoral programs in history offer little to no training in primary-source research. Indeed, there is now a generation of professors who cannot teach students how to do rigorous primary-source research because they never learned how to do it themselves.
Perhaps the principal reason for this lack of training is that many historians regard research as easy and theory as hard. This is absurd. Primary-source research is a craft, and like other crafts it blends art and theory. If we loosely define “critical theory” as questioning everything that we think we know about the world as a potential attempt by powerful people to mislead us, then we might call good archival research applied critical theory. The reason is that good archival research is an exercise in examining the archives for what they are hiding and what they are missing. To do this, one must understand how archives are constructed and interpret sources in their archival context; one must not treat archives as existing in nature, or sources as possessing self-evident meanings independent of their archival contexts. The applied character of this critical theory tends to make its theory-ness invisible to historians socialized to expect theory in textual form.
In addition to requiring theoretical sophistication, good archival research is a grind. If a particular archive is central to a historian’s research, then answering their questions should require months of research there and supplementary follow-up visits, as they realize that they merely skimmed key files because they had failed to understand the files’ potential relevance. This type of research is time consuming, difficult, and expensive. One may get lucky and have the help of a finding aid — or one may have to trawl through hundreds if not thousands of feet of uncatalogued boxes. One takes the tedium and the boredom along with the satisfaction and the pleasure because it really is the only way to develop a sense of a period and a subject, of the many roads not taken along with the few that were, of the universe of information that might once have existed. Without a sense of the whole, one cannot make arguments based on the parts that one has seen with appropriate qualification and precision.
Too much of what passes for serious archival research these days is, to my eye, nothing of the sort. It is not the work of people who went to archives with a commitment to understanding their construction and a sense of excitement about learning new things. Instead it is the work of people who went to archives looking to anoint their argument with a few references for a patina of authenticity. Such historians chase trends rather than proceed from intellectual curiosity, tell tidy stories stripped of the messiness of real life, and produce the historical equivalent of fast fashion, which gets rewarded because the profession can no longer distinguish it from couture. It is a devastating indictment that the techniques of dumbing-down and appealing to emotion that are used to achieve social-media virality also help historians to achieve professional renown.
The failure to master the craft of archival research leads to failures of what historians call “source criticism.” If a historian is interested in government policy, for instance, they need to understand that confining their research to the official records of the most central and highest-level bodies will not give them the same degree of insight as scholars who also take the trouble to consult the records of lower-level departments and other collections. Similarly, they have not done the same source criticism as another scholar if they order a photocopy or scan of a single document from a file cited by the other scholar and do not take the trouble to visit the archive to see the whole file. This is basic stuff, or it ought to be. It is appalling how many historians do not seem to understand these fundamentals.
The inability of too many historians to distinguish between serious and superficial primary-source research is closely connected to what I would describe as functional historiographical illiteracy. My sense is that far too few historians today understand the difference between reading a book and reading in a book. When historians read a scholarly work, they should be reading, in the first instance, to try to understand what questions the author was trying to answer and why the author made the choices that they did. Readers may still end up disagreeing with the author’s answers or thinking that they made bad choices — but readers owe authors the courtesy of trying first to understand them on their own terms. Too many historians skip that step and default straight into criticism mode, especially if what they are reading conflicts with their existing views or their prior sense of how the world works.
I doubt that I am the only historian to have spent graduate school positively reveling in criticism mode. It filled my callow mind with such a sensation of power. But we are supposed to have such immaturity knocked out of us by strict yet compassionate supervisors, and by the process of writing a carefully researched dissertation and learning for ourselves how hard the scholarly craft is. If historians do not learn that lesson because they do not do careful research — if they approach their dissertation as a glorified think-piece and persist in believing that it is much easier to interpret the past authoritatively than it actually is — then they simply do not know how to read the same way as someone who has learned that lesson. They remain blind to the nuance and the qualification with which good scholars make their arguments, or they mistake these subtleties for unimportant details. They try to shoehorn into a conventional category a work that cannot be categorized because it is so original. They read in a book, looking for the key points and getting annoyed that the author did not package them conveniently. Oftentimes, I don’t think the ensuing misrepresentations of other scholars’ work reflect dishonesty — that is, deliberate misrepresentation — but incompetence. Many historians truly believe that their misrepresentations are fair and accurate.
The collapse of standards for both primary- and secondary-source research has facilitated an epidemic of plagiarism. I am not talking about the classic form of plagiarism, which happens to be the easiest to spot: namely, the use of someone else’s words without quotation marks or citations. I am talking about the theft of ideas, which is softer and often more difficult to detect, because it requires a working knowledge of the relevant archives and an understanding of how the scholarly literature on a subject has developed over time. This type of plagiarism is sometimes camouflaged by citations to previous scholars on a relatively minor point of fact while the plagiarist makes off with their broader conceptual breakthrough — the phony attribution functions to cover up the theft. Another common form of plagiarism is source-mining. This is when historians use other scholars’ citations as a blueprint for their own archival research but fail to acknowledge their reliance on the prior work. The pioneers went to the trouble of slogging through box after (uncatalogued) box to find the out-of-the-way nuggets that the derivative historians glibly cited; the latter took a shortcut and pretended that they found those nuggets all on their own.
Peer review, which should have obstructed the collapse of standards and the spread of plagiarism, has done no such thing, because it, too, has broken down. Work that never should have survived peer review has been published on a large scale due to some form of corruption of the process — the reviewers were not competent, or they failed to take the responsibility seriously, or they waved the thing through because it was by a buddy of theirs. Too often, moreover, the rejection by conscientious peer reviewers of a manuscript as wholly unfit for publication does not prevent it from being published, but results in it being published in a different journal with only minor adjustments. The breakdown of peer review has led in turn to an erosion of brands that once might have provided the equivalent of a Good Housekeeping seal of approval. These days, publication by a “respected” journal or a “prestigious” university press is zero guarantee of quality.
The corruption of the process of peer review has been accompanied by the perversion of the norm of collegiality. Pretending that lousy research is good, misrepresenting the work of other historians, and so on ought to be understood as uncollegial behavior. But collegiality today seems to be understood as being a pleasant sort of person to have lunch with. In practice, it means going along to get along — hardly a recipe for scholarly excellence or integrity. In truth, collegiality isn’t about being “nice,” nor is it confined to the colleagues in front of one’s nose. It is about upholding standards on behalf of one’s colleagues across space and time. The scholarly college is an imagined community, with many members we do not know and will never even hear of, some of whom have died and others of whom have not yet been born. Collegiality means honoring one’s obligations to the entire scholarly college, and by doing so, establishing the college’s moral right to make claims on relatively scarce public resources. We are not entitled to a living as historians; taxpayers and tuition payers do not owe us their money. We have to make a case that we are worthy of their investment, and the bare minimum for making that case is maintaining the standards that set us apart from lay historians, that is, the public. Those standards are what preserve the value of our credential, which is held by individuals as part of a collective trust. When individuals debase the credential, they harm their colleagues.
Although more could be added, this is sufficient to indicate what a standards-centered, rather than ideology-centered, critique of the historical profession would look like, as well as to enable readers to spot analogues in their institutions and recognize that what has happened in mine belongs to a broader pattern in American society. This critique is not an endorsement of anti-intellectualism. On the contrary. Genuine scholarly expertise — which, by its nature, is hard-won, valuable, and limited — still exists, and it deserves to be defended. My point is that it needs to be defended inside the profession as well as outside it. When we ourselves value expertise so little that we no longer recognize it in our own ranks, uphold it through peer review, or protect it with a properly defined norm of collegiality, we do not deserve to have our defenses of it taken seriously by the rest of society.

Perhaps the strongest evidence that the professional blight I have just described is symptomatic of a collapse of scholarly standards, rather than of ideological capture, is the fact that two groups of historians often constructed — both inside and outside the profession — as ideological opposites share the same scholarly infirmities. Call it the horseshoe of scholarly incompetence.
On the one hand, there are traditional military and diplomatic historians. Broadly speaking, they see themselves, and are seen by others, as continuing to study old topics (hard power, high politics, dead white men, and so on) using old methods of archival research. On the other hand, there are non-traditional historians of race, gender, and sexuality, who see themselves, and are seen by others, as studying new topics (knowledge production, the personal as political, historically marginalized groups, and so on) using new methods of critical theory. The methodological commitments of the two types are understood to imply corresponding ideological commitments, and vice-versa: “traditional” history is coded as conservative or politically right-of-center (for its defenders: hard, manly, substantial, tangible), while “woke” history is coded as progressive or politically left-of-center (for its critics: soft, effete, flimsy, intangible). Of course this dichotomy is an over-simplification, but it is one that I think many historians would recognize, however grudgingly.
Historians and their allies in these two camps have inverse grievances about each other, which are discouragingly similar to those of Americans fighting the culture wars. “Traditional” historians believe that “woke” historians regard them as knuckle-dragging fascists — dismissing them as methodologically unsophisticated on spurious ideological grounds. “Woke” historians believe that “traditional” historians regard their work as faddish and jargony left-wing activism — dismissing them as methodologically pseudo-sophisticated on spurious ideological grounds. Of course, framing the other as ideologically motivated does ideological work for each, implying that they are more objective than their opponents.
This ideological framing excludes a great deal of inconvenient evidence. For one thing, as we have seen, rigorous archival research is an exercise in applied critical theory — the two are not binary opposites. For another thing, anyone who has read the ninth chapter of That Noble Dream, Peter Novick’s classic history of the historical profession, or paid attention to the “new right” in the United States today, knows that the politics of critical theory, postmodernism, relativism, or whatever one wants to call it are protean, not inherently left-wing. Conversely, anyone who got whiplash from watching left-wing historians who yesterday had denied the possibility of objectivity today insisting on “facts” as against the Trump administration’s endorsement of “alternative facts,” or who sees the tensions between the Marxian left and the Foucauldian left, knows that materialism and canons of proof are not inherently right-wing. Epistemological conservatism and political conservatism are not linked as tightly today as they were during the culture wars of the 1990s.
No less important, the “traditional” fields of diplomatic and military history most often championed by the political right and the methodological conservatives suffer from exactly the same type of scholarly weaknesses as they accuse the non-traditional fields of having. Where an identitarian historian sniffs at archival research as the fetishization of documents, a diplomatic historian who had lunch at several archives in several countries now claims expertise in all. Where an identitarian historian began their academic journey as one of the ninety-two percent of undergraduates who received A’s in women’s, gender, and sexuality studies at Yale, a military historian got their doctorate from a war/strategic studies degree mill. Where an identitarian historian seeks contemporary relevance by reducing the complexity of the past into a tale of white supremacy, a diplomatic historian seeks the same relevance by reading the past backwards through today’s foreign policy categories — and both twist the historical evidence to conform to their present-day agenda. I could go on. And if you think that diplomatic and military historians do not plagiarize, fake their footnotes, and wave their buddies’ lousy work through peer review, then I have an exciting time-share opportunity on a river in Egypt that I’d like to discuss with you.
In other words, as in the broader culture wars, each side of the history wars accuses the other of behavior of which it itself is guilty. Seeing themselves and encouraging others to see them as opposites distracts attention from what they share. Much as the MAGA right and the identitarian left betray a lack of commitment to liberal values, so the supposedly opposite methodological and ideological poles of the historical profession betray a lack of commitment to scholarly standards.
In the historical profession, as in a liberal democracy, there must be a single set of standards that applies to all comers. If historians’ left-wing ideological commitments lead to better scholarship than that produced by historians with right-wing views — as was the case with so much women’s, African-American, and working-class history in the 1950s and 1960s — then great. If historians’ right-wing commitments lead to better scholarship than that produced by historians with left-wing commitments, then great. Ideological commitments do not delegitimate scholarship, but they do not legitimate it either. Scholarship must stand or fall on its own, according to professional standards of originality and importance, evidence and argument, much as citizens are entitled to due process regardless of their political beliefs. If “woke” history does not meet those standards, then it is bad scholarship. If “traditional” history does not meet those standards, then it is bad scholarship.
By all means criticize bad scholarship. But spare us the pretense that any one field or ideology has a monopoly on it. Just as right-wing illiberalism cannot provide a remedy to left-wing illiberalism (or vice-versa), so “traditional” diplomatic and military history cannot provide a remedy to the very real quality-control problems in “woke” history until it addresses its own equally serious quality-control problems (or vice-versa). Would that the problem of the historical profession were “merely” one of left-wing bias! It is much worse than that.

To explain why their profession and universities more broadly are under attack, historians seem to have read Richard Hofstadter on the paranoid style as a how-to guide rather than as an explanatory analysis. They have produced a whole genre of literature blaming exogenous forces beyond their control — notably “neoliberalism” and the Republican Party — for the ills that beset them: fewer college students majoring in history, fewer economically secure jobs for newly minted PhDs in history, the redirection by states and donors of financial support away from the “useless” humanities and towards the “useful” STEM disciplines. If the profession dies, they want the world to know, it was murdered.
There is considerable evidence to support such a narrative. The neoliberal drive to impose market discipline and market values on universities has indeed done enormous damage to academia. It is also true that many Republican politicians are not operating in good faith and propose “cures” that would be no better than the disease. But the murder narrative is incomplete and self-serving — as would be obvious to historians if they were examining anyone but themselves. Every institution facing criticism dismisses its critics as bad-faith hostile outsiders. It is what police departments do when summary executions of black people provoke accusations of racism. It is what the Trumpified Republican Party does when people with eyes and ears accuse it of gross hypocrisy. Institutions like to blame exogenous forces beyond their control for their problems because it enables them to avoid grappling with their own culpability for endogenous forces within their control. Consciences rest easier with a narrative of murder than with a narrative of suicide.
It does not relieve illiberal critics of the historical profession (and of universities more broadly) of their culpability for murder to insist that the profession’s death is also a suicide. The critics must take responsibility for their misconduct; historians must take responsibility for ours. Neoliberalism does not force us to mistake bad work for good. A vast right-wing conspiracy does not make us hire our cronies from graduate school over better scholars. We have done that to ourselves. Like many of our critics, we are not going to be worthy of public trust until we admit this. Nothing better illustrates our transformation into a profession of pseudo-scholars than our institutional unwillingness to do so.
It is little short of a tragedy for the United States that universities have failed to uphold scholarly standards, which mandate the rejection of a bunker mentality. The “no enemies to the left” mentality, like its counterpart on the right, is neither scholarly nor humanistic nor democratic. It is unreflective, inhumane, and authoritarian. Scholarly standards, like liberal-democratic standards, do not come with a two-wrongs-make-a-right exception. Instead of upholding standards, universities are leading the way for other socially important institutions to excuse the abandonment of them.
For any institution, the right to self-governance and the obligation to self-govern are flip sides of the bargain struck with the public: the obligation is the fiduciary responsibility that an institution owes the public in return for the public respecting its autonomy. Academic freedom and tenure exist to protect academics from being policed by outsiders according to non-scholarly standards. They do not exist to relieve academics of the obligation to police ourselves according to scholarly standards. If we default on our obligation to govern ourselves, then we lose the moral high ground to complain when outsiders try to govern us.
When institutions fail to hold themselves accountable, as mine and so many others have, Americans have every right to be angry. Speaking both as a citizen and as a member of an institution, I hope they channel their anger into a determination to make their institutions better and more faithful servants of the public good — not into a determination to destroy their institutions, or to transform corrupt institutions dominated by one political ideology into corrupt institutions dominated by a different political ideology. Such a reform campaign, conducted with the maturity befitting a free people, would require Americans across the political spectrum to examine their own attraction to pseudo-scholarly thinking. If they do not, their republic may disappear into a past with no institution left to study it.