Entering the Vortex
The cold front of academic defunding meets the warm front of AI-fueled research
A strange and unsettling weather pattern is forming over the landscape of scholarly research. For decades, the climate of academic inquiry was shaped by a prevailing high-pressure system: a consensus grounded in the vision articulated by Vannevar Bush in “Science, the Endless Frontier” (1945). That era was characterized by robust federal investment, a faith in the university as the engine of basic research, and a compact that traded public funding for scientific autonomy and the promise of long-term societal benefit. It was a climate conducive to the slow, deliberate, and often unpredictable growth of knowledge, nurtured by a diverse ecosystem of human researchers — the vital “seed stock” of intellectual discovery.
But that high-pressure system is collapsing. A brutal, unyielding cold front of academic defunding has swept across the nation, a consequence of shifting political priorities, populist resentment, and a calculated assault on the university as an institution perceived as hostile to certain political agendas. This is not merely a belt-tightening exercise; it is, for all intents and purposes, the dismantling of Vannevar Bush’s compact, the end of the era of “big government”-funded Wissenschaft. Funding streams for basic research are dwindling, grant applications face increasingly long odds, and the financial precarity of academic careers deters the brightest minds from entering the field. The human capital necessary for sustained, fundamental inquiry is beginning to wither.
Simultaneously, a warm, moisture-laden airmass is rapidly advancing: the astonishing rise of AI-based research tools. Powered by vast datasets and sophisticated algorithms, these tools promise to revolutionize every stage of the research process – from literature review and data analysis to hypothesis generation and the drafting of scholarly texts. As a recent New Yorker piece on AI and the humanities suggests, these AI engines can already generate deep research and coherent texts on virtually any subject, seemingly within moments. They offer the prospect of unprecedented efficiency, speed, and scale in the production of scholarly output.
The collision of these two epochal weather systems — the brutal cold front of academic defunding and the warm, expansive airmass of AI-based research tools — is creating an atmospheric instability unlike anything the world of scholarship has ever witnessed. Along the front where these forces meet, a series of powerful and unpredictable tornadoes is beginning to touch down, reshaping the terrain of knowledge production in real time.
In the short to medium term, these AI-fueled tornadoes may appear remarkably beneficial, even salvific. As the traditional structures of academic funding and human researcher development erode, AI tools could provide a seemingly more-than-adequate backfill. Even with fewer human hands toiling in the archives or the lab, AI can process and synthesize existing information at a scale and speed previously unimaginable. Research outputs, measured by the sheer volume of papers and reports, may not only avoid implosion but accelerate. The initial quality of this AI-generated research might even appear to improve, as algorithms become more adept at identifying patterns, drawing connections, and presenting findings in a coherent and persuasive manner. Academic journals, facing a potential decline in submissions from cash-strapped human researchers, may find their pipelines refilled with sophisticated, AI-authored manuscripts. The immediate crisis of defunding, the hollowing out of the human research corps, could be masked by a surge in automated intellectual production. Even if resource-starved universities implode as institutions, optimists will be able to point to this burgeoning output as evidence of continued, perhaps even enhanced, intellectual productivity.
This short-term outlook, however, masks a deeper and more insidious problem. The reliance on AI as a replacement for the “seed stock” of human researchers carries a profound long-term risk. AI, for all its power, is fundamentally a system of pattern recognition and synthesis based on existing data. It excels at processing, organizing, and drawing inferences from the knowledge that humans have already created and digitized. What it does not do, at least not yet, is generate truly novel insights, formulate entirely new questions that break from existing paradigms, or explore areas of inquiry for which no significant digital footprint yet exists.
The truly groundbreaking discoveries, the paradigm shifts, the explorations of entirely new intellectual territories have historically emerged from the messy, intuitive, often inefficient process of human curiosity, serendipity, and deep, sustained engagement with the unknown. These are the products of human minds, shaped by lived experience, guided by intuition, and capable of leaps of imagination that transcend the logical extrapolation of existing data. This is the fundamental role of the human researcher: the generator of original inputs, the explorer of the intellectual frontier. And while it’s true that the vast majority of actually existing academics never produce such insights either, the current academic system at least throws the dice on hundreds of thousands of individuals, on the chance that a few of them will in fact deliver such breakthroughs.
If the current trend of defunding and the resulting decline in the number of human researchers continue unabated, however, we risk a future in which the creation of truly new knowledge slows to a trickle and eventually dries up altogether. The AI engines, for all their processing power, will be left to endlessly re-process and re-synthesize the same finite pool of human-generated knowledge. The “raw materials” upon which their automated research findings depend — the original observations, the novel experimental data, the groundbreaking theoretical frameworks — will become increasingly scarce. At some point, perhaps gradually, perhaps precipitously, the quality of research may go off a cliff, not because the AI becomes less capable of processing, but because the wellspring of original human insight from which it draws has run dry. (Whether “synthetic data,” itself produced by AIs, can backfill this loss of original human-generated data remains unclear.) If this comes to pass, the tornadoes of AI-driven productivity, initially so impressive, will eventually find themselves churning over barren ground, generating increasingly repetitive and ultimately sterile results.
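This re-synthesis worry has a well-studied analogue in machine learning, sometimes called “model collapse”: generative models trained repeatedly on their own output tend to lose the diversity of the original data. The toy sketch below is a deliberate oversimplification, not a model of any real AI system; the Gaussian form and the sample size are assumptions chosen purely for legibility. Each generation, a trivial “model” is refit to its own synthetic output, with no fresh human-generated data entering the loop:

```python
import random
import statistics

random.seed(0)
SAMPLES_PER_GENERATION = 10  # deliberately small, so the drift is visible

# Generation 0: "human-generated" data with genuine diversity.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES_PER_GENERATION)]

for generation in range(101):
    mu = statistics.fmean(data)     # fit the "model": estimate the center...
    sigma = statistics.stdev(data)  # ...and the spread of the current data
    if generation % 10 == 0:
        print(f"generation {generation:3d}: stdev = {sigma:.4f}")
    # The next generation trains only on synthetic output of the fitted model.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GENERATION)]
```

On a typical run, the standard deviation (a crude stand-in for the diversity of ideas) decays toward zero within a hundred generations, because each round of refitting compounds estimation error while adding nothing new. In extensions of this toy, mixing even a modest share of fresh “generation 0” data back in each round tends to arrest the collapse, which is precisely the role this essay assigns to human researchers.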
Furthermore, the collapse of investment in the human capital of research excellence carries another devastating potential consequence: the loss of our collective capacity for quality detection and control in the realm of knowledge. The ability to discern the truly significant from the trivial, the rigorously supported from the specious, the genuinely novel from the merely derivative, is a skill honed through years of training, practice, and immersion in the culture of critical inquiry that research universities have traditionally fostered. It is the product of mentorship, of peer review within human communities, and of an internalized sense of scholarly standards.
As the number of experienced human researchers dwindles, so too does this collective capacity for judgment (in Kant’s sense). Who will be left to critically evaluate the output of the AI, to identify its subtle biases, to recognize the limitations of its data sets, to push back against the confident but ultimately hollow pronouncements of an algorithm? The mechanisms of peer review, currently a human-driven process, could be supplanted by automated systems, creating a feedback loop that reinforces existing patterns rather than challenging them. Without a robust community of highly trained human minds capable of independent critical assessment, society risks being overwhelmed by a flood of high-volume, superficially polished, yet fundamentally unoriginal or even flawed AI-generated “research,” lacking the depth, rigor, and genuine insight that only human inquiry can provide. In essence, this is an epistemic analog of “the alignment problem” in ethics, except that humans will presumably always retain some innate moral sensibility, whereas intellectual capacity can be radically degraded. In the limit case, no one will be left who is sufficiently well trained to judge whether the AI-generated outputs are any good.
In this scenario, the tornadoes that initially appear as engines of massively increased productivity could devolve into forces of intellectual chaos, churning up a landscape of undifferentiated and ultimately unreliable information, with no human capacity left to discern truth from simulation. This epistemic vortex, fueled by the collision of defunding and uncritical reliance on AI, poses a profound threat not only to the future of scholarly research but to the very possibility of a society capable of generating, evaluating, and acting upon reliable knowledge. The short-term allure of automated research risks blinding us to the long-term catastrophe of intellectual sterility and the loss of the human capacity to know.
I think it’s important now to redefine the field — or practice — of the humanities before AI culture deems it a category mistake. It is the quality of our questions that distinguishes us from bots, not our answers or theirs. The transformation of human epistemology driven by AI, though certainly a landmark phenomenon, misconceives the (un)conscious sources of ontology that make us fully human, sources that are being swiftly forgotten in our breakneck over-determination of AI culture. That kind of mindset would certainly justify a planetary war for scarce resources. If societies were to develop a techno-feudal instrumentalism disconnected from the human wellsprings of equality, cooperation, and self-sustainment, that would surely leave us wandering in the kind of endless state of waiting for Godot that Beckett portended.
I'm still reading up on “elite overproduction”. It could be that, by sweeping through the economy, AI reduces the value of elite positions and makes the few that remain intensely competitive. So the best and brightest (or well connected) continue innovative research, while the rest of us operate upstream, filtering the flood, or downstream, in application and implementation, or are forced to seek work elsewhere.
Less status for the majority, but it might be functional.