Why Universities Have the Most to Gain From the AI Moment

Mar 5, 2026

A tidal wave of AI is coming for science.

The frontier labs have the compute, the capital, and the engineering talent. AI systems are already solving protein structures, proposing new materials, and automating scientific workflows. Entire startups now exist to build AI-driven laboratories that generate and test hypotheses continuously (Periodic Labs, FutureHouse, Lila AI, etc.). The same story applies to education — AI tutors will personalize learning, lectures will become unnecessary, and credentialing will erode.

In this telling, centuries-old institutions may linger, but they will slowly lose relevance.

I can see why people believe this.

But there is a hidden assumption inside the narrative.

Note: I’m going to focus on science in this post, even though the tweet above is about colleges more generally. I think the argument here generalizes to the value of ideas across fields.

The hidden assumption

The assumption is that the central constraint in science is execution.

For most of history, this was true.

Experiments were slow. Instruments were expensive. Analysis required enormous amounts of human labor. Scientific progress depended on the ability to gather talent, equipment, and funding in one place.

Universities became the dominant institutions of science largely because they solved that coordination problem.

AI changes this constraint.

If AI systems compress months of research work into days, the cost of execution collapses. Running analyses, building models, generating code, or exploring hypotheses becomes dramatically easier. The number of ideas that can be tested in a given amount of time increases by orders of magnitude.

This accelerates science everywhere.

And that leads to a crucial realization.

Frontier labs can build intelligence – but they cannot monopolize it

The frontier AI labs appear to have enormous advantages: compute, capital, and engineering talent.

But intelligence is not a resource that can remain scarce.

Once the capability exists, competition forces it outward. Techniques spread, models improve, open systems emerge, and the price of intelligence falls.

Frontier labs can build intelligence.

But they cannot monopolize it.

Even if the most advanced models remain expensive to train, the useful capabilities they create will spread rapidly. Smaller labs, startups, and universities will gain access to increasingly powerful tools for reasoning, modeling, and experimentation.

In other words, the same forces that appear to weaken universities also weaken everyone else’s advantages.

Execution becomes cheap everywhere. Small labs can do big science. A three-person team with the right question and access to modern AI tools can now accomplish what was once possible only for the biggest labs. This doesn’t eliminate the advantage of scale – it just means scale is no longer sufficient.

And when execution stops being the bottleneck, something else becomes scarce.

When answers are cheap, taste becomes the bottleneck

If AI dramatically increases the rate at which ideas can be tested, the limiting factor in science shifts.

The new bottleneck becomes choosing the right questions.

Scientists often refer to this ability informally as taste.

Scientific taste is difficult to define but easy to recognize. It is the ability to identify questions that are simultaneously important and tractable. It is knowing when a surprising result reveals something fundamental and when it is merely a curiosity. It is the instinct for which direction to explore when the map of a field is still blank.

AI systems may become extremely good at generating hypotheses and exploring large search spaces. But the relevance of scientific discovery ultimately depends on human judgment: what we consider meaningful, important, or worth understanding.

In a world where answers are cheap, taste becomes the bottleneck.

The scientist who chooses the right question will always outperform the scientist who merely runs experiments faster.

But won’t AI develop taste too?

The strongest version of this objection isn’t about mining existing datasets — it’s about the closed-loop AI labs that design and run their own experiments. Systems like these will generate new data, test hypotheses, and iterate faster than any human team. But they still need an objective.

They still explore within a defined problem space. Someone has to decide what’s worth investigating.

The discoveries that create entirely new fields tend not to work that way. CRISPR came from studying bacterial immune systems. Optogenetics came from light-sensitive algae proteins. mRNA vaccines came from unusual RNA chemistry. These weren’t efficient searches through a well-defined space. They were researchers wandering through domains with no obvious connection to the applications that eventually emerged.

Could AI eventually learn that kind of open-ended curiosity?

Maybe. But even then, the question of what’s worth understanding — what counts as meaningful, what matters to us — will remain a human judgment for some time. That’s what taste actually is: not just finding the answer, but deciding what deserves a question.

Universities may become more important because of AI

Large technology companies are extraordinarily good at scaling engineering systems. They are optimized for building products, running massive infrastructure, and executing well-defined goals quickly.

Universities operate differently.

They concentrate people who are curious, independent, and often slightly contrarian. They provide environments where individuals can pursue questions that are not immediately useful, profitable, or even well-defined. They allow ideas from different fields to collide in ways that would be difficult to engineer deliberately.

Universities are also uniquely good at attracting a particular kind of person: someone who wants autonomy over their work. Many talented scientists do not want to be told exactly what problem to solve or what milestone to hit next quarter. They want the freedom to explore.

That desire for autonomy has always been one of academia’s strongest recruiting advantages. In an AI-accelerated world, it may become even more important.

If execution becomes cheap, the institutions that matter most will not necessarily be the ones with the most resources. They will be the ones where unusual ideas appear most often.

Universities are already designed for that.

But they must adapt to take advantage of it.

What universities should actually do

If AI dramatically accelerates scientific execution, universities should reorganize around the things they uniquely provide and get out of researchers’ way as fast as possible.

Remove the bureaucratic drag. The biggest friction in academia is not intellectual. It is administrative: institutional review timelines, procurement cycles, IT restrictions, HR processes, grant application seasons. If the cost of execution is collapsing, then the institutions that remove administrative drag will benefit disproportionately. A lab that can move from idea to experiment in a week rather than a semester has a massive compounding advantage. Most universities are not even close to this, and it is a prime area for AI to help. Universities are burdened by increasingly complicated federal compliance requirements, which are partly responsible for the massive growth in administrators (yes, like everything else, this too is Congress’s fault). AI should streamline much of that work. The simplest thing universities can do is decentralize: let each department and center figure it out, and treat that variation as a natural experiment to find the most efficient administrative system.

Hire for taste, not volume. The tenure system rewards people who were good at the old game: publication count, grant dollars, h-index. If taste is what matters, universities need to figure out how to identify and reward it. That probably means hiring more people on the basis of one or two brilliant ideas rather than a long CV, and being more comfortable with researchers who took unconventional paths to get there. This is the hardest change because it cuts against decades of institutional incentive design. But it is also the most important one. This could start now at every faculty search committee.

Lean into the weird. The real advantage universities have is not merely that they allow exploration. It is that they can support things no company would touch because there is no obvious product. The correct response to AI is not to become more like industry. It is to become more like the idealized version of academia — more curiosity-driven, more tolerant of failure, more willing to let someone spend years on something that might turn out to be a dead end. That tolerance is what produced CRISPR, optogenetics, and mRNA vaccines. It is also what will produce whatever comes next.

Advocate for funding changes. Universities account for only a fraction of total research spending, but they have enormous influence over how that money flows. The current system is broken in well-documented ways. Top researchers spend half their time writing grant applications. Funding skews heavily toward older, established investigators — the NIH now funds five times as many researchers over 50 as under 40, the inverse of the ratio in 1980. The result is that scientists start directing their own work later and later, eating into the years when their lifetime innovation potential is highest. Smaller teams consistently produce more disruptive research than large ones, yet the incentives push toward ever-larger labs and ever-longer author lists. Universities should be leading the fight for alternatives: fund people rather than projects and let them freely associate. If projects must be funded, use a lottery-based system that ends the grant-writing arms race, and explicitly back high-risk ideas from researchers who lack conventional credentials (the sketch below shows what such a lottery could look like). If AI makes each dollar of research funding go further, the way we allocate those dollars matters more than ever.
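To make the lottery idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the field names, the high-risk set-aside, the dollar figures), and a real mechanism would need eligibility screens, conflict-of-interest rules, and auditing that this toy ignores.

```python
import random

def grant_lottery(proposals, budget, high_risk_share=0.25, seed=None):
    """Toy lottery-based funding: every proposal that passes a basic
    soundness screen enters a random draw, with a fixed share of the
    budget reserved for high-risk ideas. All fields are hypothetical."""
    rng = random.Random(seed)
    eligible = [p for p in proposals if p["sound"]]  # minimal screen, no ranking
    rng.shuffle(eligible)

    funded = []
    reserved = budget * high_risk_share
    # Spend the high-risk set-aside first.
    for p in eligible:
        if p["high_risk"] and p["cost"] <= reserved:
            funded.append(p)
            reserved -= p["cost"]
            budget -= p["cost"]
    # Then fund the rest of the shuffled pool until the money runs out.
    for p in eligible:
        if p not in funded and p["cost"] <= budget:
            funded.append(p)
            budget -= p["cost"]
    return funded

# Example: four proposals (costs in $M) competing for a $2M pot.
pool = [
    {"id": "A", "cost": 0.8, "sound": True,  "high_risk": False},
    {"id": "B", "cost": 0.5, "sound": True,  "high_risk": True},
    {"id": "C", "cost": 0.9, "sound": False, "high_risk": True},  # fails the screen
    {"id": "D", "cost": 0.6, "sound": True,  "high_risk": False},
]
print([p["id"] for p in grant_lottery(pool, budget=2.0, seed=7)])
```

The design choice that matters is that the screen is binary: once a proposal clears it, a more polished application buys nothing, which is exactly what kills the grant-writing arms race.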

Universities should also treat AI tool literacy as foundational. In a future where AI can do the work of today’s lawyers, architects, and scientists, the question is what humans are for. The answer is that they need to be able to teach themselves anything fast and build anything they can imagine. I think most students didn’t enter their major with the end job in mind. Half the people I know who went to law school wanted to do public good. Unlock that potential. Every field becomes more valuable, not less, when its practitioners can act on what they know.

In the past, “I have an idea for an app” was a thought that died on contact with reality unless you could code or hire someone who could. After decades of this, most of us stopped indulging the exercise entirely because it was fruitless. That barrier is gone now. A student who cares about urban infrastructure can build a planning tool, canvass neighborhoods, and organize to change something real. A student who wants to improve civic participation in their hometown can build the thing that does it — not write a paper about it, not propose it in a grant application, but simply ship it. The future will be written by the next generations, and in the short term, they will still go to college.

This is what universities should be preparing people for: not to perform a specific profession that AI will transform beyond recognition, but to identify real problems and build real solutions to them. The barrier between having an idea and testing it should effectively disappear for every student on campus. Campuses like UC Berkeley already provide students with licenses for tools like Gemini and Cursor. This should be among the first things students encounter after welcome week.

Courses should increasingly resemble studios rather than lecture halls. Students work on real problems, build systems, test ideas, and learn by doing. Faculty act less as lecturers and more as mentors who help students develop judgment. In other words, universities should focus explicitly on cultivating taste.

And in an ecosystem increasingly flooded with AI-generated content, the reputational capital that top institutions have built over centuries becomes a more valuable signal, not a less valuable one. Credibility is a competitive advantage, and as scientific output accelerates, that advantage may only grow.

The opportunity

The coming decade could produce the fastest period of scientific progress in human history.

AI will dramatically increase the speed at which ideas can be generated, tested, and refined. Entire fields may move forward in months rather than decades.

This transformation does not eliminate the need for scientists.

It changes what scientists are for.

In the past, much of science involved executing experiments and analyses. In the future, the central task of science may become identifying the questions worth exploring in the first place and making meaning out of the facts we learn.

Universities, if they adapt, are well positioned for that role.

Frontier labs may build the most powerful AI systems. But intelligence will not remain scarce. And when intelligence becomes abundant, the most valuable skill is knowing what to do with it.

Addendum

Karpathy

Andrej Karpathy has described a vision for massively parallel AI agents exploring thousands of research directions simultaneously — an army of automated PhD students branching off a shared codebase.

This is genuinely powerful, but it isn’t simulating PhD students. PhD students don’t start from a seed with a defined objective. Even with the most explicit instructions, they wander off course and spend months (or years) figuring out what question to ask in the first place. Karpathy’s agents simulate the execution phase of research: the grind after you have a question. That will certainly help PhD students, but it isn’t what they do except in the most limited settings (e.g., trying to beat benchmarks to get into NeurIPS). Real discoveries come from people who weren’t searching a defined space at all.
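To make that distinction concrete, here is a caricature of the architecture in code. The run_agent function and the seed objectives are hypothetical stand-ins, not anything Karpathy has published; the structural point is that every branch starts from a question a human already wrote down.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(objective: str) -> str:
    """Hypothetical stand-in for an AI agent grinding on a fixed objective."""
    return f"best result found for: {objective}"

# Every branch starts from a question a human already chose.
seed_objectives = [
    "improve benchmark X by 2%",
    "ablate component Y of the shared codebase",
    "reproduce paper Z at one-tenth scale",
]

# Fan out the execution phase in parallel, one agent per objective.
with ThreadPoolExecutor(max_workers=len(seed_objectives)) as pool:
    results = list(pool.map(run_agent, seed_objectives))

# Note what never happens here: no agent asks whether X, Y, or Z
# were worth working on in the first place.
for r in results:
    print(r)
```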

Hossenfelder

Sabine Hossenfelder has a more direct counterargument.

She argues that curiosity-driven research has become a shield for unaccountable pseudo-research — pointing to decades of particle physics that produced no technological returns. This is a fair critique, but the examples she cites (ever-larger colliders, speculative theories with no experimental contact) are failures of taste, not failures of curiosity. And colliders are exactly the opposite of what I’m describing: engineering optimization of a known approach, not parallel search, not choosing which questions to ask. AI will likely help here too: if the cost of testing ideas collapses, speculative work gets evaluated faster, and bad ideas die quicker. Hers is a narrow critique of a field that has been optimizing for bad incentives. So she’s not wrong, but it’s not what I’m talking about.

Mental health spending in the context of AI

We’re told that we’ve spent a lot of money on mental health research without a huge difference to show for it. The former director of NIMH, Thomas Insel, agrees. But NIMH spent $20 billion over the 13 years he was director — roughly $1.5 billion per year. OpenAI alone spent $7 billion on compute last year, most of it going to research. So everything we did get from mental health research came shockingly cheap by AI-lab standards. Imagine if all those researchers were augmented with AI. Zooming out, the United States spends roughly $900 billion a year on research across all sectors. Universities perform only 11% of that total, while industry performs 78% — and those figures include clinical trials and all health-driven research. We simply don’t spend that much money on exploratory research, and AI will raise the return on that spending.

Further reading: Jascha Sohl-Dickstein’s talk thread. Dario Amodei’s “Machines of Loving Grace” and “The Adolescence of Technology.” Sam Altman’s “The Intelligence Age.” Royal Swedish Academy of Sciences, “Stockholm Declaration on Research Assessment” (June 2025). “The rise and fall of science research as we know it,” a Cornell study in Science (Dec 2025).