AI Surpasses Peer Review: End of Human Science?

Published on April 01, 2026 | Translated from Spanish

At the ICLR 2025 conference, a historic threshold was crossed. A team demonstrated that an AI system, named AI Scientist, can act as an autonomous researcher: it formulates hypotheses, carries out the research process, and writes a scientific article that passes peer review for publication. This milestone, achieved in just 15 hours and for 140 dollars, is not merely a technical advance. It is an earthquake that shakes the foundations of the scientific enterprise and forces us to rethink the future of knowledge, authorship, and the researcher's own work. 🤯

[Image: A robotic arm writes on a blackboard while a background of data and formulas symbolizes knowledge generated by AI.]

Mechanics of the milestone: autonomy, speed, and marginal cost ⚙️

The AI Scientist system is not a single model but an architecture that orchestrates several AI agents, such as Claude Sonnet and GPT-4o, to emulate each phase of the scientific method. Its autonomy is the key point: it designs the theoretical framework, plans and executes computational experiments, analyzes the results, and writes the final manuscript. The feat lies not in the complexity of the article it produced, a workshop paper, but in the completeness of the process and its validation. This contrasts radically with human timelines of months or years, and with the high costs of salaries, infrastructure, and time, signaling a deep disruption in the economics of research.
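The orchestration described above can be pictured as a staged pipeline in which each agent consumes the previous agent's output. The sketch below is purely illustrative: the class, function names, and the stubbed `call_model` are assumptions for this article, not the actual AI Scientist codebase, and a real system would replace the stub with calls to an LLM API.

```python
from dataclasses import dataclass, field

def call_model(agent: str, prompt: str) -> str:
    """Stand-in for a call to an LLM API (e.g. Claude Sonnet or GPT-4o)."""
    return f"[{agent}] output for: {prompt}"

@dataclass
class ResearchPipeline:
    """Hypothetical four-stage loop emulating the scientific method."""
    topic: str
    artifacts: dict = field(default_factory=dict)

    def run(self) -> dict:
        # Each stage feeds the next: hypothesis -> experiments -> analysis -> manuscript.
        self.artifacts["hypothesis"] = call_model(
            "ideator", f"Propose a hypothesis about {self.topic}")
        self.artifacts["results"] = call_model(
            "experimenter", f"Run experiments for: {self.artifacts['hypothesis']}")
        self.artifacts["analysis"] = call_model(
            "analyst", f"Analyze: {self.artifacts['results']}")
        self.artifacts["paper"] = call_model(
            "writer", f"Draft a manuscript from: {self.artifacts['analysis']}")
        return self.artifacts

pipeline = ResearchPipeline("regularization in small transformers")
artifacts = pipeline.run()
```

The point of the structure is that no stage needs a human in the loop: validation only happens at the end, when the finished manuscript meets peer review.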

The human dilemma: curators or obsolete? 🤔

This success opens profound debates. If a non-human agent can generate publishable knowledge, what validity and credibility does the peer review process retain? Does science risk becoming a mass production of plausible but empty articles lacking human insight? Economic accessibility is a double-edged sword: it democratizes research, but it could also flood journals with synthetic content. The human scientist's role could evolve toward formulating deep questions, designing research ethically, and critically curating machine-generated knowledge. The central question is no longer whether AI can do science, but what kind of science we want it to do.

Does the AI's ability to pass peer review mark the beginning of a posthuman science, or simply an evolution in scientific methodology?
