Researchers Measure How Much AI Is Used to Write Scientific Articles

Published on January 06, 2026 | Translated from Spanish
[Image: Bar graph showing the percentage increase in AI-associated linguistic patterns in academic publications in recent years, overlaid on a background of code and a scientific document.]

The scientific community seeks to quantify an emerging reality: the use of large language models to produce academic texts. This phenomenon presents a duality, with benefits in efficiency but also profound threats to the foundations of knowledge. 🔬

A Study Reveals Worrying Trends

The research not only confirms what many suspected; it provides concrete data: a clear increase in certain linguistic patterns and stock phrases associated with automatic generation. While these systems can help with drafting or synthesizing information, speeding up the publishing process, their indiscriminate use puts the originality and soundness of conclusions at risk.
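
To make the approach concrete, the pattern-frequency idea can be sketched in a few lines of code. The marker list, function name, and scoring below are hypothetical illustrations of this kind of measurement, not the researchers' actual lexicon or method:

```python
# Illustrative sketch: count "marker" phrases that tend to be
# over-represented in LLM-generated prose. The phrase list is
# hypothetical, not the study's actual lexicon.

MARKERS = ["delve into", "it is important to note", "in the realm of"]

def marker_rate(text: str) -> float:
    """Marker hits per 1,000 words: a crude signal of AI-style phrasing."""
    lowered = text.lower()
    n_words = len(lowered.split())
    if n_words == 0:
        return 0.0
    hits = sum(lowered.count(m) for m in MARKERS)
    return 1000 * hits / n_words
```

Averaging such a rate over each year's abstracts and plotting the trend would yield the kind of year-over-year increase the study reports.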

Key Impacts Identified:
  • Accelerating Workflow: Researchers can produce drafts or summaries more quickly.
  • Eroding Genuine Authorship: The text loses the personal stamp and critical rigor of the human author.
  • Generating Circular Content: The literature can fill with repetitive and superficial ideas.

The real problem arises if the last line of defense, peer review, weakens.

The Crucial Role of Human Reviewers

The peer-review system acts as the main filter for detecting hollow, AI-generated texts, and reviewers' work is essential to maintaining standards. The danger grows sharply if reviewers themselves start relying on artificial intelligence tools to write their reports, closing a vicious circle of automation.

Risks if Review is Automated:
  • Loss of Quality Control: Without human judgment, articles with methodological flaws slip through.
  • Homogenization of Discourse: Science becomes an echo of itself, without real innovation.
  • Credibility Crisis: The community and the public stop trusting publications.

A Modern Dilemma for Academia

The community faces a crossroads. On one hand, it has a powerful tool that can drive progress. On the other, it must manage the risk that this tool ends up dominating scientific discourse, to the point that no one can distinguish the authentic from the generated. The challenge is to use AI without allowing it to rewrite the rules of the game. ⚖️