Overview
- Analyzing preprints from 2018 to mid-2024, the study found post-adoption output jumps of more than 36% on arXiv, about 53% on bioRxiv, and nearly 60% on SSRN.
- AI-assisted manuscripts displayed higher linguistic complexity, yet their eventual journal acceptance rates were lower than those of similarly complex human-written papers.
- Productivity gains were largest for authors likely to be non-native English speakers, with submissions nearly doubling on bioRxiv and SSRN and rising by over 40% on arXiv for those with Asian names at Asian institutions.
- The team identified likely AI use by training a detector on pre-2023 abstracts rewritten with GPT-3.5 (a toy sketch of that classification setup follows this list), and they caution that heavy human editing and publication lags can bias both detection and acceptance-based quality measures.
- AI-assisted papers cited a broader and more recent mix of sources, and experts say LLMs may aid peer review by catching technical errors even as rising submissions add pressure on reviewers.
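The detection step described in the fourth bullet is, at its core, a supervised text-classification task: original pre-2023 abstracts serve as human-written examples, and GPT-3.5 rewrites of those same abstracts serve as AI-assisted examples. The sketch below illustrates that idea under assumed choices (TF-IDF features, logistic regression, toy example texts); none of these specifics come from the study itself.

```python
# Minimal sketch of the detection idea: train a binary classifier on
# pre-2023 abstracts (label 0) and GPT-3.5-rewritten versions of those
# abstracts (label 1), then score new abstracts for likely AI use.
# The feature choice (TF-IDF) and model (logistic regression) are
# illustrative assumptions, not the study's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the training corpus; the real study used large sets
# of pre-2023 abstracts and their GPT-3.5 rewrites.
human_abstracts = [
    "We study convergence of stochastic gradient methods under weak assumptions.",
    "This paper measures wage effects of minimum-wage changes using county data.",
]
rewritten_abstracts = [
    "In this study, we rigorously investigate the convergence properties of stochastic gradient methods under notably weak assumptions.",
    "This paper comprehensively examines the wage effects of minimum-wage changes, leveraging detailed county-level data.",
]

texts = human_abstracts + rewritten_abstracts
labels = [0] * len(human_abstracts) + [1] * len(rewritten_abstracts)

# Word unigrams and bigrams capture stylistic cues such as phrasing and
# vocabulary shifts that LLM rewriting tends to introduce.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Probability that a new abstract looks LLM-rewritten, per this toy model.
new_abstract = "We present a novel and comprehensive framework for robust estimation."
print(detector.predict_proba([new_abstract])[0][1])
```

Because the positive class is defined by machine rewrites of human text, heavy human editing of genuinely AI-assisted drafts can push them back toward the human class, which is one reason the authors flag detection bias as a limitation.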