I recently had an independent researcher ask me to endorse him on arXiv so he could upload a new preprint. On the surface, the preprint appeared relevant to my past work in neuroethics, and I was excited to take a look. Upon closer inspection, I found the work to be mostly AI-generated, and it notably mis-cited my own work.

Incorrect authors with a mostly correct summary of the work

A peek at the references section showed the same incorrect authors and a link to a completely different preprint published seven months earlier.

Incorrect bibliography entry for Schroder et al. 2025

I raised this with the original researcher, who has since made corrections. A peek at the paper's associated GitHub repository confirmed a larger blast radius of incorrect citations:

fix: citation integrity audit — remove 3 fabricated entries, fix 15 errors across 2 verification passes

Two-pass bibliography audit of Zenodo preprint (DOI: 10.5281/zenodo.[REMOVED]):

  • Pass 1: Fixed 11 errors including fabricated wu2024adversarial entry, wrong Schroder arxiv ID, wrong Kohno authors, wrong Lazaro-Munoz first author
  • Pass 2: Found 2 more fabricated entries (denning2009neurostimulators, landau2020neurorights), reverted Gemini’s incorrect Meng DOI change, fixed mergelabs2026 author, fixed rushanan2014sok entry type
  • All 35 remaining bib entries verified via DOI/Crossref, URL fetch, PubMed, arxiv, and author publication pages
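Verification like the audit quoted above can be partly automated. Here is a minimal sketch of just the author-comparison step, assuming metadata for each DOI has already been fetched (in practice, from the Crossref REST API's `/works/{doi}` endpoint); all names and entries below are hypothetical illustrations, not taken from the paper.

```python
# Sketch of one step in a citation audit: comparing the authors a
# bibliography entry claims against the authors registered for the DOI.
# The "registered" list would normally come from Crossref metadata;
# it is supplied inline here so the comparison logic is self-contained.

def normalize(name: str) -> str:
    """Reduce a name to its lowercased family name for a loose comparison."""
    return name.strip().split()[-1].lower()

def authors_match(bib_authors: list[str], registered_authors: list[str]) -> bool:
    """True when both lists agree on family names, in the same order."""
    return [normalize(a) for a in bib_authors] == [normalize(a) for a in registered_authors]

# Hypothetical bibliography entry with a wrong first author.
claimed = ["J. Smith", "A. Jones"]
registered = ["Hannah Schroder", "A. Jones"]
print(authors_match(claimed, registered))                      # False
print(authors_match(["H. Schroder", "A. Jones"], registered))  # True
```

Family-name matching is deliberately loose: it tolerates initials versus full given names, which are a common source of benign variation, while still catching the wholesale author substitutions described in the commit message.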

I also pointed out that arXiv's own policies require disclosure of AI tools used in the research process, and that his work would be rejected in its current state. The author later added a statement via GitHub:

Large language models (Claude, Gemini, ChatGPT) were used during development for: literature review assistance, code generation for data analysis and visualization, editorial review, and cross-validation of technical claims.

Claude helpfully re-wrote the AI disclaimer in the first person.

I declined the opportunity to endorse his profile on arXiv.