From: Confabulated references in the age of AI: contamination of the biomedical scientific literature

Preventive and corrective strategies should be structured across complementary domains and stakeholder levels.

| Stakeholder | Technical controls | Institutional/Editorial frameworks | Ethical education and training | Cross-institutional collaboration |
|---|---|---|---|---|
| Authors | Validate all DOIs and PubMed IDs; manually verify citations. | Include AI disclosure statements in manuscripts. | Attend AI integrity workshops. | Participate in shared bibliometric initiatives. |
| Reviewers | Cross-check a random subset of references during peer review. | Report unverifiable citations to editors. | Encourage authors’ transparency. | Support creation of shared blacklists of fabricated entries. |
| Editors and journals | Deploy automated bibliographic validation tools. | Adopt explicit AI-use policies; enforce author accountability. | Offer reviewer training on AI detection. | Collaborate with COPE and Crossref on metadata audits. |
| Publishers | Integrate LLM-output screening pipelines. | Require structured author contribution and verification statements. | Provide editorial-board guidance documents. | Coordinate global data-integrity task forces. |
| Readers and institutions | Verify doubtful citations through digital identifiers. | Promote awareness campaigns about AI confabulations. | Foster critical appraisal skills. | Encourage open post-publication peer review. |
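The DOI-validation control assigned above to authors, editors, and readers can be sketched as a small script. This is a minimal illustration, assuming the public Crossref REST API (`https://api.crossref.org/works/{doi}`); the function names are illustrative and not taken from the source, and a production screening pipeline would add rate limiting, retries, and PubMed ID checks.

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Loose syntactic pattern for a DOI: "10.", a registrant prefix, "/", a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_is_well_formed(doi: str) -> bool:
    """Cheap syntactic screen before any network lookup."""
    return bool(DOI_PATTERN.match(doi.strip()))

def crossref_url(doi: str) -> str:
    """URL of the Crossref REST API metadata record for a DOI."""
    return "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="")

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref holds a metadata record for the DOI.

    An HTTP 404 means no registered record exists -- a red flag that the
    citation may be confabulated and needs manual verification.
    """
    if not doi_is_well_formed(doi):
        return False
    try:
        with urllib.request.urlopen(crossref_url(doi), timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False
```

A syntactically valid DOI that fails the Crossref lookup is not proof of fabrication (registration lag and other registries exist), so a tool like this should flag entries for human review rather than reject them outright.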

LLM: large language model.