﻿<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="editorial">
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Explor Med</journal-id>
<journal-id journal-id-type="publisher-id">EM</journal-id>
<journal-title-group>
<journal-title>Exploration of Medicine</journal-title>
</journal-title-group>
<issn pub-type="epub">2692-3106</issn>
<publisher>
<publisher-name>Open Exploration Publishing</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.37349/emed.2026.1001385</article-id>
<article-id pub-id-type="manuscript">1001385</article-id>
<article-categories>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Confabulated references in the age of AI: contamination of the biomedical scientific literature</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-0335-4165</contrib-id>
<name>
<surname>Fiorillo</surname>
<given-names>Luca</given-names>
</name>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role content-type="https://credit.niso.org/contributor-roles/investigation/">Investigation</role>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing—original draft</role>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing—review &amp; editing</role>
<xref ref-type="aff" rid="I1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="I2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="editor">
<name>
<surname>Farrer</surname>
<given-names>Lindsay A.</given-names>
</name>
<role>Academic Editor</role>
<aff>Boston University School of Medicine, USA</aff>
</contrib>
</contrib-group>
<aff id="I1">
<sup>1</sup>Department of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy</aff>
<aff id="I2">
<sup>2</sup>Department of Dental Research Cell, Dr. D. Y. Patil Dental College &amp; Hospital, Dr. D. Y. Patil Vidyapeeth (Deemed to be University), Pune 411018, Pimpri, India</aff>
<author-notes>
<corresp id="cor1">
<bold>
<sup>*</sup>Correspondence:</bold> Luca Fiorillo, Department of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy. <email>lucafiorillo@live.it</email>; <email>luca.fiorillo@unikore.it</email></corresp>
</author-notes>
<pub-date pub-type="collection">
<year>2026</year>
</pub-date>
<pub-date pub-type="epub">
<day>04</day>
<month>03</month>
<year>2026</year>
</pub-date>
<volume>7</volume>
<elocation-id>1001385</elocation-id>
<history>
<date date-type="received">
<day>19</day>
<month>10</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>12</day>
<month>02</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>© The Author(s) 2026.</copyright-statement>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This is an Open Access article licensed under a Creative Commons Attribution 4.0 International License (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.</license-p>
</license>
</permissions>
<abstract>
<p id="absp-1">Large language models (LLMs) like ChatGPT are increasingly used in drafting scientific papers. While they can improve clarity and efficiency, a troubling issue has emerged: the inclusion of fabricated references—nonexistent citations that can mislead, especially in biomedical research where evidence integrity is crucial. Studies indicate that 69% of the references ChatGPT generated for medical queries were fabricated, and that only 7% of the references in AI-generated medical articles were fully accurate. These fake citations often mimic real authors and journals, making detection difficult. Such inaccuracies can compromise research integrity, skew citation metrics, and reduce trust in scientific literature. To address this, journals are adopting policies requiring disclosure of AI use and human verification of references. Nonetheless, detecting AI-related misinformation remains challenging, and many experts believe the problem is bigger than currently known. Going forward, authors should avoid relying solely on LLMs, and reviewers must scrutinize references carefully. The scientific community needs to balance AI’s usefulness with rigorous oversight, ensuring that the pursuit of efficiency does not undermine credibility. Ultimately, safeguarding research from AI-generated misinformation will require combined efforts of transparency, vigilance, and adherence to ethical principles, preserving the integrity of biomedical science.</p>
</abstract>
<kwd-group>
<kwd>large language models</kwd>
<kwd>scientific writing</kwd>
<kwd>systematic review</kwd>
<kwd>research integrity</kwd>
<kwd>fabricated references</kwd>
<kwd>ChatGPT</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p id="p-1">The advent of artificial intelligence (AI)-assisted writing tools has sparked both excitement and debate in the academic world. Nowhere is this more evident than in the biomedical sciences, where clear and precise communication of data is paramount. Proponents of large language models (LLMs) argue that these tools can improve the writing process, enhancing clarity, reducing human biases, and even catching analytical errors that authors might overlook [<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B2">2</xref>]. Indeed, when used responsibly, AI writing aids might help structure complex arguments and support greater precision in scientific manuscripts [<xref ref-type="bibr" rid="B3">3</xref>]. However, alongside these potential benefits, a serious concern has emerged regarding the rigor and integrity of AI-generated scientific texts. In particular, reports have surfaced of manuscripts written with the help of AI that contain fabricated references: sources listed in reference sections that do not exist in any database or journal [<xref ref-type="bibr" rid="B4">4</xref>].</p>
<p id="p-2">This phenomenon is not merely theoretical. By 2023, numerous published papers across fields showed signs of undisclosed ChatGPT use, some going viral for their flaws [<xref ref-type="bibr" rid="B5">5</xref>]. In the medical literature, the problem is especially pernicious. Scientific publications are built on trustworthy citations; each reference should direct readers to prior evidence that supports the new work. If a fictitious reference breaks that chain of evidence, the credibility of the paper collapses. Worse, if other scientists base new research or clinical decisions on false data from a fabricated reference, the damage multiplies. This contamination of the knowledge base can be challenging to detect and undo. The following sections outline what is currently known about AI-related reference fabrication, provide examples of its occurrence, and discuss the implications for the biomedical research community.</p>
</sec>
<sec id="s2">
<title>The rise of “hallucinated” references by LLMs</title>
<p id="p-3">LLMs do not retrieve information in the same way a search engine does; instead, they generate text that statistically resembles the patterns in their training data. As a result, when asked to provide citations or bibliographies, an LLM like ChatGPT may produce references that look legitimate, complete with author names, article titles, journal names, and even volume and page numbers, but which are, in reality, invented. Such invented citations have been termed “hallucinated references,” and their occurrence is well documented.</p>
<p id="p-4">In a recent study published in <italic>Mayo Clinic Proceedings: Digital Health</italic>, Gravel et al. [<xref ref-type="bibr" rid="B2">2</xref>] evaluated ChatGPT’s answers to medical questions and the references it provided. The findings were sobering: out of 59 references generated by ChatGPT, 41 (69%) were fabricated, despite appearing superficially plausible [<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B2">2</xref>]. Most of these fake citations had elements of truth mixed in; for example, the reference might list real researchers who work in the relevant field and a reputable journal, but the combination of authors, title, journal, and year was entirely made up and could not be found in any literature database. The authors noted that ChatGPT’s responses were otherwise articulate; still, the inclusion of such a high proportion of bogus references poses a serious risk if one were to trust the AI’s output uncritically [<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B2">2</xref>, <xref ref-type="bibr" rid="B6">6</xref>].</p>
<p id="p-5">Another independent assessment, by Bhattacharyya et al. [<xref ref-type="bibr" rid="B7">7</xref>] in the journal <italic>Cureus</italic>, underscored the prevalence of this issue across multiple medical topics. In that study, ChatGPT-3.5 was prompted to generate short academic papers (with references) on various healthcare subjects. Among 115 total references produced, 47% were entirely fabricated, and another 46%, while pointing to real articles, contained incorrect details (such as wrong year, volume, or page numbers). Alarmingly, only 7% of AI-generated references were entirely accurate in that experiment. The most common errors included incorrect PubMed ID numbers (in 93% of the AI-generated references) and mismatched publication details [<xref ref-type="bibr" rid="B7">7</xref>]. These studies confirm that hallucinated citations are not rare but rather a frequent byproduct of using LLMs for academic content generation.</p>
<p id="p-6">Crucially, the fictitious references produced by AI are often highly deceptive. They typically mimic the format of genuine citations and usually incorporate the names of established researchers in the field, along with plausible article titles. For instance, Gravel et al. [<xref ref-type="bibr" rid="B2">2</xref>] observed that many fabricated references used “names of authors with previous relevant publications” and credible-sounding titles in reputable journals. An unwary reader (or even a peer reviewer) might not immediately suspect these references are fake without actively checking them in a database. <italic>Retraction Watch</italic>, a forum that tracks scientific integrity issues, has noted that such nonexistent or error-laden citations have become a signature of LLM-generated text [<xref ref-type="bibr" rid="B5">5</xref>, <xref ref-type="bibr" rid="B8">8</xref>]. In other words, a reference list filled with impressive-looking but untraceable citations is a red flag that an AI may have drafted a manuscript. The underlying reason is apparent: rather than searching and retrieving actual bibliographic data, ChatGPT essentially guesses what a suitable reference might look like, based on patterns in its training data. This limitation of LLMs leads to what one might call “auto-plagiarism of non-existent sources”: the model is not stealing existing text but creating scholarly facsimiles that have no grounding in reality.</p>
<sec id="t2-1">
<title>Operational taxonomy of reference fabrication</title>
<p id="p-7">To improve analytical precision, AI-related reference fabrication can be operationally divided into three primary types:</p>
<p id="p-8">
<list list-type="bullet">
<list-item>
<p>Type I—Fully fabricated references: Non-existent articles or books invented by the LLM, often with plausible authors and journal titles.</p>
</list-item>
<list-item>
<p>Type II—Authentic references with corrupted metadata: Real publications whose bibliographic fields (year, volume, DOI, or page numbers) are distorted or mismatched.</p>
</list-item>
<list-item>
<p>Type III—Chimeric or misattributed references: Citations combining elements of different authentic records (e.g., real authors with incorrect article titles or journals).</p>
</list-item>
</list>
</p>
<p id="p-9">This taxonomy enables a standardized approach to detection and categorization across editorial, bibliometric, and informatics settings.</p>
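For informatics settings, the three types above can be sketched as a minimal annotation schema. This is an illustrative sketch only: the class names, the four-field agreement count, and the decision thresholds are assumptions for demonstration, not an established standard.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class FabricationType(Enum):
    """Operational taxonomy of reference fabrication (Types I-III)."""
    FULLY_FABRICATED = 1     # Type I: no matching record exists in any database
    CORRUPTED_METADATA = 2   # Type II: real work, distorted bibliographic fields
    CHIMERIC = 3             # Type III: elements of different real records combined


@dataclass
class CitationAudit:
    raw_citation: str
    matched_record: Optional[str]  # best match found in a bibliographic database
    fields_agreeing: int           # how many of {authors, title, journal, year} match

    def classify(self) -> Optional[FabricationType]:
        """Hypothetical decision rule; the thresholds are illustrative."""
        if self.matched_record is None:
            return FabricationType.FULLY_FABRICATED
        if self.fields_agreeing == 4:
            return None  # citation checks out against the matched record
        if self.fields_agreeing >= 3:
            return FabricationType.CORRUPTED_METADATA
        return FabricationType.CHIMERIC
```

Under this sketch, a citation with no database match at all is Type I, a near-complete match with one corrupted field is Type II, and a citation that only partially overlaps a real record (e.g., real authors, wrong title and journal) falls through to Type III.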
</sec>
<sec id="t2-2">
<title>Prevalence and error patterns</title>
<p id="p-10">Empirical assessments across biomedical and scientific corpora demonstrate alarming error frequencies. Gravel et al. [<xref ref-type="bibr" rid="B2">2</xref>] observed that ChatGPT generated fabricated citations in 48% of medical responses, while Walters and Wilder [<xref ref-type="bibr" rid="B9">9</xref>] documented approximately 30% fabricated and 20% erroneous references in automatically produced bibliographies. Bhattacharyya et al. [<xref ref-type="bibr" rid="B7">7</xref>] similarly reported high rates of unverifiable citations across biomedical queries. These findings confirm that AI confabulation of references remains a non-trivial and reproducible pattern, highlighting the urgent need for AI-output verification workflows.</p>
</sec>
<sec id="t2-3">
<title>Layered impacts of fabricated references</title>
<p id="p-11">The implications of AI-induced reference fabrication are multifaceted:</p>
<p id="p-12">
<list list-type="simple">
<list-item>
<label>1.</label>
<p>Epistemic harms: Misleading researchers through false attribution of data or methods, undermining the reliability of background sections and evidence mapping.</p>
</list-item>
<list-item>
<label>2.</label>
<p>Synthesis harms: Contamination of systematic reviews and meta-analyses when spurious sources are unknowingly included, distorting pooled estimates and conclusions.</p>
</list-item>
<list-item>
<label>3.</label>
<p>Clinical and policy harms: Risk of flawed guideline formulation or patient-care recommendations derived from compromised secondary evidence.</p>
</list-item>
<list-item>
<label>4.</label>
<p>Publishing ecosystem harms: Increased workload for reviewers and editors, reputational damage to journals, and erosion of public trust in scientific integrity.</p>
</list-item>
</list>
</p>
</sec>
</sec>
<sec id="s3">
<title>Documented cases of fabricated references</title>
<p id="p-13">What was initially a hypothetical concern has now manifested in concrete cases across the scientific literature. Over the last two years, several published works have been corrected or retracted after it was discovered that they contained references to non-existent studies. These incidents span different disciplines (including biomedical fields) and illustrate how pervasive the problem has become in the wake of rapid AI adoption [<xref ref-type="bibr" rid="B9">9</xref>].</p>
<p id="p-14">One early cautionary example emerged in the field of public health and environmental science. In March 2024, the journal <italic>Environment and Planning E: Nature and Space</italic> published an article on wildlife disease policy that, upon post-publication review, was found to contain “several non-existent citations,” including one citing a paper supposedly published in 2026 [<xref ref-type="bibr" rid="B10">10</xref>]. A citation dated two years in the future tipped readers off that something was amiss. A wildlife ecologist who examined the paper noted that numerous citations to reputable sources were “incorrectly cited, abjectly false, or obviously manufactured” within the reference list [<xref ref-type="bibr" rid="B10">10</xref>]. This prompted an investigation by the journal’s editors. The inquiry revealed that the lead author, unbeknownst to the other co-authors, had used ChatGPT to help format and “update” the references before submission [<xref ref-type="bibr" rid="B10">10</xref>]. In doing so, the author apparently trusted the AI to generate or correct citations, but the tool instead introduced errors and fake entries. The original submission of the manuscript included accurate references (drawn from the author’s Master’s thesis), but the AI-altered version swapped in bogus sources that initially passed editorial checks [<xref ref-type="bibr" rid="B10">10</xref>]. The journal treated this issue seriously and announced it would take corrective actions. Fortunately, in this case, the authors provided the correct reference list once the problem was identified, and the paper could potentially be amended rather than fully retracted. Nonetheless, for a time, the paper stood in the literature with its defective references, meaning anyone who read it shortly after publication could have been misled about the supporting sources of its claims.</p>
<p id="p-15">Another case involved a communication and media studies manuscript suspected of being heavily AI-generated. In 2023, a professor reviewing a submission on community radio noted something uncanny: the writing style suggested AI authorship, and several references were questionable [<xref ref-type="bibr" rid="B11">11</xref>]. Among them was a citation of a 2017 article purportedly written by the reviewer herself, yet she knew she had published nothing on that topic after 2012. Indeed, the cited journal had no record of the 2017 paper. This indicated that the reference was entirely fabricated, presumably inserted by an AI tool used by the authors. The paper, initially rejected after peer review, resurfaced in another journal with minimal changes months later, suggesting that the authors had attempted to publish the AI-written content elsewhere. Once again, the sharp-eyed reviewer (alerted by an automated system to the new publication) intervened, and the journal had to withdraw the article pending investigation. The authors in this incident defended themselves by citing the use of “bibliometric tools” and unstable indexing in repositories, but the glaring presence of a nonexistent citation attributed to the reviewer undermined their explanation [<xref ref-type="bibr" rid="B11">11</xref>]. This saga highlights how AI-generated errors can slip into the literature through less vigilant outlets, and how fabricated references can even implicate real researchers by name, creating confusion and reputational concerns.</p>
<p id="p-16">Perhaps the most high-profile example to date is the case of an AI-generated book manuscript. In April 2025, Springer Nature published an e-book titled “<italic>Mastering Machine Learning: From Basics to Advanced</italic>”. The book was marketed as an introductory text on machine learning, priced at a premium. However, readers soon noticed irregularities in the reference list. Upon closer examination by <italic>Retraction Watch</italic> and others, it was found that out of 46 references in the book, two-thirds either did not exist or contained significant inaccuracies [<xref ref-type="bibr" rid="B5">5</xref>]. Multiple researchers whose names appeared in the citations confirmed that the works attributed to them were entirely fictitious. For example, one cited paper was listed as appearing in a well-known IEEE journal, yet the researcher named as its author clarified that, although he had a related unpublished preprint, he had never published in that journal. Another citation in the book referenced a section of a deep learning textbook that, in reality, had no such content on the cited pages. These were classic LLM confabulations: snippets of factual information assembled into references with no real counterpart. The deception went unnoticed through the publication process; the book did not disclose any AI involvement in its creation (despite containing a section on ChatGPT ethics). When confronted with the allegations, the book’s author did not admit to using AI but remarked on the difficulty of reliably determining AI-generated content, noting that the challenge would only grow as AI becomes more sophisticated. Springer Nature, in response, stated that they have policies requiring authors to declare AI assistance and emphasize human oversight on all submissions [<xref ref-type="bibr" rid="B5">5</xref>]. Because these policies were evidently not followed or enforced in this case, Springer Nature retracted the book in July 2025, citing the discovery that it “referenced works that don’t exist” in its citations [<xref ref-type="bibr" rid="B12">12</xref>]. This is one of the first significant book retractions due to AI-generated content. It underscores that even well-resourced publishers are vulnerable to AI deception if proper checks are not in place. Moreover, it serves as a stark warning that the scholarly record can be compromised on a large scale; a single book with dozens of fake references could misdirect countless readers or researchers until the fraud is uncovered [<xref ref-type="bibr" rid="B13">13</xref>].</p>
<p id="p-17">Notably, not only text authors but also some peer reviewers have been implicated in using AI, potentially exacerbating the problem. Comments on the <italic>Retraction Watch</italic> coverage include anecdotes of reviewers submitting reports rife with AI-generated prose and erroneous citations [<xref ref-type="bibr" rid="B11">11</xref>]. If a reviewer were to “correct” an author’s references using ChatGPT, for instance, they might unintentionally replace real citations with fake ones. This scenario, although anecdotal, highlights the multifaceted risk: AI misuse can enter the publication pipeline at multiple points (author, reviewer, or even editor), thereby increasing the likelihood that fallacious references will slip through.</p>
</sec>
<sec id="s4">
<title>Implications for biomedical research and evidence synthesis</title>
<p id="p-18">The biomedical field is built on a hierarchy of evidence, with systematic reviews and meta-analyses occupying a prominent position in shaping practice and policy. These comprehensive reviews depend on the assumption that all included studies are real, and their data are accurately reported. The emergence of AI-generated references challenges this assumption and raises several concerning implications.</p>
<p id="p-19">First, consider the integrity of a systematic review article itself. A systematic review typically involves identifying and screening hundreds or thousands of publications to include a curated set of relevant studies. If an author uses an AI tool to assist with this process, there is a risk that the tool may generate references to studies that appear relevant but do not actually exist. An unscrupulous or naive author might then include these phantom studies in the review’s analysis. The review could report, for example, “10 randomized trials with a combined 2,000 patients” supporting a particular therapy, a convincing body of evidence, when in fact some of those trials never existed. The robust-sounding evidence might convince readers of the review (and even peer reviewers). The danger is that downstream researchers could cite this systematic review or even include it in a meta-analysis, unwittingly proliferating the falsehood. In essence, one AI-seeded lie in a reference list can grow into a forest of misinformation as papers draw upon each other. This cascading effect is how a “fallacy of information” can take hold, creating a veneer of evidence where none exists.</p>
<p id="p-20">We are likely already seeing early signs of such contamination. <italic>Retraction Watch</italic> maintains a list of papers with suspected undisclosed AI-generated content, and many are in biomedical domains (medicine, biology, psychology) [<xref ref-type="bibr" rid="B11">11</xref>]. Some meta-analyses have had to be corrected or retracted because they included data from papers that were later retracted for misconduct. If those retractions were due to fake references or AI confabulations, the meta-analytic conclusions would have been distorted. Even legitimate meta-analyses may be skewed if they unknowingly incorporate results from systematic reviews that are tainted. The full extent of this problem remains hidden due to the sheer difficulty of manually verifying every reference in every paper.</p>
<p id="p-21">Another important implication is the erosion of trust. Science operates on a mix of trust and verification: we trust that authors cite real, relevant studies, and we verify key findings when necessary. If readers and editors must now approach every reference list with skepticism, the efficiency of scientific communication plummets. A commentary [<xref ref-type="bibr" rid="B14">14</xref>] in <italic>The Scientist</italic> noted that ChatGPT (as of 2023) failed to flag known retracted or debunked papers and even rated them highly credible, illustrating that AI is oblivious to the reliability of sources. Now, if AI is also fabricating sources wholesale, the trust deficit widens further. Researchers may start doubting even genuine references, especially if an article’s prose gives any hint of AI-style generation. This could lead to more labor-intensive peer review and editorial processes, as humans must double-check what was once taken for granted.</p>
<p id="p-22">From an ethical standpoint, introducing fabricated references into a paper constitutes academic misconduct. It violates principles of honesty and accuracy in scholarship. Some might argue that if done unintentionally via AI, it is an “honest mistake.” However, the scale and foreseeability of this issue mean that researchers have a responsibility to actively guard against it. Using an AI tool without verifying its outputs is not an excuse—just as claiming “the software made a mistake” would not absolve a scientist who reports incorrect data analysis without validation. The community is beginning to discuss whether failure to verify AI outputs that lead to the propagation of false information should itself be considered negligence or misconduct. Journals are certainly treating it seriously: the above cases demonstrate that publishers will retract or correct literature once fake references come to light, to protect the integrity of the scientific record [<xref ref-type="bibr" rid="B14">14</xref>].</p>
<p id="p-23">Moreover, fabricated citations have implications that extend beyond the content of a single paper; they also impact bibliometrics and the broader scholarly ecosystem. As Bret Collier (the ecologist who raised the alarm on the wildlife paper) pointed out, manufactured citations can impact journal metrics and author indices [<xref ref-type="bibr" rid="B10">10</xref>]. In the short term, a fake reference might cite a non-existent article, doing no direct harm to metrics. But sometimes these AI-made references partially cite honest articles (e.g., wrong year or journal but correct author). In such cases, a real researcher’s work could be misattributed and not adequately credited, skewing citation counts. Alternatively, the fake references might cite a real journal and give it a fabricated article, potentially confusing citation trackers and databases. If such references slip into databases like Google Scholar or Scopus before being caught, they could artificially inflate citation counts or create phantom entries that clutter search results.</p>
<p id="p-24">Ultimately, the phenomenon necessitates a reevaluation of how we train and mentor researchers. The ease of generating text (including references) with AI might tempt students or less experienced authors to bypass important scholarly habits, such as thoroughly reading the literature. The adage “never cite a paper you haven’t read” is being updated in the AI era to “never trust a citation you didn’t verify.” There is now an urgent need to instill in researchers the discipline of verifying every reference, whether it comes from an AI tool or even from another published paper. As one commentator quipped in response to these incidents: “<italic>Call me old-fashioned, but the best way to cite papers? Read them first, then you know they are not a figment of either your imagination or that of a confabulating LLM.</italic>” [<xref ref-type="bibr" rid="B11">11</xref>].</p>
</sec>
<sec id="s5">
<title>Preventive measures and call to action</title>
<p id="p-25">Confronting this challenge will require both technological and cultural changes in the field of scientific publishing. On the technological side, tools are being developed to detect AI-generated text and even flag potentially fake references. For example, some reference manager software can be augmented to cross-check citations in a manuscript against databases automatically. Journals are beginning to use plagiarism-detection and AI-detection services on submissions, although these are not foolproof and can be bypassed or yield false positives. A proposed system would require that all references in submitted manuscripts be accompanied by DOIs or database IDs that can be automatically verified; a missing or invalid identifier would immediately raise a red flag for editors to investigate. Initiatives like the <italic>xFakeSci</italic> project are exploring machine learning algorithms to identify AI-generated scientific writing by subtle anomalies in text and references. In parallel, publishers like Elsevier and Springer Nature have introduced policies for AI: authors must declare AI assistance and are responsible for the content (including the accuracy of references) when AI is used [<xref ref-type="bibr" rid="B7">7</xref>, <xref ref-type="bibr" rid="B12">12</xref>]. Such policies put the onus on authors to be transparent and careful [<xref ref-type="bibr" rid="B15">15</xref>].</p>
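As a concrete illustration of such an identifier check, a DOI can be tested against the public CrossRef REST API, where a 404 response means no registered work matches. The endpoint is real, but the helper names and the two-stage workflow below are illustrative assumptions, not any journal's actual pipeline:

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Loose syntactic pattern for a DOI: "10.", a 4-9 digit registrant code, a suffix.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")


def doi_is_wellformed(doi: str) -> bool:
    """Cheap offline check that filters out obviously malformed identifiers."""
    return bool(DOI_RE.match(doi))


def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask the public CrossRef REST API whether a record exists for this DOI.

    An HTTP 404 from api.crossref.org means no registered work matches,
    which is exactly the red flag an editorial check would raise.
    """
    if not doi_is_wellformed(doi):
        return False
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="")
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

A typesetting pipeline could run the syntactic check on every reference at submission and query the API only for the survivors, asking authors to resolve any citation that fails either stage.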
<p id="p-26">However, no policy or tool can replace the fundamental role of human oversight and integrity. Authors who choose to use AI in writing must do so responsibly, as <italic>assistive</italic> technology rather than a source of truth. This means rigorously checking every reference ChatGPT suggests, just as one would double-check a reference contributed by another co-author. Researchers need to be educated about the specific pitfalls of LLMs. The fact that ChatGPT can output an entirely fictional and very compelling-sounding reference is not common knowledge among all academics; therefore, raising awareness is critical. Conferences, editorials, and university guidelines should now include warnings about AI confabulations. The present article, in fact, serves as a commentary to raise such awareness in the biomedical community [<xref ref-type="bibr" rid="B13">13</xref>].</p>
<p id="p-27">Peer reviewers and editors, the gatekeepers of scientific quality, must also adapt. It may no longer be sufficient to skim an article and assume the references are legitimate if the prose is acceptable. Some journals have started implementing extra reference checks. For instance, as one online discussion noted, certain publishers link cited references to databases during typesetting, which can catch references that don’t match anything in CrossRef or PubMed [<xref ref-type="bibr" rid="B10">10</xref>]. This is a good practice that should become standard. If a citation cannot be resolved to a real document, it should be queried with the authors. Additionally, journals could explicitly ask reviewers to assess the reference section for any irregularities. While this goes beyond traditional peer review expectations, it is a necessary evolution in response to new threats to quality [<xref ref-type="bibr" rid="B16">16</xref>].</p>
<p id="p-28">On a systemic level, when papers with fake references are discovered, journals should issue swift corrections or retractions to prevent misinformation from spreading. <italic>Retraction Watch</italic>’s database is littered with examples of how slow or reluctant action allows flawed science to keep influencing new work. Even Stanford University’s list of the world’s top 2% of researchers has begun to flag each author’s retracted articles in a clear and public way [<xref ref-type="bibr" rid="B17">17</xref>]. In the cases described earlier, interventions by alert individuals (reviewers, readers) were essential. We cannot rely solely on serendipity or whistleblowers; formal mechanisms must be in place. Journals might consider post-publication audits of papers in which AI use is suspected, or even random spot-checks of references in a sample of publications.</p>
<p id="p-29">The scientific community may also benefit from a more open dialogue about the acceptable use of AI in publishing. AI is here to stay, and banning it outright is neither feasible nor productive. Instead, as Fiorillo (2024) [<xref ref-type="bibr" rid="B1">1</xref>] argues, we should “embrace AI tools, rather than demonize them,” but with a <italic>commitment to integrity and ethical guidelines</italic>. This balanced approach means encouraging innovation (like using AI to improve writing clarity or to generate hypotheses) while unequivocally condemning and penalizing careless or fraudulent use (such as dumping AI text and fake citations into a manuscript). It also means giving researchers the training to use AI effectively. For example, using ChatGPT to proofread grammar is fundamentally different from using it to generate literature reviews or reference lists; the former carries little risk, whereas the latter can inject falsehoods if left unchecked [<xref ref-type="bibr" rid="B18">18</xref>, <xref ref-type="bibr" rid="B19">19</xref>].</p>
<p id="p-30">As a community, scientists must reinforce a culture where intellectual honesty is paramount. The allure of easy writing should not outweigh the duty we have to truth and accuracy. Junior researchers in particular should be mentored in these values. Just as statistical falsification or image manipulation is not tolerated, we should view the introduction of invented references as a grave misstep, whether born of malice or ignorance. Only with clear expectations and education can we prevent a generation of researchers from normalizing the practice of “write now, verify never.” A promising example of author-driven tools that may contribute to preserving research integrity is the recently introduced Fi-index, developed to limit self-citation practices and enhance bibliometric reliability. Although primarily conceived to address citation bias, a semi-automated method that requires active verification and effort from authors could also represent a valuable strategy to ensure the validity of cited references and mitigate the spread of fabricated sources [<xref ref-type="bibr" rid="B20">20</xref>].</p>
<p id="p-31">Finally, readers of scientific literature (including clinicians who consume evidence to guide practice) should stay vigilant. If something in a paper seems too convenient or too neatly in support of the authors’ narrative, especially a reference that is hard to find or oddly formatted, it might warrant a quick check. The democratization of knowledge means that the wider community, post-publication, can catch many such errors. Authors and journals should welcome this approach and treat readers who flag concerns as allies in upholding scientific quality, rather than adversaries [<xref ref-type="bibr" rid="B21">21</xref>].</p>
<sec id="t5-1">
<title>Publisher and governance responses</title>
<p id="p-32">Major governance bodies and publishers have introduced preliminary safeguards. The International Committee of Medical Journal Editors (ICMJE) recommendations (2024) explicitly mandate that AI-assisted technologies be disclosed and that AI cannot be listed as an author. COPE (2024) emphasizes transparent reporting and editorial responsibility in AI-assisted writing. Elsevier and Nature Portfolio now require formal disclosure of AI tools used in manuscript generation and recommend human verification of all bibliographic elements. While some publishers have implemented automated reference-checking algorithms, there are currently no publicly available, aggregated statistics on manuscript rejections attributable to AI-generated, fabricated references. This lack of systematic reporting remains a significant evidence gap in scholarly publishing (<xref ref-type="table" rid="t1">Table 1</xref>).</p>
<table-wrap id="t1">
<label>Table 1</label>
<caption>
<p id="t1-p-1">
<bold>Preventive and corrective strategies should be structured across complementary domains and stakeholder levels.</bold>
</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th>
<bold>Stakeholder</bold>
</th>
<th>
<bold>Technical controls</bold>
</th>
<th>
<bold>Institutional/Editorial frameworks</bold>
</th>
<th>
<bold>Ethical education and training</bold>
</th>
<th>
<bold>Cross-institutional collaboration</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<bold>Authors</bold>
</td>
<td>Validate all DOIs and PubMed IDs; manually verify citations.</td>
<td>Include AI disclosure statements in manuscripts.</td>
<td>Attend AI integrity workshops.</td>
<td>Participate in shared bibliometric initiatives.</td>
</tr>
<tr>
<td>
<bold>Reviewers</bold>
</td>
<td>Cross-check random subset of references during peer review.</td>
<td>Report unverifiable citations to editors.</td>
<td>Encourage authors’ transparency.</td>
<td>Support creation of shared blacklists of fabricated entries.</td>
</tr>
<tr>
<td>
<bold>Editors and journals</bold>
</td>
<td>Deploy automated bibliographic validation tools.</td>
<td>Adopt explicit AI-use policies; enforce author accountability.</td>
<td>Offer reviewer training on AI detection.</td>
<td>Collaborate with COPE and Crossref on metadata audits.</td>
</tr>
<tr>
<td>
<bold>Publishers</bold>
</td>
<td>Integrate LLM-output screening pipelines.</td>
<td>Require structured author contribution and verification statements.</td>
<td>Provide editorial-board guidance documents.</td>
<td>Coordinate global data-integrity task forces.</td>
</tr>
<tr>
<td>
<bold>Readers and institutions</bold>
</td>
<td>Verify doubtful citations through digital identifiers.</td>
<td>Promote awareness campaigns about AI confabulations.</td>
<td>Foster critical appraisal skills.</td>
<td>Encourage open post-publication peer review.</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p id="t1-fn-1">LLM: large language model.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="t5-2">
<title>Integrated preventive framework</title>
<p id="p-33">This multi‑tiered approach aligns technical verification with ethical accountability, ensuring that AI tools remain aids rather than sources of epistemic risk.</p>
</sec>
</sec>
<sec id="s6">
<title>Conclusion</title>
<p id="p-34">AI language models, such as ChatGPT, offer exciting possibilities for accelerating literature searches, drafting manuscripts, and translating scientific findings for broader audiences. Yet, as this commentary has detailed, they also introduce new perils into the publication ecosystem, chief among them, the generation of fictional references that can subvert the very foundation of evidence-based discourse. The recent surge in papers with AI-written sections has already led to a corresponding uptick in false citations polluting the scholarly record. In biomedical research, where lives and public health policies can hinge on published data, the stakes of this misinformation are exceptionally high.</p>
<p id="p-35">The scientific community must address this issue proactively. This will involve technological solutions (improved reference verification and AI-detection tools) and, more importantly, cultural shifts in how we write and review manuscripts. All stakeholders (authors, reviewers, editors, publishers, and readers) must exercise increased diligence. By reasserting the basics (read what you cite and verify what you write) and combining them with new checks tailored to the AI era, we can prevent the worst outcomes. The volume of retractions required to rectify existing AI-induced errors may be substantial, but it is a necessary step to purge the literature of fraudulent information and to signal that quality control is catching up with technology.</p>
<p id="p-36">In conclusion, AI can be a valuable ally in scientific writing if used wisely, but it cannot be trusted blindly. The occurrence of hallucinated references is a glaring reminder that human expertise, our capacity for critical thinking, skepticism, and verification, remains indispensable. Preserving trust in science will depend on our ability to integrate powerful tools like LLMs without compromising the core principles of scholarship. Through collective effort and adherence to integrity, the research community can harness the benefits of AI while mitigating its tendency to generate persuasive falsehoods. As we navigate this new era, let us reaffirm that factual accuracy is non-negotiable. The credibility of biomedical science rests on it.</p>
</sec>
</body>
<back>
<glossary>
<title>Abbreviations</title>
<def-list>
<def-item>
<term>AI</term>
<def>
<p>artificial intelligence</p>
</def>
</def-item>
<def-item>
<term>LLMs</term>
<def>
<p>large language models</p>
</def>
</def-item>
</def-list>
</glossary>
<sec id="s7">
<title>Declarations</title>
<sec id="t-7-1">
<title>Author contributions</title>
<p>LF: Conceptualization, Investigation, Writing—original draft, Writing—review &amp; editing. The author confirms sole responsibility for all aspects of this paper, including the accuracy of the content and the integrity of the references cited. The author read and approved the submitted version.</p>
</sec>
<sec id="t-7-2" sec-type="COI-statement">
<title>Conflicts of interest</title>
<p>The author is an Editorial Board Member and Guest Editor of Exploration of Medicine; this manuscript was handled separately by other editors, and the author was not involved in the decision-making or the review process.</p>
</sec>
<sec id="t-7-3">
<title>Ethical approval</title>
<p>Not applicable.</p>
</sec>
<sec id="t-7-4">
<title>Consent to participate</title>
<p>Not applicable.</p>
</sec>
<sec id="t-7-5">
<title>Consent to publication</title>
<p>Not applicable.</p>
</sec>
<sec id="t-7-6" sec-type="data-availability">
<title>Availability of data and materials</title>
<p>No external unpublished data; all data are published within the manuscript.</p>
</sec>
<sec id="t-7-7">
<title>Funding</title>
<p>Not applicable.</p>
</sec>
<sec id="t-7-8">
<title>Copyright</title>
<p>© The Author(s) 2026.</p>
</sec>
</sec>
<sec id="s8">
<title>Publisher’s note</title>
<p>Open Exploration maintains a neutral stance on jurisdictional claims in published institutional affiliations and maps. All opinions expressed in this article are the personal views of the author(s) and do not represent the stance of the editorial team or the publisher.</p>
</sec>
<ref-list>
<ref id="B1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fiorillo</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Confronting the demonization of AI writing: Reevaluating its role in upholding scientific integrity</article-title>
<source>Oral Oncol Rep</source>
<year iso-8601-date="2024">2024</year>
<volume>12</volume>
<elocation-id>100685</elocation-id>
<pub-id pub-id-type="doi">10.1016/j.oor.2024.100685</pub-id>
</element-citation>
</ref>
<ref id="B2">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gravel</surname>
<given-names>J</given-names>
</name>
<name>
<surname>D’Amours-Gravel</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Osmanlliu</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Learning to Fake It: Limited Responses and Fabricated References Provided by ChatGPT for Medical Questions</article-title>
<source>Mayo Clin Proc Digit Health</source>
<year iso-8601-date="2023">2023</year>
<volume>1</volume>
<fpage>226</fpage>
<lpage>34</lpage>
<pub-id pub-id-type="doi">10.1016/j.mcpdig.2023.05.004</pub-id>
<pub-id pub-id-type="pmid">40206627</pub-id>
<pub-id pub-id-type="pmcid">PMC11975740</pub-id>
</element-citation>
</ref>
<ref id="B3">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bai</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Kosonocky</surname>
<given-names>CW</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>JZ</given-names>
</name>
</person-group>
<article-title>How our authors are using AI tools in manuscript writing</article-title>
<source>Patterns (N Y)</source>
<year iso-8601-date="2024">2024</year>
<volume>5</volume>
<elocation-id>101075</elocation-id>
<pub-id pub-id-type="doi">10.1016/j.patter.2024.101075</pub-id>
<pub-id pub-id-type="pmid">39569203</pub-id>
<pub-id pub-id-type="pmcid">PMC11573884</pub-id>
</element-citation>
</ref>
<ref id="B4">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mac</surname>
<given-names>OM</given-names>
</name>
<name>
<surname>Boland</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Cadogan</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>An assessment of generative artificial intelligence in responding to clinical queries on tapering antidepressants</article-title>
<source>Res Social Adm Pharm</source>
<year iso-8601-date="2025">2025</year>
<volume>21</volume>
<fpage>924</fpage>
<lpage>30</lpage>
<pub-id pub-id-type="doi">10.1016/j.sapharm.2025.06.107</pub-id>
<pub-id pub-id-type="pmid">40579346</pub-id>
</element-citation>
</ref>
<ref id="B5">
<label>5</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Madhavan</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>RETRACTED BOOK: Mastering Machine Learning: From Basics to Advanced</article-title>
<comment>In: Transactions on Computer Systems and Networks. Singapore: Springer Nature; 2025.</comment>
<pub-id pub-id-type="doi">10.1007/978-981-97-9914-5</pub-id>
</element-citation>
</ref>
<ref id="B6">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fiorillo</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Fi-Index: A New Method to Evaluate Authors Hirsch-Index Reliability</article-title>
<source>Publ Res Q</source>
<year iso-8601-date="2022">2022</year>
<volume>38</volume>
<fpage>465</fpage>
<lpage>74</lpage>
<pub-id pub-id-type="doi">10.1007/s12109-022-09892-3</pub-id>
</element-citation>
</ref>
<ref id="B7">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bhattacharyya</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Miller</surname>
<given-names>VM</given-names>
</name>
<name>
<surname>Bhattacharyya</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Miller</surname>
<given-names>LE</given-names>
</name>
</person-group>
<article-title>High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content</article-title>
<source>Cureus</source>
<year iso-8601-date="2023">2023</year>
<volume>15</volume>
<elocation-id>e39238</elocation-id>
<pub-id pub-id-type="doi">10.7759/cureus.39238</pub-id>
<pub-id pub-id-type="pmid">37337480</pub-id>
<pub-id pub-id-type="pmcid">PMC10277170</pub-id>
</element-citation>
</ref>
<ref id="B8">
<label>8</label>
<element-citation publication-type="web">
<person-group person-group-type="author">
<name>
<surname>Glynn</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Guarding against artificial intelligence--hallucinated citations: The case for full-text reference deposit</article-title>
<comment>arXiv 2503.19848 [Preprint]. 2025 [cited 2025 Nov 25]. Available from: <uri xlink:href="https://arxiv.org/abs/2503.19848">https://arxiv.org/abs/2503.19848</uri></comment>
</element-citation>
</ref>
<ref id="B9">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walters</surname>
<given-names>WH</given-names>
</name>
<name>
<surname>Wilder</surname>
<given-names>EI</given-names>
</name>
</person-group>
<article-title>Fabrication and errors in the bibliographic citations generated by ChatGPT</article-title>
<source>Sci Rep</source>
<year iso-8601-date="2023">2023</year>
<volume>13</volume>
<elocation-id>14045</elocation-id>
<pub-id pub-id-type="doi">10.1038/s41598-023-41032-5</pub-id>
<pub-id pub-id-type="pmid">37679503</pub-id>
<pub-id pub-id-type="pmcid">PMC10484980</pub-id>
</element-citation>
</ref>
<ref id="B10">
<label>10</label>
<element-citation publication-type="web">
<article-title>Journal taking ‘corrective actions’ after learning author used ChatGPT to update references [Internet]</article-title>
<comment>Retraction Watch; [cited 2025 Sep 21]. Available from: <uri xlink:href="https://retractionwatch.com/2024/05/20/journal-taking-corrective-actions-after-learning-author-used-chatgpt-to-update-references/">https://retractionwatch.com/2024/05/20/journal-taking-corrective-actions-after-learning-author-used-chatgpt-to-update-references/</uri></comment>
</element-citation>
</ref>
<ref id="B11">
<label>11</label>
<element-citation publication-type="web">
<article-title>Paper rejected for AI, fake references published elsewhere with hardly anything changed [Internet]</article-title>
<comment>Retraction Watch; [cited 2025 Sep 21]. Available from: <uri xlink:href="https://retractionwatch.com/2025/06/12/paper-rejected-for-ai-fake-references-published-elsewhere-with-hardly-anything-changed/">https://retractionwatch.com/2025/06/12/paper-rejected-for-ai-fake-references-published-elsewhere-with-hardly-anything-changed/</uri></comment>
</element-citation>
</ref>
<ref id="B12">
<label>12</label>
<element-citation publication-type="web">
<article-title>Springer Nature retracts machine learning book after citations ‘reference works that don’t exist’ [Internet]</article-title>
<comment>The Bookseller; [cited 2025 Sep 21]. Available from: <uri xlink:href="https://www.thebookseller.com/news/springer-nature-retracts-machine-learning-book-after-citations-reference-works-that-dont-exist">https://www.thebookseller.com/news/springer-nature-retracts-machine-learning-book-after-citations-reference-works-that-dont-exist</uri></comment>
</element-citation>
</ref>
<ref id="B13">
<label>13</label>
<element-citation publication-type="web">
<person-group person-group-type="author">
<name>
<surname>Kalai</surname>
<given-names>AT</given-names>
</name>
<name>
<surname>Nachum</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Vempala</surname>
<given-names>SS</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Why Language Models Hallucinate</article-title>
<comment>arXiv 2509.04664 [Preprint]. 2025 [cited 2025 Nov 25]. Available from: <uri xlink:href="https://arxiv.org/abs/2509.04664">https://arxiv.org/abs/2509.04664</uri></comment>
</element-citation>
</ref>
<ref id="B14">
<label>14</label>
<element-citation publication-type="web">
<article-title>ChatGPT Fails to Flag Retracted and Problematic Articles [Internet]</article-title>
<comment>The Scientist; [cited 2025 Sep 21]. Available from: <uri xlink:href="https://www.the-scientist.com/chatgpt-fails-to-flag-retracted-and-problematic-articles-73448">https://www.the-scientist.com/chatgpt-fails-to-flag-retracted-and-problematic-articles-73448</uri></comment>
</element-citation>
</ref>
<ref id="B15">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wallace</surname>
<given-names>MB</given-names>
</name>
<name>
<surname>Siersema</surname>
<given-names>PD</given-names>
</name>
</person-group>
<article-title>Ethics in publication</article-title>
<source>Gastrointest Endosc</source>
<year iso-8601-date="2015">2015</year>
<volume>82</volume>
<fpage>439</fpage>
<lpage>42</lpage>
<pub-id pub-id-type="doi">10.1016/j.gie.2015.05.019</pub-id>
<pub-id pub-id-type="pmid">26112677</pub-id>
</element-citation>
</ref>
<ref id="B16">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Singhal</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kalra</surname>
<given-names>BS</given-names>
</name>
</person-group>
<article-title>Publication ethics: Role and responsibility of authors</article-title>
<source>Indian J Gastroenterol</source>
<year iso-8601-date="2021">2021</year>
<volume>40</volume>
<fpage>65</fpage>
<lpage>71</lpage>
<pub-id pub-id-type="doi">10.1007/s12664-020-01129-5</pub-id>
<pub-id pub-id-type="pmid">33481172</pub-id>
<pub-id pub-id-type="pmcid">PMC7821455</pub-id>
</element-citation>
</ref>
<ref id="B17">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ioannidis</surname>
<given-names>JPA</given-names>
</name>
<name>
<surname>Boyack</surname>
<given-names>KW</given-names>
</name>
<name>
<surname>Baas</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Updated science-wide author databases of standardized citation indicators</article-title>
<source>PLoS Biol</source>
<year iso-8601-date="2020">2020</year>
<volume>18</volume>
<elocation-id>e3000918</elocation-id>
<pub-id pub-id-type="doi">10.1371/journal.pbio.3000918</pub-id>
<pub-id pub-id-type="pmid">33064726</pub-id>
<pub-id pub-id-type="pmcid">PMC7567353</pub-id>
</element-citation>
</ref>
<ref id="B18">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yoo</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Defining the Boundaries of AI Use in Scientific Writing: A Comparative Review of Editorial Policies</article-title>
<source>J Korean Med Sci</source>
<year iso-8601-date="2025">2025</year>
<volume>40</volume>
<elocation-id>e187</elocation-id>
<pub-id pub-id-type="doi">10.3346/jkms.2025.40.e187</pub-id>
<pub-id pub-id-type="pmid">40524628</pub-id>
<pub-id pub-id-type="pmcid">PMC12170296</pub-id>
</element-citation>
</ref>
<ref id="B19">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chetwynd</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Ethical Use of Artificial Intelligence for Scientific Writing: Current Trends</article-title>
<source>J Hum Lact</source>
<year iso-8601-date="2024">2024</year>
<volume>40</volume>
<fpage>211</fpage>
<lpage>5</lpage>
<pub-id pub-id-type="doi">10.1177/08903344241235160</pub-id>
<pub-id pub-id-type="pmid">38482810</pub-id>
<pub-id pub-id-type="pmcid">PMC11015711</pub-id>
</element-citation>
</ref>
<ref id="B20">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fiorillo</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Cicciù</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>The Use of Fi-Index Tool to Assess Per-manuscript Self-citations</article-title>
<source>Publ Res Q</source>
<year iso-8601-date="2022">2022</year>
<volume>38</volume>
<fpage>684</fpage>
<lpage>92</lpage>
<pub-id pub-id-type="doi">10.1007/s12109-022-09920-2</pub-id>
</element-citation>
</ref>
<ref id="B21">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smith</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>US calls for research into research integrity</article-title>
<source>BMJ</source>
<year iso-8601-date="2000">2000</year>
<volume>321</volume>
<elocation-id>1369B</elocation-id>
<pub-id pub-id-type="pmid">11099277</pub-id>
<pub-id pub-id-type="pmcid">PMC1173502</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</article>