A researcher published a paper on a made-up disease. Then people started getting diagnosed.
Key Points:
- Researcher Almira Osmanovic Thunström created a fictitious disease called "Bixonimania" and fabricated a research paper to test if AI language models would incorporate false medical information into their outputs.
- Despite clear disclaimers and deliberately absurd details in the fake paper, AI chatbots such as Microsoft Copilot and Google Gemini began citing "Bixonimania" as a legitimate diagnosis within weeks, demonstrating how vulnerable AI systems are to seeded misinformation.
- The fake disease was even cited in peer-reviewed research papers, which were later retracted, highlighting how easily false information can infiltrate academic literature when sources are not properly verified.
- The experiment underscores broader problems with how information is consumed: both AI systems and human researchers can be misled by unverified sources. It calls for better reading, verification, and citation practices in academia and healthcare.
- Thunström has since retracted the fake paper and sought its removal from AI training data, clarifying that the experiment was meant to show the importance of critically evaluating AI-generated content, not to discredit AI in medicine.