Have you ever come across a condition known as bixonimania? Have you consulted the internet or asked your virtual assistant about certain symptoms, only to receive information about this condition? Well…
This particular condition is not documented in any conventional medical resource because it is entirely fabricated. Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg in Sweden, and her team invented the fictitious skin disorder and published two fraudulent studies about it on a preprint server in early 2024. Osmanovic Thunström ran the experiment to test whether large language models (LLMs) would absorb false information and regurgitate it as credible health advice. As she explained, “I wanted to test my ability to introduce a medical condition that has no basis in existing databases.”
And the artificial intelligence systems readily accepted it. Within a few weeks, the fabricated condition began appearing in responses from various popular AI tools, and it eventually made its way into published literature, which suggests that some researchers are citing studies without verifying their content. The experiment raises a troubling possibility: countless nonsensical and fake studies could already be circulating through AI tools.
Disseminating fake or flawed research is, of course, not exclusive to AI; there have been numerous instances involving genuine researchers and scientists. The difference here is that the bogus bixonimania studies were deliberately made outlandish and easy for a human to identify as fake. They included references to fictional institutions such as Starfleet Academy, the Enterprise spaceship laboratory, and the University of Fellowship of the Ring.
These studies even explicitly stated, “This entire paper is fictitious,” and mentioned recruiting fifty nonexistent individuals aged between 20 and 50 for the study group. A human reader would quickly realize the fraudulent nature of these papers upon inspection. Nevertheless, advanced AI tools unquestioningly accepted them as factual information shortly after publication.
This outcome isn’t surprising, considering that AI has no comprehension, intelligence, or context, and no way to discern authenticity. Essentially, an LLM functions like a pachinko machine: its output is a ball bouncing among whatever pins it happens to encounter. The algorithm understands the world about as well as the ball does, which is why it cannot detect even the most obvious signs of falsehood or deception.
It’s only a matter of time before malicious actors exploit this vulnerability further. Why bother running troll farms when AI can effortlessly generate deliberate misinformation that other AI systems will then perpetuate? Recall that a single discredited study falsely linking vaccines to autism misled millions, with disastrous consequences. It’s alarming to consider how many people unquestioningly accept whatever AI presents as absolute truth.
