
An investigative report by The Guardian raised concerns, voiced by health experts, about the accuracy of medical information in some AI Overview responses to health-related queries. Google contested the allegations, arguing that many of the examples were based on incomplete screenshots.
For the investigation, The Guardian tested a range of health-related searches and shared the resulting AI Overview responses with charities, medical professionals, and patient information organizations. Google responded to the findings by stating that the vast majority of AI Overviews are factual and helpful.
The report highlighted cases in which health organizations reviewing the AI-generated summaries found misleading or incorrect guidance. In one example involving pancreatic cancer, Anna Jewell of Pancreatic Cancer UK criticized advice to avoid high-fat foods as “completely incorrect” and potentially harmful to patients’ well-being and treatment outcomes.
Concerns were also raised about mental health queries, with Mind’s Stephen Buckley flagging dangerous advice in AI summaries relating to conditions such as psychosis and eating disorders. The report additionally cited inaccuracies in cancer screening information, including an AI Overview that, according to the Eve Appeal charity’s CEO, wrongly described a pap test as a test for vaginal cancer.
According to Sophie Randall of the Patient Information Forum, these examples underscore the risk of inaccurate health information being displayed prominently in search results through Google’s AI Overviews.
The Guardian also noted that running the same search multiple times could produce different AI summaries drawing on different sources.
In response, Google disputed the examples and conclusions presented in the report. The company said that while some of the screenshots were incomplete, the linked sources were reputable and the summaries advised users to seek expert guidance. Google maintained that the vast majority of AI Overviews are accurate and helpful, and pointed to quality improvements it says it makes continuously.
The investigation comes amid ongoing debate over the expansion of AI Overviews since 2024, which followed early instances of odd results, such as suggestions involving unconventional food pairings. Google said it had made adjustments to limit the queries that trigger AI-written summaries and to improve them after the rollout.
Recent data from Ahrefs indicated that medical YMYL (“Your Money or Your Life”) queries are more likely than average queries to trigger AI Overviews. Research using SourceCheckup, an evaluation framework for medical question answering in LLMs, has also raised concerns about gaps in citation support for AI-generated answers.
Errors in health-related AI Overviews are particularly consequential because of their prominent placement above standard search results. Publishers have invested significant resources in making accurate medical information available online, which is part of why The Guardian’s investigation focused on the reliability of Google’s summaries at the top of the results page.
Moreover, the inconsistency of generating different summaries for identical queries over time makes it harder for users to verify information by repeating a search.
Google has previously adjusted AI Overviews in response to viral criticism, and its response to The Guardian’s report suggests it expects the feature to be judged by the same standard as other Search features rather than held to a distinct one.