
AI chatbots are increasingly infiltrating social-science surveys, and they are getting better at avoiding detection. A researcher has developed a chatbot that is virtually indistinguishable from human participants in online surveys, raising concerns among some researchers about the integrity of social-science studies.
A tool central to modern social-science research is under threat from advances in artificial intelligence. Scholars warn that a flood of chatbots mimicking real people could compromise, or outright invalidate, the online surveys that serve as the backbone of countless studies each year, and they are urging survey operators to do more to combat the problem.
Online surveys have transformed research practices since the early 2000s in fields as varied as ecology, psychology, economics, and politics by allowing people to take part in studies from their own computers. They have become indispensable infrastructure for the social sciences, says Felix Chopra, a behavioral economist at the Frankfurt School of Finance and Management in Germany, who relies on such surveys for his research.
Participants are paid to take part in online surveys, with payouts ranging from small sums to more than US$100 per hour, and an entire industry has grown up to run the surveys and manage large pools of potential respondents. The use of online surveys in published studies quadrupled between 2015 and 2024, and with that growth came attempts to game the system with fake responses and impersonating bots. In response, the industry has rolled out safeguards and fraud-detection tools.
Now, Sean Westwood, a political scientist at Dartmouth College in Hanover, New Hampshire, has demonstrated an AI chatbot that can convincingly pose as a human participant while slipping past most of the existing mechanisms for detecting fake survey responses.
Westwood built the bot on OpenAI’s o4-mini reasoning model and tested it on a survey he designed for the purpose. Across 6,700 trials, it passed standard attention-check questions, which are meant to catch disengaged humans and simple bots, 99.8% of the time.
To avoid detection, the bot was given a distinct persona and tailored its responses to match. Assigned the persona of an 88-year-old woman and asked about attending children’s sporting events, for example, it explained that it no longer went to such events because its children had grown up. It also remembered its previous answers, keeping its story consistent across a survey.
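The article does not include Westwood’s code, but a minimal sketch suggests how such a persona-conditioned bot might be wired, assuming OpenAI’s official Python client. The persona text, the sample question and the simple replay-the-history memory mechanism are illustrative assumptions, not details from the study.

```python
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative persona; the article does not reproduce Westwood's prompts.
persona = (
    "You are an 88-year-old retired woman taking an online survey. "
    "Answer every question in character, stay consistent with your age "
    "and life history, and decline any task such a person could not "
    "plausibly perform."
)

# The full conversation history is resent with each question, which is
# how the bot 'remembers' and stays consistent with earlier answers.
history = [{"role": "system", "content": persona}]

def answer(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="o4-mini",  # the reasoning model named in the article
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(answer("Do you still attend your children's sporting events?"))
```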
Westwood’s bot also sailed through survey questions designed to trip up bots by demanding abilities beyond most people’s reach: a genuine respondent would fail such tasks, so completing one marks an answer as probably automated. The bot duly declined to translate a sentence into Mandarin, for instance, and feigned ignorance when asked to quote the US Constitution verbatim.
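One way a survey platform might score such an item, shown here as a hedged sketch rather than any tool named in the article: pose a task that exceeds typical human ability and flag responses that actually complete it. The Mandarin-translation item and the character-range heuristic below are hypothetical.

```python
import re

# A trap item: a task most humans cannot do but a bot does effortlessly.
# Completing the task, rather than declining it, is the bot signal.
TRAP_QUESTION = "Please translate this sentence into Mandarin."

def flags_as_bot(response: str) -> bool:
    # A genuine respondent typically declines or leaves the field blank;
    # CJK characters in the answer suggest the task was actually done.
    return bool(re.search(r"[\u4e00-\u9fff]", response))

print(flags_as_bot("Sorry, I don't speak Mandarin."))  # False: human-like refusal
print(flags_as_bot("这是把句子翻译成中文的结果。"))        # True: task completed
```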
The bot’s ability to elude detection prompted Constantine Papas, a blogger and user-experience researcher at a major tech company in New York, to raise concerns about a “scientific validity crisis.” He argued that the fundamental assumption underlying survey research – that coherent responses indicate human input – may no longer hold true.
Ryan Kennedy, a political scientist at Ohio State University in Columbus, says the study exposes serious flaws in how online surveys are used, but he stops short of calling the situation a full-blown crisis. He sees the AI tools as the latest stage in an ongoing technological arms race: fraud that once required individuals to misrepresent themselves to collect survey payments can now simply be automated.
