All levels of research are being changed by the rise of artificial intelligence (AI). Don’t have time to read that journal article? AI-powered tools such as TLDRthis will summarise it for you.
Struggling to find relevant sources for your review? Inciteful will list suitable articles with just the click of a button. Are your human research participants too expensive or complicated to manage? Not a problem – try synthetic participants instead.
Each of these tools suggests AI could be superior to humans in outlining and explaining concepts or ideas. But can humans be replaced when it comes to qualitative research?
This is something we recently had to grapple with while carrying out unrelated research into mobile dating during the COVID-19 pandemic. And what we found should temper enthusiasm for artificial responses over the words of human participants.
Encountering AI in our research
Our research looks at how people navigated mobile dating during the pandemic in Aotearoa New Zealand. Our aim was to explore broader social responses to mobile dating as the pandemic progressed and as public health mandates changed over time.
As part of this ongoing research, we prompt participants to develop stories in response to hypothetical scenarios.
In 2021 and 2022 we received a wide range of intriguing and quirky responses from 110 New Zealanders recruited through Facebook. Each participant received a gift voucher for their time.
Participants described characters navigating the challenges of “Zoom dates” and clashing over vaccination statuses or wearing masks. Others wrote passionate love stories with eyebrow-raising details. Some even broke the fourth wall and wrote directly to us, complaining about the mandatory word length of their stories or the quality of our prompts.
These responses captured the highs and lows of online dating, the boredom and loneliness of lockdown, and the thrills and despair of finding love during the time of COVID-19.
But, perhaps most of all, these responses reminded us of the idiosyncratic and irreverent aspects of human participation in research – the unexpected directions participants go in, or even the unsolicited feedback you can receive when doing research.
But in the latest round of our study in late 2023, something had clearly changed across the 60 stories we received.
This time many of the stories felt “off”. Word choices were quite stilted or overly formal. And each story was quite moralistic in terms of what one “should” do in a situation.
Using AI detection tools, such as ZeroGPT, we concluded participants – or even bots – were using AI to generate story answers for them, possibly to receive the gift voucher for minimal effort.
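Dedicated detectors such as ZeroGPT work as black boxes, but the kind of pre-screening the authors describe can be illustrated with a much cruder first pass. The sketch below is purely hypothetical (it is not the authors' method, and the thresholds and marker phrases are invented for demonstration): it flags stories whose wording looks unusually uniform or formulaic, so a human can review them before acceptance.

```python
# Hypothetical illustration only: a crude stylometric screen for survey
# stories. Marker phrases and thresholds are invented for demonstration.

FORMAL_MARKERS = {"moreover", "furthermore", "in conclusion", "it is important to"}

def suspicion_score(story: str) -> float:
    """Return a rough 0..1 score; higher = more stilted, 'AI-flavoured' prose."""
    words = story.lower().split()
    if not words:
        return 0.0
    unique_ratio = len(set(words)) / len(words)   # low lexical variety -> stilted
    marker_hits = sum(story.lower().count(m) for m in FORMAL_MARKERS)
    # Combine: penalise low word variety and formal boilerplate phrases.
    score = (1.0 - unique_ratio) * 0.5 + min(marker_hits / 3.0, 1.0) * 0.5
    return round(min(score, 1.0), 2)

stories = [
    "lol we matched on tinder during lockdown and the zoom date was a disaster",
    "Moreover, it is important to communicate openly. Furthermore, in conclusion, "
    "honesty is essential. Moreover, one should always respect boundaries.",
]
# Flag anything above an (arbitrary) threshold for human review.
flagged = [s for s in stories if suspicion_score(s) > 0.4]
```

A heuristic like this would never be decisive on its own; it only narrows down which responses a researcher reads closely, which matches how the authors describe combining detection tools with their own judgment.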
Contrary to claims that AI can sufficiently replicate human participants in research, we found AI-generated stories to be woeful.
We were reminded that an essential ingredient of any social research is for the data to be based on lived experience.
The rise of AI in academia sparks a debate: How do we balance technological advancement with ethical integrity in research? Misidentification and casual accusations highlight the fragile line between innovation and authenticity. Nature https://t.co/JuB9ZoV2jI
— Genetic Literacy Project (@GeneticLiteracy) March 15, 2024
Is AI the problem?
Perhaps the biggest threat to human research is not AI, but rather the philosophy that underpins it.
It is worth noting the majority of claims about AI’s capabilities to replace humans come from computer scientists or quantitative social scientists. In these types of studies, human reasoning or behaviour is often measured through scorecards or yes/no statements.
This approach necessarily fits human experience into a framework that can be more easily analysed through computational or artificial interpretation.
In contrast, we are qualitative researchers who are interested in the messy, emotional, lived experience of people’s perspectives on dating. We were drawn to the thrills and disappointments participants originally pointed to with online dating, the frustrations and challenges of trying to use dating apps, as well as the opportunities they might create for intimacy during a time of lockdowns and evolving health mandates.
In general, we found AI poorly simulated these experiences.
Some might accept generative AI is here to stay, or that AI should be viewed as offering various tools to researchers. Other researchers might retreat to forms of data collection, such as surveys, that might minimise the interference of unwanted AI participation.
But, based on our recent research experience, we believe theoretically-driven, qualitative social research is best equipped to detect and protect against AI interference.
There are additional implications for research. The threat of AI as an unwanted participant means researchers will have to work longer or harder to spot imposter participants.
Academic institutions need to start developing policies and practices to reduce the burden on individual researchers trying to carry out research in the changing AI environment.
Regardless of researchers’ theoretical orientation, how we work to limit the involvement of AI is a question for anyone interested in understanding human perspectives or experiences. If anything, the limitations of AI reemphasise the importance of being human in social research.
Alexandra Gibson, Senior Lecturer in Health Psychology, Te Herenga Waka — Victoria University of Wellington and Alex Beattie, Research Fellow, School of Health, Te Herenga Waka — Victoria University of Wellington
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Comments
What value can it have without detailing the selection criteria? It could be 110 members of the Gloriavale Christian Community. Logically, social media users skew younger, but on the other hand those with time to engage with academics tend to be elderly and retired (I never had time to comment on this forum until I retired). Who pays for this kind of 'health' study?
They could be 100 artificial Facebook users, with most of the awards going to one or two people. Add to that the very poorly drafted and highly biased selection criteria, and I am surprised they did not realise most of the responses were artificial before ChatGPT came onto the market. Bot accounts have been around for a long time, and with minimal effort they could use material from other sources to construct scenario responses.
Even human responses to surveys are very poor, and without adequate sample sizes and control questions there is often zero value in surveys like this. Plus, any person can have multiple Facebook accounts. Add in a gift award (even virtual 'points') and it is a near guarantee most of the responders will be fake and unusable for research.

In gaming, where financial and artificial reward systems are used by design, fake players are so common that online games are now often designed on the assumption that real players number anywhere from 1 in 5 to 1 in 10. The process of running fake accounts is known as farming, with farms providing benefits a single real user cannot achieve. Many applications and markets are built around setting up farms and selling them in bulk to other users. Some servers of around 500 "users" can be over 90% farms, providing benefits and community awards to a few actual people. This is well known in international communities, especially on social media sites. A single user with a single account is rarer these days, given the prevalence of fake accounts with seemingly genuine responses.

Another recent example is the Goodreads scandal, where seemingly genuine reviews from fake generated accounts surfaced yet again. By now it should be obvious that written reviews, survey responses and even scenario responses can be mostly fake, coming from farm accounts, if your recruitment is done online. There are many markets for buying any number of fake accounts to produce fake reviews and fake survey responses: full AI, partial bot, or fake human responses, with any account age (not the user's age, but the account's age and history data), any location, and so on.
Researchers really have to be aware of their selection methods, because these issues with online recruitment have been well known for some time. Any use of social media will produce heavily biased responses and poor checks for real humans. Any researcher even advertising on social media needs at least basic awareness that this heavy bias makes the research close to unusable in a wider context; respondents should not be recruited via social media users or accounts. ID verification processes have to occur to verify actual humanity, and on top of that each response needs additional checks, with controls, to confirm it is genuine. 100 people from Facebook is a fricken giant red flag that pulls the whole research into question; it flies so far against scientific practice that the funding might as well be going down a hole.
Time to dust off the Voight-Kampff test to detect humans with genuine answers versus replicants.
What is concerning is that the OP was not aware of the many fraudulent, generated "research" articles that mainstream journals were publishing decades before the arrival of ChatGPT.
This is not a new thing. It is pervasive in the market, making most published science research near valueless unless it has hundreds of supporting peer-reviewed studies, some using competing criteria and methods, from different sources. Sadly, most science articles these days receive zero peer review, neither prior to nor during publishing, which has been a concern to scientists for decades. Over the past couple of decades this has led to massive scandals and major physical harm to millions of people, especially when fraudulent, non-peer-reviewed material turns up in prestigious journals like the Lancet, which is often assumed to have at least minor editorial review but has been shown not to perform even basic checks of material. The "vaccines cause autism" fraud was well known beforehand, but sadly, with journals losing even basic peer review systems, anything gets published. It is not the material, the methods or the data that gets checked; it is the credentials of the person and the prestige of the publisher that drive science funding.
The OP being so unaware of this, and not even mentioning it, does make them seem way out of the science field to begin with (you get that with social surveyors on public funding).
Anyone with a science background knows fake generated responses, and even AI-generated research, have been around for at least a couple of decades. We often share the most hilariously bad examples over a BBQ and use the tragic ones as cautionary tales.

For instance, AI image generation now has some particularly hilarious shortcomings (or really long comings) when used in supposedly peer-reviewed, reputable journals.
Link to an article describing it: "Scientists aghast at bizarre AI rat with huge genitals in peer-reviewed article"
https://arstechnica.com/science/2024/02/scientists-aghast-at-bizarre-ai…
The even funnier and more erroneous bit is not the biology of the animal but the labels. The community comments will give quite a few chuckles, and insights into why even human peer reviewers (AI reviewers have been on the market for a long time) are not doing any peer review, or even checking article and research content.