Researchers must address ‘challenge’ of AI-generated responses
Speaking at the virtual event yesterday (9th October), Suzan Akbulut, research executive at Opinium, said it is essential for researchers to address the role of AI-generated responses in qual. “Qualitative research focuses on the human experience and capturing the nuances of thoughts, feelings and behaviours. We’re looking at the subjective experiences of individuals. Often in qual we’re using online focus groups and communities as a safe environment where participants can share their true thoughts and, as we consider this to be our objective, it’s essential to address an emerging challenge: the role of AI in our data collection process.”
The increasing use of generative AI platforms to create content is now well documented in education, where some have warned of the potential for plagiarism; it also poses challenges for researchers.
Akbulut said: “The use of AI raises concerns about the authenticity of the responses that we seek. If the essence of our work is to tap into people’s unique views, AI-generated answers could dilute the richness of our insights. As researchers, we must remind participants that their personal experiences are way more valuable than any generated responses – this is important for retaining the integrity of our data.”
A recent online community project conducted by Opinium to explore the needs, behaviours and attitudes of young drivers resulted in some AI-generated answers from participants. Jonny Dodson, qualitative research executive at Opinium, said the researchers were led to this conclusion as one response contained the LLM stock phrase: ‘To provide further feedback or analysis on this, I would need additional context or information.’
Dodson said: “This was quite an interesting and surprising finding, and after reviewing content from across the community, we observed a pattern of this type of response, which led us to believe that certain participants were using an AI generator to participate.”
Other clues, Dodson explained, included responses that expressed no clear personal view and lacked emotional tone; instead, they showed little contextual understanding and read analytically. Responses were also highly structured, with some including lengthy bulleted lists addressing the question.
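The clues Dodson describes lend themselves to a lightweight screening step ahead of manual review. The sketch below is purely illustrative and not Opinium’s method: the phrase list and bullet-count threshold are assumptions, and any flag is a prompt for human follow-up rather than grounds for exclusion.

```python
import re

# Purely illustrative phrase list; in practice it would be built from
# suspect responses actually observed in the community.
STOCK_PHRASES = [
    "to provide further feedback or analysis on this, i would need additional context",
    "as an ai language model",
]

def flag_for_review(response: str, max_bullets: int = 5) -> list[str]:
    """Return the clues (if any) that a response may be AI-generated.

    A non-empty result means 'refer to a moderator', not 'reject':
    these are clues, not proof.
    """
    reasons = []
    text = response.lower()

    # Clue 1: known LLM stock phrasing, such as the line spotted
    # in the Opinium project.
    for phrase in STOCK_PHRASES:
        if phrase in text:
            reasons.append(f"stock phrase: {phrase!r}")

    # Clue 2: long bulleted lists, which are unusual in
    # conversational answers.
    bullets = re.findall(r"^\s*(?:[-*\u2022]|\d+\.)\s", response,
                         flags=re.MULTILINE)
    if len(bullets) > max_bullets:
        reasons.append(f"{len(bullets)} bullet points")

    return reasons
```

Consistent with the approach the team describes below, a flagged response would trigger further probing of the participant rather than automatic removal.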
“As young researchers, it can be quite daunting to challenge participants, with this being a relatively new concept. The roadmap on how to approach this is still ongoing, but in this case we knew we wanted to retain a comfortable, engaged research environment and still protect the validity of the research,” said Dodson.
The researchers approached the problem by first probing participants further on the points made in the suspect answers. They considered removing the content and asking participants to resubmit, but were wary of the risk that the earlier AI answer would bias the new response. They also kept an open dialogue with recruiters to set clear expectations for participants and to allow participants to be replaced if necessary.
The experience led the Opinium researchers to establish approaches that can be applied in future projects, including setting expectations through an upfront conversation with the recruiter that covers what is expected of participants and states specifically that responses should reflect their own personal views, without lifting content from the internet or from any AI tool.
The researchers also reflected on the importance of screeners – using open-ended questions for an additional ‘sense-check’ on responses – and advocated speaking to platform providers to ask what support is available to minimise the use of AI, for example, blocking the pasting of external content.
The experience highlights new challenges for researchers in a world where AI-generated content is increasingly prevalent. Akbulut said: “Having measures in place to address the issue early on is really important, but we also need to implement best practices, whether it’s creating the best environment to encourage honest responses from participants or communicating the value of personal insights.”
She added: “We need to approach this with a mindset of collaboration. As AI becomes more and more integrated into everyday life, it’s important to work alongside our participants and discuss the role of AI in research. We hope that by doing so we can ensure that our findings still remain meaningful while adapting to the evolving digital landscape.”
Elsewhere at the &more conference, Natalie Edwards, managing director at Canopy Insight, said understanding meaning is increasingly difficult in cultural research because of generative AI and the lack of intent behind it.
“We need to consider intentionality – what is the intent behind ‘stuff’? Discerning meaning is increasingly difficult in a generative AI context. We don’t like ambiguity. If audiences can’t verify the authenticity of what we’re seeing, how can we understand the world? There is no intent behind generative AI content; it is not a reliable indicator of meaning.”
Edwards added: “More than anything as researchers, we need to be asking about where this came from. If we don’t understand the provenance of our data, we’re lost. There’s no way of understanding whether it’s trustworthy or not.”
In another session at the conference, HarrisX senior research executive Becca Byrne outlined how the company had used AI for an end-to-end project, with the aim of understanding where it could fit into the traditional research workstreams.
When it came to writing a questionnaire using ChatGPT and Gemini, substantial human oversight was required, offsetting much of the potential value of AI.
“There’s no doubt that LLMs can produce decent questions, albeit basic ones. However, human input is indispensable. A key realisation from the exercise was that the time involved in refining the prompts and editing the questions often offset the value added. We therefore came to the conclusion that, for questionnaire development, AI is just a tool, not a replacement.”
However, Byrne encouraged young researchers to experiment with AI “in a safe environment” in order to learn. She said: “Without trial and error, how will we learn? How can we test and learn from AI tools enough to be able to put them in front of our clients? As young researchers, the future of market research is in our hands.
“We can experiment and learn, but don’t be afraid to fail. The most exciting discoveries in our work often come when we push boundaries, so to truly learn, I urge young researchers to go out, explore and try new things, in a safe environment – this could be working on personal projects, collaborating with like-minded researchers or seeking out clients who are open to AI experimentation and using it as a joint learning experience. By exposing ourselves to this evolving landscape of modern research, we can gain the knowledge and experience needed to shape the future of this industry.”
