Majority want brands to be transparent about AI
The survey of 2,000 people aged 18 and over also found that 75% of respondents want to be told when they are not ‘dealing with a real person’ and 67% think that AI should not ‘pretend to be human or act as if it has a personality’.
The majority of people surveyed also agreed that automated AI-driven campaigns should be ‘carefully regulated’ (72%), and 61% agreed that people must accept liability if the use of AI results in an accident.
Half of people surveyed (51%) agreed that AI should have the right to report them if they are engaging in an illegal activity, the survey found.
Compared with a 2018 survey conducted by Opinium and the IPA using the same question, the proportion of people who think they should be polite and exhibit good manners when interacting with virtual assistants has fallen by a quarter, from 64% in 2018 to 48% in 2023.
Josh Krichefski, IPA president and chief executive, EMEA & UK, GroupM, said: “AI provides incredible opportunities for our business. As these findings demonstrate, however, the public are understandably cautious about its use – and increasingly so in some areas.
“It is therefore our responsibility to be transparent and accountable when using AI to ensure the trust of our customers.”

Research Live is published by MRS.