More regulation sought over firms’ use of AI

GLOBAL – Consumers think there should be further regulation over how artificial intelligence is used in business, according to a study from Capgemini, which says companies should establish a code of conduct for how they use AI ethically.


Capgemini Research Institute surveyed 4,400 consumers in six countries (UK, US, France, Germany, the Netherlands and China) and 1,580 professionals from 500 organisations to explore attitudes towards ethics and transparency in AI.

Of the consumers surveyed in the study, 76% think there should be further regulation on how companies use AI, while 75% agreed they want more transparency from companies if a service is based on AI.

Executives in nine out of 10 organisations said they were aware of at least one instance of AI systems resulting in ethical issues, such as collecting and processing personal healthcare data in AI algorithms without consent, AI that cannot explain how it makes decisions on credit or insurance claims, or using AI for workplace surveillance without employees’ consent. Almost half of the consumers surveyed (47%) said they had experienced the impact of such an issue.

Consumers also said they would place more trust in companies whose AI interactions they perceive to be ethical (62%). However, 41% said they would complain if they experienced an ‘unethical AI interaction’, and 34% said they would stop engaging with the company.

Over half of the professionals in the study (51%) agreed it is important to ensure AI systems are ethical and transparent, and 41% said that their company had abandoned, or would abandon, a system altogether if it was found to have caused an ethical issue. Additionally, 44% of employees said that they or their colleagues had raised concerns about potentially harmful use of AI systems within their organisation.

UK executives in the study were the most confident of all countries surveyed that their organisations’ AI systems are transparent (36% compared with 25% globally), but only 28% said their systems are ethical or fair.

Anne-Laure Thieullent, AI and analytics group offer leader at Capgemini, said: “This research shows that organisations must create ethical systems and practices for the use of AI if they are to gain people’s trust. This is not just a compliance issue, but one that can create a significant benefit in terms of loyalty, endorsement and engagement.

“To achieve this, organisations need to focus on putting the right governance structures in place: they must not only define a code of conduct based on their own values, but also implement it as an ‘ethics-by-design’ approach, and, above all, focus on informing and empowering people in how they interact with AI solutions.”

Research Live is published by MRS.

