Businesses should not be ‘blind’ to risks of AI, says ICO
The ICO plans to check whether businesses that have adopted generative artificial intelligence have taken the relevant steps to address privacy risks, and will be “taking action” if there is a risk of harm, according to the data watchdog.
Speaking at Politico’s Global Tech Day today, Stephen Almond, executive director of regulatory risk, will say: “Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks.
“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won't upset customers or regulators.”
In April, the regulator outlined eight areas that organisations should consider before adopting generative AI, including lawful reasons for processing data, ensuring transparency and mitigating security risks.
Organisations must also check whether they are a controller or processor of personal data, prepare a data protection impact assessment, work out how to limit unnecessary processing, decide how to comply with individual rights requests and whether generative AI would make solely automated decisions.

Research Live is published by MRS.