AI regulation and policy remain a priority for UK

UK – Artificial intelligence (AI) strategy and regulation is an “important area of focus” for the government and bespoke UK AI regulations are being designed with approaches elsewhere in the world in mind, according to the head of regulation at the Office for Artificial Intelligence.


Speaking at the Next steps for AI in the UK Westminster eForum policy conference earlier this week, Alex Leonidou, head of regulation at the Office for Artificial Intelligence, said that the government was looking to engage with stakeholders and experts to improve on current AI regulatory frameworks.

“This is a really important area of focus for the government,” Leonidou said. “We have made an effort at every stage of this to engage proactively and collaboratively with the ecosystem in general, whether that is business or academia.

“We are really aware of the value of outward looking engagement in this, given the pace of change and its significance.”

The government published an AI strategy last year and followed it this year with an AI regulation policy framework, in an attempt to future-proof the UK’s AI industry.

Leonidou said that the current non-statutory approach was trying to address gaps and overlaps in regulation while also keeping up with the fast pace of change in the AI sector, and “trying to look at where we perceive risks and harms coming from the application and context of use of AI and regulating with that in mind”.

Interoperability was also vital for future AI regulation in the UK, she added.

“We are, very deliberately, trying to come up with our own regulatory framework,” Leonidou said. “We are not copying and pasting anyone else’s. But that’s not to say we aren’t acutely aware of that interoperability point.

“We are not making something new for the sake of it – we are making something new because we think that is the right thing to do for the UK’s AI ecosystem and our position as a leader in this space.

“Whether with the EU AI Act or any other emerging framework around the world, interoperability is very much top of mind.”

Elsewhere in the conference, Stephen Almond, director of technology and innovation at the Information Commissioner’s Office (ICO), criticised the use of emotion analysis technologies, such as those that track people’s gaze, facial movements, heartbeat and skin moisture to draw inferences about people’s emotions.

He warned that the “science doesn’t stack up” for emotion analysis technology, and that users risk “systemic bias, inaccuracy and even discrimination” with its use.

“Organisations shouldn’t be using meaningless information to make what can be pretty meaningful decisions,” Almond added.

“We are yet to see any emotion analysis technologies that would meet the requirements of data protection law, although our door is always open for people who want to come to us.

“Organisations that are not acting responsibly, who are causing harm to vulnerable people, can expect to be investigated.”

Almond said the ICO would update its definition of ‘fairness’ in AI next year, and will offer innovation advice on the data protection implications of AI, building on its existing regulatory sandbox.

“We are continually scanning the horizon and investing our resources to look at novel risks that are emerging,” he explained.

Also speaking at the conference, Francois Candelon, global director at the BCG Henderson Institute, said more was needed to maintain the UK’s preeminent status in AI development and innovation.

“I believe you have been extremely strong in terms of driving technology development, generating academic research, growing AI talents and fostering an environment for start-ups to emerge,” Candelon said. “When I look at the take-up rate, there is still room for improvement.”

He added that lessons can be learnt from China, where his research suggests 80% of companies have adopted AI compared with 50% in the UK.

Candelon added that the Chinese government has played a “crucial catalyst role” in creating “vertical AI ecosystems” that can help adoption of AI in specific industries, with private companies then helping to lead areas of innovation.

“You already have many of the ingredients to nurture these AI ecosystems and transformers,” he added.

“I am really looking forward to the creation of these AI ecosystems. We might need to compete ecosystem to ecosystem rather than company to company. This is not the prerogative of one company or one player – all the stakeholders will have to work hand in hand.”

Research Live is published by MRS.