The sprawling nature of the regulatory landscape for artificial intelligence (AI) is a core issue for stakeholders as they push for more international and cross-sectoral harmonisation of rules.
Stakeholders in the AI space told UK regulators that the regulatory landscape for AI is "complex and fragmented" and asked for more coordination and alignment between regulators, a summary of responses published last week (October 26) reveals.
The submissions came in response to a joint discussion paper prepared by the Bank of England (BoE), the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA), which asked the public to share their views on the risks and benefits of the use of AI in financial services.
It also asked stakeholders to weigh in on whether the technology can be managed through updates to the existing regulatory framework or if a new approach is needed.
The discussion comes as recent developments, particularly with regard to generative AI, have drawn regulators' attention to the potential risks of the new technology, while they try to ensure its safe and responsible development.
According to government figures, the UK's AI sector ranks third behind the US and China, contributing £3.7bn to the economy and employing 50,000 people.
Extensive regulation for AI
Respondents, which included a wide variety of entities from banks and industry bodies to technology providers, financial market infrastructures and consumer associations, have told regulators that AI in financial services is already subject to many legal requirements and guidance.
"The suite of regulations governing the use of AI in the financial sector is already extensive, so care is needed to avoid creating unnecessary new requirements," according to one of the respondents.
For example, the BoE, FCA and PRA have each issued several statements related to operational resilience and outsourcing that are relevant for AI, while existing requirements of discrimination laws, intellectual property law, contract law and forms of ethical guidance also apply.
Data protection laws, such as the UK General Data Protection Regulation (GDPR), also apply to AI use, although some stakeholders raised concerns about "the way UK GDPR interacts with AI", while others noted that a lack of understanding among suppliers may lead to some businesses "potentially gaming or ignoring the rules".
"Given these complexities, the industry is right to call for a joined-up approach to managing and mitigating AI risks," according to Pedro Bizarro, chief science officer at Feedzai.
This is especially the case in financial services, "where positive consumer outcomes, with respect to fairness and protection, are vital", Bizarro added.
Stakeholders, therefore, emphasised the importance of cross-sectoral and cross-jurisdictional coordination, pointing out that "AI is a cross-cutting technology extending across sectoral boundaries".
"Since many regulated firms operate in multiple jurisdictions, an internationally coordinated and harmonised regulatory response is critical in ensuring that UK regulation does not disadvantage UK firms and markets while also minimising fragmentation and operational complexity," respondents said.
UK sets up world's first AI Safety Institute
Ahead of this week's global summit focusing on AI safety, UK Prime Minister Rishi Sunak on Thursday (October 26) also announced the launch of the world's first AI Safety Institute.
The institute aims to explore the risks of AI, from social harms such as bias, misinformation, fraud and cyber-attacks, through to other risks such as the use of AI by terrorist groups to spread fear or build chemical or biological weapons.
"Right now, the only people testing the safety of AI are the very organisations developing it," Sunak said when announcing the institute.
As many of these firms have incentives to compete and be the first to build the best models, "we should not rely on them marking their own homework", Sunak added.
The Prime Minister added that he hopes world leaders in the AI space will come to an agreement at the summit on the nature of the risks and start a global conversation similar to the one begun by the members of the Intergovernmental Panel on Climate Change.
Bizarro said he agrees with Sunak's observation about potential bias in technology giants' self-regulation and that it is "imperative" that these regulatory dialogues involve stronger participation from small and medium-sized enterprises, start-ups, research institutions, academia, open-source groups and representatives of the clients or users of those models.
"While the launch of the world's first AI Safety Institute is commendable, a comprehensive understanding of the capabilities of new AI models and what guardrails or desired criteria are needed can only be achieved if it embraces the entire tech community, rather than solely focusing on major Silicon Valley players," Bizarro said.
Bizarro stressed that AI could operate "as a two-sided coin, enabling new risks, but also enabling new, likely greater benefits, capable of enabling misuse, but also capable of thwarting misuse".
"Within the realm of financial crime, AI is playing an expanding role in bolstering security and providing invaluable insights across the financial services industry.
"Moreover, AI's potential extends to identifying the misuse of generative AI, particularly concerning deep fakes, which remains a prominent concern for many."


