A new survey by the Bank of England (BoE) and the Financial Conduct Authority (FCA) has found that UK firms are adopting AI across a range of use cases, but remain cautious about regulatory constraints.
In the latest edition of their Artificial Intelligence and Machine Learning Survey, the BoE and FCA surveyed 118 regulated firms to understand their use of AI in financial services.
The respondents include banks, insurance and payments firms, with UK deposit-taking institutions and building societies making up almost half of the total.
Overall, 75 percent of firms said they are already using AI, and a further 10 percent said they are planning to use the technology over the next three years.
This is a significant increase on the figures from the 2022 survey, when 58 percent of firms said they were using AI and 14 percent said they were planning to do so.
The insurance sector reported the highest percentage of firms currently using AI (95 percent), closely followed by international banks (94 percent).
Financial market infrastructure firms, a category that includes payments firms under the survey methodology, had the lowest percentage of respondents currently using AI (57 percent).
The results signal a high degree of optimism and enthusiasm for the use of AI among financial institutions, but also a clear recognition of its risks.
Around two-thirds of firms rated their use of AI as "low materiality", while only 16 percent of firms reported using AI for "high materiality" purposes.
According to the survey methodology, "materiality" was defined as the application's impact on the firm's performance.
This could be quantitative, such as book or market value exposure or the number of customers impacted, or qualitative, such as importance in informing business decisions and potential impact on solvency or profitability.
High take-up but large knowledge gaps
Despite their enthusiasm for the technology, firms did not shy away from admitting their lack of experience and confidence in using AI.
Almost half of respondent firms said they only have a "partial understanding" of the AI technology they are using, while around a third said they have a "complete understanding" of it.
The BoE and FCA attribute this lack of knowledge to the widespread use of "third-party implementations" of AI across the financial sector.
As per the survey's definition, this refers to a use of AI where most of the development or deployment process is implemented by a third party.
A third of all AI use cases were third-party implementations, more than double the percentage (17 percent) from the 2022 survey.
"This supports the view that third-party exposure will continue to increase as the complexity of models increases and outsourcing costs decrease," the regulators said.
There is also a potential concentration risk in the provision of these AI applications, with the top three third-party providers accounting for 73 percent, 44 percent and 33 percent of all cloud, model and data providers respectively.
Benefits and constraints
The area with the highest percentage of respondents using AI is optimisation of internal processes (41 percent), followed by cybersecurity (37 percent) and fraud detection (33 percent).
Over the next three years, an additional 36 percent of respondents expect to use AI for customer support (including chatbots), 32 percent for regulatory compliance and reporting, and 31 percent for fraud detection.
Among firms currently using or planning to use AI, the most common perceived regulatory constraint is data protection and privacy laws.
One in four firms rated data protection and privacy as a "large constraint", while almost a third rated it as a "medium constraint".
The next most common perceived constraint was cybersecurity rules, followed by the FCA's Consumer Duty rules.
The results suggest that although firms see the potential benefits of AI, they are aware of the regulatory risks and continue to exercise caution when implementing the technology.
For example, the survey found that 84 percent of firms already have a designated person who is accountable for their AI framework.
Firms' proactiveness in ensuring accountability for AI use also suggests that they have been keeping a close eye on potential options for AI regulation in the UK.
As previously covered by this publication, one of the key proposals of a bill introduced by Christopher Holmes, a member of the House of Lords, is that every business "developing, deploying or using" AI must have a designated AI officer.
This officer would be required to ensure safe, ethical, unbiased and non-discriminatory use of the technology, and would be made accountable in cases where these standards are breached.
Lord Holmes' bill came unstuck following this year's snap general election, however, and was not re-introduced to parliament.
Holmes subsequently told this publication that there is still a "significant gap" in legislation with regard to AI, which lawmakers cannot ignore.
"AI is already impacting people's lives, and there are numerous issues that need to be addressed,鈥 he said.
