
Use of generative AI language models by financial institutions in Hong Kong and Singapore 

14 Apr 2025

With the introduction of generative artificial intelligence language models (AI LMs) into the public domain, both commercial and open-source AI LMs are now readily accessible to financial institutions. The use of AI LMs may enable asset managers and other financial institutions to handle client interactions, internal manual processes and operations more efficiently. However, their adoption comes with certain risks, which, if not properly managed, could have negative legal, reputational, operational or financial implications for licensed corporations (LCs), their clients and investors. 

In this article, we highlight the key risks firms should keep in mind and discuss how the Hong Kong and Singapore regulators are responding. 

What are the risks?

 Risks of using AI LMs to handle operational processes and client interactions include:  

  • Inaccurate, biased, unreliable and inconsistent outputs 
  • Increased risk of cyber-attacks, inadvertent leakage of confidential information and breaches of personal data privacy and intellectual property laws  
  • Concentration and operational resilience risks due to reliance on a limited number of external service providers for the development, training and maintenance of AI LMs 

The potential for AI “hallucinations” and unpredictable behaviour is a significant risk when generative AI is used in mission-critical areas. 

Regulatory expectations in Hong Kong

To facilitate the industry’s responsible use of AI LMs, the Securities and Futures Commission of Hong Kong (SFC) issued a circular on 12 November 2024 outlining its expectations for LCs in relation to their use.  

Key points to note: 

  • Applicability – The requirements of the circular apply to firms that offer services or functionalities provided by AI LMs or AI LM-based third-party products as part of their regulated activities  
  • Risk-based implementation – Firms may implement these requirements in a risk-based manner, depending on the materiality of the impact and the level of risk associated with the specific use case or application of the AI LM 
  • High-risk applications – The SFC considers the use of AI LMs for investment recommendations, advice or research to be high-risk, as inaccurate outputs may lead to unsuitable financial product recommendations or misinformation for investors 

AI LMs in operational processes

In many cases, employees use AI LMs primarily to enhance operational efficiency, e.g. to extract relevant information from documents, rather than to produce investment research or analyses. Such administrative uses of AI LMs generally do not pose a material threat to firms’ operations, although they still warrant proportionate oversight. 

Nonetheless, firms should conduct due diligence before deploying AI LMs and review them on an ongoing basis, particularly when the provider’s terms change, in order to assess cybersecurity and data risks and mitigate them through appropriate policies. 

This process should also include examining the AI LM’s terms of use relating to the information supplied by users, as information provided in prompts may be used for training purposes to further improve the performance of the model. This could expose the firm to risk if confidential information is included in the training data. 
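As a simple illustration of this kind of safeguard, a firm might strip obvious identifiers from prompts before they reach an external model. The patterns below are hypothetical placeholders for illustration only, not a substitute for a vetted data-loss-prevention policy:

```python
import re

# Illustrative patterns only; a real control would rely on a
# firm-specific, vetted set of identifiers and DLP tooling.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "hkid": re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?"),
}

def redact(prompt: str) -> str:
    """Replace matching identifiers before the prompt leaves the firm."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Client A123456(7) can be reached at jane@example.com"))
```

A control like this would typically sit in front of any outbound call to a commercial AI LM, alongside contractual opt-outs from training use where the provider offers them.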

Firms should continue to monitor their use of AI LMs so that senior management can ensure appropriate policies and procedures are in place for future changes, consistent with the core principles set out in the SFC’s circular and commensurate with the materiality and level of risk. 

Regulatory oversight in Singapore

In Singapore, the Monetary Authority of Singapore (MAS) conducted a thematic review of the use of AI LMs in banks in 2024 and has since released an information paper outlining good practices for AI governance and oversight; AI identification, inventorisation and risk materiality assessment; and AI development, validation, deployment, monitoring and change management. 

AI-related risks identified by MAS

The MAS noted significant use of AI in the areas of risk management, customer engagement and servicing, as well as to support internal operational processes such as decision-making and suspicious transaction identification. MAS summarised the potential risks as follows:  

  • Financial risks, where poor accuracy of AI used for risk management could lead to poor risk assessments and consequent financial losses 
  • Operational risks, where unexpected behaviour of AI used to automate financial operations could lead to operational disruptions or errors in critical processes 
  • Regulatory risks, where poor performance of AI deployed to support anti-money laundering (AML) efforts leads to non-compliance with regulations 
  • Reputational risks, where incorrect or inappropriate information from customer-facing AI-based systems, such as chatbots, leads to customer complaints and negative media attention, resulting in reputational damage 

Good practices for AI governance

MAS considers that good practices include: 

  • Establishing cross-functional oversight forums to avoid gaps in AI risk management 
  • Updating control standards, policies and procedures, and clearly setting out roles and responsibilities to address AI risks 
  • Developing clear statements and guidelines to govern areas such as the fair, ethical, accountable and transparent use of AI across the firm 
  • Building AI capabilities to support both innovation and risk management 

Risk mitigation and compliance measures

Risk mitigation measures include: 

  • Implementing policies and procedures to identify AI usage and risks across the business so that appropriate risk management can be applied 
  • Putting systems and processes in place to ensure the completeness of AI inventories, which capture the authorised scope of use and provide a central overview of AI usage to support oversight 
  • Assessing the risk materiality of AI across the key risk dimensions (the impact of AI on the firm and its stakeholders, the complexity of the AI used and the firm’s reliance on AI) so that relevant controls can be applied proportionately 
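A risk materiality assessment along these lines could be operationalised as a simple scoring exercise over the three dimensions. The scales and thresholds below are illustrative assumptions, not MAS prescriptions:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    impact: int      # 1 (low) to 3 (high): effect on the firm and stakeholders
    complexity: int  # 1 to 3: complexity of the AI technique used
    reliance: int    # 1 to 3: degree of reliance on the AI output

def risk_tier(uc: AIUseCase) -> str:
    """Map the three risk dimensions to a materiality tier,
    so that controls can be applied proportionately."""
    score = uc.impact + uc.complexity + uc.reliance
    if uc.impact == 3 or score >= 7:
        return "high"
    return "medium" if score >= 5 else "low"

print(risk_tier(AIUseCase("document summarisation", 1, 2, 1)))  # low
```

In practice the tier would drive which controls apply, e.g. a "high" tier triggering independent validation before deployment, as discussed below.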

Technical and operational safeguards

MAS recommends that firms pay greater attention to data management, model selection, robustness and stability, explainability and fairness, and reproducibility and auditability. 

Good practices include independent validation or review of higher-risk AI prior to deployment to ensure that development and deployment standards have been adhered to. For AI of lower risk materiality, peer reviews calibrated to the risks posed by the use of AI are sufficient prior to deployment. 

To ensure that AI behaves as intended when deployed and that any data and model discrepancies are detected and addressed, firms should undertake pre-deployment checks, closely monitor deployed AI based on appropriate metrics, and apply appropriate change management standards and processes. 
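As a toy example of a post-deployment monitoring check of this kind, a firm might track a chosen performance metric against its validated baseline and flag the model for review when drift exceeds an agreed tolerance. The metric and threshold here are assumptions for illustration:

```python
def needs_review(baseline: float, live: float, tolerance: float = 0.05) -> bool:
    """Flag a deployed model for review when a monitored metric
    (here, accuracy) falls below its validated baseline by more
    than the agreed tolerance."""
    return (baseline - live) > tolerance

print(needs_review(baseline=0.92, live=0.84))  # True: drift of 0.08 exceeds 0.05
```

Real monitoring would cover multiple metrics (including data drift and fairness measures) and feed into the firm's change management process.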

The information paper sets out more detailed practices on deployment, testing and monitoring, providing recommendations for firms deciding whether to deploy AI or to expand its deployment into new areas. 

How can IQ-EQ help?

IQ-EQ’s expert regulatory compliance team works closely with firms in developing AI governance frameworks, providing specialist support that includes: 

  • Drafting policies and procedures for the use and control of AI 
  • Integrating AI risk monitoring into existing compliance programmes 
  • Providing guidance on AI-related regulatory developments in Asia 

IQ-EQ is the largest independent regulatory compliance firm in the Asia-Pacific region, with more than 100 regulatory compliance specialists across our teams in Hong Kong and Singapore. 

To find out more or to discuss the guidance shared in this article, get in touch with us today.  

