Artificial Intelligence – governing the transformational journey in the funds sector

Asset Management & Investment Funds

This article is published in the forthcoming Finance Dublin Yearbook 2025.

Wed 07 May 2025

The increasing use and future potential of artificial intelligence (AI) within the financial services industry is widely acknowledged. However, as the Central Bank of Ireland (Central Bank) recently emphasised, “the transformational journey is really only just beginning.” This observation appears in the Central Bank’s February 2025 Regulatory & Supervisory Outlook Report (Report), which points out that while many financial services providers intend to use AI in the future, the application of AI systems within the funds sector has thus far been relatively limited.

As alternative investment fund managers (AIFMs), UCITS management companies (together, Fund Managers) and fund service providers explore the transformative potential of AI, they must also consider the applicability of existing and evolving regulatory and supervisory frameworks. These currently include the recently introduced EU AI Act, ESMA’s guidance on the use of AI by investment firms in the provision of retail investment services and, for Irish regulated entities, the Central Bank’s supervisory expectations.

Transformative potential through the lens of the regulators

Earlier this year, ESMA published a Trends, Risks and Vulnerabilities (TRV) risk report on the use of AI in EU investment funds, which provides valuable insights into how AI is being used by asset managers. The report examines the extent to which AI tools are reshaping the investment process, highlighting survey-based evidence that asset managers increasingly use generative AI (GenAI) and large language models (LLMs), primarily to support human-driven investment decisions and to enhance productivity in activities such as risk management, compliance, and administrative tasks. While ESMA’s analysis supports existing evidence of growing interest in AI among asset managers, it also indicates that most funds using AI have yet to adopt a systematic investment approach that is fully entrusted to AI-based models. ESMA also observes that AI applications outside the investment process, such as marketing, client interactions, compliance, and other administrative and control tasks, could represent significant value creation opportunities.

The Central Bank’s focus on the transformative potential of AI is evident in its 2024 and 2025 Reports, underlining the topic’s significance for entities under its supervision. The 2025 Report provides an up-to-date supervisory perspective on AI, showing how it is changing the risk landscape for financial services firms. Despite the relatively limited use of AI within the funds sector to date, the Central Bank emphasises the importance of the alignment and oversight of such systems. It stresses the crucial role of human oversight, noting that the use of outputs from AI-based tools in fund management decision-making, such as stock selection or the application of the diversification rules under the UCITS Directive, can lead to unwanted bias and poor investment decisions, potentially harming both investors and firms.

More broadly, we continue to see the Central Bank engage with the financial services sector through its Innovation Hub to understand how AI is being used in practice, enabling it to provide regulatory advice and support for innovative projects. The Central Bank is assessing how current regulations and standards can be applied to the emerging use of AI. It recently engaged with a cross-section of Fund Managers via an industry questionnaire that broadly explores the uses, challenges, market activities, compliance/risk management and governance frameworks associated with the use of AI systems, GenAI, machine learning (ML) and algorithmic trading.

At EU level, on 30 March 2024 ESMA published guidance for investment firms using AI in the provision of investment services to retail clients. The guidance reinforces compliance with MiFID II obligations in the context of AI.

A risk-based approach to deployment of AI systems

Firms are reminded that they remain responsible for identifying and managing the various risks associated with AI systems. The Central Bank notes that many of the risks linked to AI are not new and are already addressed by existing regulations and standards. Therefore, good governance of AI systems in the funds sector needs to start there. Fund Managers and service providers must ensure and document that their use of specific types of AI is appropriate for the business challenges they aim to address, with transparent decision-making and clarity over who is responsible and accountable for the decision to use AI for a given process.

The Central Bank’s Report elaborates that risks can arise at various stages of an AI system’s lifecycle, including input risks (data provenance and quality, data privacy and biases), algorithm risks (inappropriate use of black-box AI and parameter selection), output risks (decisions leading to harm), and overarching risks (cyber resilience, operational resilience and governance). The Central Bank also identifies the risk that AI systems, particularly GenAI, are not appropriately aligned and governed. It notes that increased sophistication and opacity in AI systems could amplify other securities markets risks: for example, trading decision-making methodologies may not be fully understood, potentially heightening the risk of abusive practices.

In our view, in the context of algorithmic models or AI-powered investment strategies, Fund Managers should review the current use and risk assessment of algorithmic models, AI or ML technologies, and the controls in place to avoid inappropriate use of any black-box AI or incorrect parameter selection. Beyond this, key considerations include how the AI models have been developed, how data inputs are tested, and whether any risks are appropriately managed and disclosed to investors in the offering documents. Oversight of any AI-powered trading or strategy should confirm the proprietary rights to the intellectual property, establish how it is being used to inform (or potentially determine) investment decisions, and identify those accountable for its oversight. While none of this is new, the regulatory focus should prompt Fund Managers and boards to revisit these considerations.

Transparency and explainability of AI in investment strategies

ESMA’s recent TRV report concludes that AI in investment management is primarily used to augment existing strategies or research, informing rather than determining final investment decisions. While this may come as a relief to those in senior management responsible for governing AI transformation, ESMA indicates that EU managers are increasingly experimenting with AI, including GenAI, to stay competitive. As such, senior management should conduct an AI audit to understand clearly the extent of AI usage in the fund’s investment strategy and ensure it is readily understood by investors and not exaggerated in investor-facing offering documentation or fund marketing materials. Disclosures must be clear, fair and not misleading, and exaggeration (so-called “AI-washing”) must be avoided. Claims that fund performance is improved by AI usage should be supported by evidence, with equal weight and prominence given to the associated risks in marketing materials. According to ESMA’s report, whether AI will give funds an edge on performance remains inconclusive.

EU AI Act considerations

A significant EU regulatory development is the introduction of the EU AI Act, which entered into force on 1 August 2024, with most provisions coming into effect in August 2026, save for the AI literacy provision, which came into effect in February 2025. Most Irish Fund Managers are likely to be classified as “deployers” under the AI Act, using “AI systems” developed by providers. The AI Act defines an “AI system” as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

Fund Managers should identify whether they deploy AI systems and assess whether any of these systems fall within the “prohibited” categories (e.g. AI used for social scoring or emotion recognition) or the “high-risk” categories (e.g. AI used in employee management systems for recruitment or performance evaluation). The European Commission’s guidance on the definition of “AI system” under the AI Act explains that the vast majority of systems, even if they qualify as AI systems within the meaning of the Act, will not be subject to the regulatory requirements under the Act unless they fall within the “prohibited” or “high-risk” categories. Fund Managers should verify this as part of their deployment process and stay abreast of evolving regulatory guidance.

While Irish Fund Managers may find that the regulatory requirements under the AI Act do not currently apply to them, where they deploy or oversee AI systems we suggest that they should still be guided by the general ethical principles enshrined in the Act. These principles, which include transparency, accountability, privacy and fairness, can be used to document and inform their deployment and oversight of AI systems. Importantly, providers of AI systems, and Fund Managers as deployers, are now obliged to ensure that their staff have a sufficient level of AI literacy, even where the systems concerned have not been categorised as “high-risk”.

Good governance of the transformative AI journey 

At a wider organisational level, Fund Managers and their service providers are considering what AI systems are being deployed within their firms and for what specific operational purposes. They are assessing whether the existing processes and protections associated with data protection, cybersecurity, operational resilience, compliance and risk management, intellectual property rights and privacy already capture the risks associated with identified AI systems, or whether they need to be amended or refined. We have seen some Fund Managers opt to document their procurement and deployment of AI systems in a specific AI policy. While this is not a prescribed requirement, it has the benefit of setting out in one place their use of AI systems and demonstrating how they propose to comply with existing and evolving Irish and EU regulatory requirements on AI deployment, and to oversee deployment by their service providers.

As the use cases for AI in the funds sector evolve, the governance of such systems cannot be left playing catch-up.

To ensure compliance with their AI-related obligations, Fund Managers should take note of a key takeaway from the Central Bank’s Report: AI should not be employed merely because it can address a challenge; its appropriateness must be thoroughly evaluated. Decisions on AI use should be transparent, with clear lines of accountability. The funds sector, particularly those responsible for, or whose work will involve, the use of AI systems, needs to continue to become more AI literate. Fund Managers’ boards should draw comfort from the fact that they already have the governance tools and the existing regulatory framework to oversee this transformative journey, but must continue to keep pace with its evolution.

For more information, please contact Eimear Keane, Mark Ellis, Yvonne McGonigle or your usual ALG contact.
