By Rachel Aldridge, Managing Director, Regulatory and Compliance Solutions, UK
Artificial intelligence is no longer a futuristic concept – it’s here, shaping how businesses operate and how decisions are made. But for compliance leaders, already juggling a complex web of regulations, risk frameworks and governance responsibilities, the question looms large: do they now need to become AI experts too?
This was the central theme of a recent roundtable at the BVCA Technical Policy Conference, where experts from across the private funds space shared candid views. The conversation revealed tension between regulatory expectations, practical realities and the evolving role of compliance in an AI-driven world.
The regulatory lens: outcomes over technology
Rather than introducing AI-specific rules, the UK’s Financial Conduct Authority (FCA) has taken a deliberately principles-based approach. It leans on existing frameworks – Consumer Duty, SM&CR and operational resilience – arguing that these are sufficient. The FCA regulates outcomes, not technologies. In other words, firms must manage risks and deliver fair consumer outcomes, regardless of whether they use spreadsheets or sophisticated AI models.
Contrast this with the EU’s AI Act, which takes a prescriptive approach, classifying AI systems by risk. The FCA’s stance feels pragmatic, but it also signals something important: regulators expect firms to harness technology responsibly, without compromising governance.
The FCA has even acknowledged AI’s potential benefits: enhanced fraud detection, stronger AML controls and operational efficiency. These are areas where compliance and AI naturally intersect.
What’s happening on the ground?
So, what does AI look like in compliance today? The reality is far less dramatic than the headlines suggest. Most use cases are practical, low-risk and focused on efficiency. Think summarising lengthy consultation papers, suggesting structures for analyses, or reviewing policies for gaps. These are applications that save time and free up analytical capacity.
Some firms are experimenting further – semi-automating Annex IV reporting, simulating Q&A for junior training, and even building custom “agents” trained on trusted sources. These innovations often start with curiosity from one or two team members, rather than from top-down mandates.
The human factor
One lively debate centred on whether AI might erode critical thinking skills. Should juniors still slog through 200-page papers “like we had to”? Opinions varied, but most agreed that Gen Z employees bring strong AI instincts and can help teams adopt technology smartly. The challenge is striking a balance – leveraging AI without losing the analytical rigour that defines good compliance.
Risks and realities
Of course, AI isn’t risk-free. Confidentiality concerns, data bias, explainability gaps and the tendency to produce “median” answers all surfaced in discussion. The consensus? Human oversight remains non-negotiable. You wouldn’t send out a trainee’s work unchecked – why treat AI differently?
Where do we go from here?
The takeaway is clear: compliance leaders don’t need to become AI engineers. But they do need to understand how AI is being used across their firms. Why? Because effective compliance depends on knowing what front-office teams are doing. This is critical for risk management, reporting and governance.
AI isn’t replacing compliance; it’s reshaping it. The real opportunity lies in using AI thoughtfully: to streamline manual tasks, free up time for higher-value work, and strengthen oversight. Compliance leaders who embrace this mindset will not only keep pace with change, they’ll help shape the future of responsible AI in financial services.