Artificial intelligence (AI) is reshaping how businesses operate, and the pensions industry is no exception. From automated data cleansing to predictive analytics, AI is increasingly woven into the systems used by administrators, actuaries and investment managers. For trustees, this presents both opportunity and responsibility.
AI can streamline operations, enhance accuracy and even improve member engagement. Yet it also introduces new governance, contractual and data protection challenges. The task for trustees is not to master the technical details of AI, but to make sure the schemes they oversee manage these risks with the same discipline applied to any other aspect of governance.
A growing presence in pensions
AI is already at work across the pensions landscape. Administrators use it to clean and reconcile data, improving the reliability of member records. Actuaries and investment professionals are exploring predictive analytics to flag potential risks in funding levels or employer covenant strength. Some firms are deploying natural language processing tools to scan scheme documents for inconsistencies or compliance issues.
Even member engagement is changing. AI-driven chatbots and digital assistants can now answer queries, explain benefits and guide members through complex choices, often more efficiently than traditional communication channels.
The Pensions Regulator (TPR), in its Digital, Data and Technology Strategy, has acknowledged this shift. It recognises AI’s potential to strengthen data integrity, automate compliance and enhance member understanding. However, as TPR also notes, the benefits of innovation must be balanced by strong governance. The same tools that make administration easier can also magnify risks if left unchecked.
Where do the risks lie?
Data protection is the most immediate concern. AI systems “learn” from data and, in doing so, may process or infer sensitive personal information. Trustees are ultimately accountable as data controllers and must be sure that any use of AI by their service providers complies with the UK GDPR and the Data Protection Act 2018.
Beyond the data, the contractual landscape can be equally complex. Many service providers now use AI tools that they did not build themselves, licensing them from third parties. This creates longer chains of responsibility, meaning that if an AI-driven error were to produce incorrect benefit calculations, trustees could find themselves caught between their administrator and an unseen vendor.
There are also broader governance challenges. Dentons’ 2025 survey[1] of global businesses found that a majority had no formal AI governance strategy. Many pension suppliers are likely in the same position, as the technology is often adopted for efficiency gains before full oversight frameworks are in place.
What should trustees do?
Trustees do not need to become AI specialists, but they should ensure their governance processes reflect this new reality.
A good starting point is simply to ask questions: which of your advisers and service providers use AI, and for what purposes? Is the AI developed in-house or procured from elsewhere? The answers will help to determine where the risks sit and who carries them.
Next, contracts should be reviewed. Standard outsourcing agreements may not deal specifically with AI-related errors or liability, so trustees should check whether their providers have secured appropriate protections from any third-party AI developers. It may also be sensible to include a standing requirement for providers to notify trustees when they begin using new AI tools or significantly update existing ones.
Finally, trustees should consider how AI fits into their own risk framework. That may mean updating the scheme’s risk register, revisiting indemnity insurance cover or simply documenting how AI-related risks are being monitored. These are all familiar governance actions – the novelty lies in the technology, not in the principles of good management.
What comes next?
For many schemes, AI is already part of day-to-day operations, whether trustees are aware of it or not. As the technology becomes more embedded in administration, investment and communication, oversight must evolve accordingly. The goal is not to resist innovation, but to harness it safely. Trustees who take time to understand where AI is being used, ensure their contracts and controls are fit for purpose and keep the issue on their governance agenda will be well placed to do so.
Further guidance from TPR on AI and data use is expected within the next regulatory cycle. In the meantime, trustees would be wise to assume that AI is already part of their scheme’s ecosystem, whether directly or through their providers.
[1] Dentons’ 2025 Laws of AI Traction Report – available at: https://www.dentons.com/en/lawsofaitraction
Lucy Cleary is a Trainee in the Dentons Pension team, based in the London office – for any further queries in relation to the above, please contact lucy.cleary@dentons.com
