UK People Reward and Mobility Hub

The latest updates in employment, benefits, pensions and immigration


The UK’s new AI and biometrics strategy

By Emily Russell and Elouisa Crichton
August 5, 2025
  • Data protection
  • Diversity, equality and inclusion
  • Privacy
  • Recruitment

At the start of the summer, the UK Information Commissioner unveiled a new AI and biometrics strategy designed to balance innovation with accountability. The strategy outlines how the Information Commissioner’s Office (ICO) plans to guide the development and deployment of artificial intelligence and biometric technologies, particularly in high-stakes areas such as recruitment and public services.

The ICO wants to support innovation while ensuring the public is protected. With AI now embedded across industries, including critical sectors like recruitment, public services and law enforcement, the ICO’s strategy emphasises transparency, fairness and the responsible use of data.

Building guardrails for AI and automated decision-making

A major element of the new strategy is the development of a statutory Code of Practice for organisations using AI and automated decision-making (ADM) systems. This Code will set clear expectations for lawful and ethical AI use. Whether it is a recruitment platform screening CVs or a public body using facial recognition, the ICO intends to set firm boundaries to ensure fairness and protect individual rights. Recent legislation will remove some of the restrictions on the use of ADM, provided organisations put in place certain safeguards. You can read more about these changes in our recent insight here.

The ICO’s strategy reflects growing public concern over how AI-powered decisions are made, especially when people’s futures, such as job opportunities, are on the line. Transparency, human oversight and clarity on how personal data is used are at the core of these efforts.

The strategy also sets out that the ICO will work with developers to ensure that personal information is being used in a lawful and responsible manner when training AI models, and that ADM systems are governed and managed in a way that is fair to people.

Recruitment

ADM tools are becoming increasingly prevalent in recruitment. From CV screening to initial video interviews, AI is now shaping hiring outcomes, often without applicants fully understanding how decisions are made.

To better understand public perceptions, the ICO commissioned qualitative research with 33 job seekers from diverse backgrounds. The findings showed that while candidates were aware that AI is used in recruitment, most felt in the dark about when and how employers were using it.

Many participants reported receiving near-identical rejection emails or unusually quick decisions, suspecting ADM tools were in play. However, very few had ever been explicitly informed that AI was involved, highlighting a major gap in transparency.

Public expectations

The research revealed key areas of concern and expectations:

  • Transparency is paramount: People want to know when organisations are using ADM, how it works and what data it processes.
  • Human oversight is essential: Participants were open to AI helping with initial filtering, but strongly opposed fully automated final decisions. They want a human in the loop to ensure fairness and address bias.
  • Concerns about bias must be addressed: There is widespread concern that ADM could perpetuate or exacerbate existing social inequalities. Fair and non-discriminatory systems are seen as baseline requirements.
  • Candidate experience matters: Impersonal processes and lack of feedback damage trust in ADM systems. Applicants want clarity, empathy and communication.
  • ADM’s appropriateness is contextual: Views were mixed on whether ADM is appropriate at different stages of the recruitment process, with particular concern about fully automated systems.

What’s next?

The ICO has committed to focusing its regulatory efforts where risks to individuals are highest and where public concern is most acute, with recruitment being a prime example. Over the next year, the ICO will consult on updates to its guidance on ADM and profiling, and begin drafting a formal statutory Code of Practice.

It will also continue engaging with developers of generative AI models to ensure data used in training respects privacy and legal obligations. In parallel, it will collaborate with law enforcement to ensure that facial recognition and biometric tools are used in ways that are proportionate and justifiable.

How should employers respond to the ICO’s new strategy?

In the meantime, organisations should take proactive steps to ensure compliance:

  • Review and update (or create) internal policies on the use of AI and ADM in recruitment and HR processes. Ensure policies align with the ICO’s emphasis on transparency, fairness and human oversight.
  • Audit current practices against anticipated requirements of the forthcoming statutory Code of Practice. Look out for further consultations and guidance from the ICO.
  • Enhance transparency by clearly informing candidates and employees when AI or ADM tools are used, explaining how decisions are made and providing meaningful opportunities for human review.
  • Evaluate and mitigate risks of bias in AI-driven recruitment tools, ensuring that systems are regularly audited for fairness and signs of bias.
  • Engage with technology providers to ensure that any AI or biometric solutions deployed are compliant with data protection obligations and can demonstrate responsible data use.
  • Foster a culture of accountability by training HR teams and decision-makers on the ethical and legal implications of AI and ADM, and by establishing robust governance frameworks.

Taking these steps will also help strengthen candidate trust and protect your organisation’s reputation in an increasingly automated world.


About Emily Russell

Emily is an associate in Dentons' People Reward and Mobility team in London, specialising in UK employment law. Emily supports businesses on a broad range of contentious and non-contentious employment-related matters.



© 2025 Dentons