UK officials use AI to decide on issues from benefits to marriage licences

Civil servants and a handful of police forces are using AI in a range of areas, but especially when it comes to helping them make decisions over welfare, immigration and criminal justice

UK government officials are using artificial intelligence (AI) and complex algorithms to help decide everything from who gets benefits to who should have their marriage licence approved, according to a Guardian investigation.

The findings shed light on the haphazard and often uncontrolled way that cutting-edge technology is being used across Whitehall.

Civil servants in at least eight Whitehall departments and a handful of police forces are using AI in a range of areas, but especially when it comes to helping them make decisions over welfare, immigration and criminal justice, the investigation shows.

The Guardian has uncovered evidence that some of the tools being used have the potential to produce discriminatory results, such as:

• An algorithm used by the Department for Work and Pensions (DWP) which an MP believes mistakenly led to dozens of people having their benefits removed.

• A facial recognition tool used by the Metropolitan police which has been found to make more mistakes recognising black faces than white ones under certain settings.

• An algorithm used by the Home Office to flag up sham marriages which disproportionately selects people of certain nationalities.

Artificial intelligence is typically “trained” on a large dataset and then analyses new data in ways that even those who developed the tools sometimes do not fully understand.

If the data shows evidence of discrimination, experts warn, the AI tool is likely to lead to discriminatory outcomes as well.
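
To make that mechanism concrete, below is a minimal, hypothetical sketch in Python. The data is invented and the “model” is deliberately simplistic; it is not based on any department’s actual system. It shows how a tool trained on historically skewed decisions reproduces the skew when applied to new cases:

```python
# Hypothetical sketch: a tool "trained" on historically biased decisions
# reproduces the bias. All data here is invented for illustration.
import random

random.seed(0)

def make_history(n=10_000):
    """Synthetic past decisions: identical behaviour in both groups,
    but caseworkers historically flagged group B twice as often."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        flag_rate = 0.1 if group == "A" else 0.2  # the historical bias
        rows.append((group, random.random() < flag_rate))
    return rows

history = make_history()

# "Training" here just means learning the empirical flag rate per group.
# Real systems are far more complex, but the statistical effect is the
# same: whatever pattern is in the data ends up in the model.
learned_rates = {
    g: sum(f for grp, f in history if grp == g) /
       sum(1 for grp, _ in history if grp == g)
    for g in ("A", "B")
}
print(learned_rates)  # roughly {'A': 0.10, 'B': 0.20}
# Applied to new, identical cases, the model now flags group B twice as
# often: the discrimination in the data has become discrimination in the tool.
```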

Rishi Sunak recently spoke in glowing terms about how AI could transform public services, “from saving teachers hundreds of hours of time spent lesson planning to helping NHS patients get quicker diagnoses and more accurate tests”.

But its use in the public sector has already proved controversial. In the Netherlands, tax authorities used an algorithm to spot potential childcare benefits fraud; they were fined €3.7m after repeatedly getting decisions wrong and plunging tens of thousands of families into poverty.

Experts worry about a repeat of that scandal in the UK, warning that British officials are using poorly understood algorithms to make life-changing decisions without the people affected even knowing about it. Many are also concerned about the abolition earlier this year of an independent government advisory board that held public sector bodies accountable for how they used AI.

Shameem Ahmad, the chief executive of the Public Law Project, said: “AI comes with tremendous potential for social good. For instance, we can make things more efficient. But we cannot ignore the serious risks.

“Without urgent action, we could sleep-walk into a situation where opaque automated systems are regularly, possibly unlawfully, used in life-altering ways, and where people will not be able to seek redress when those processes go wrong.”

Marion Oswald, a professor in law at Northumbria University and a former member of the government’s advisory board on data ethics, said: “There is a lack of consistency and transparency in the way that AI is being used in the public sector. A lot of these tools will affect many people in their everyday lives, for example those who claim benefits, but people don’t understand why they are being used and don’t have the opportunity to challenge them.”

Sunak will gather heads of state next week at Bletchley Park for an international summit on AI safety. The summit, which Downing Street hopes will set the terms for AI development around the world for years to come, will focus specifically on the potential threat posed to all of humanity by advanced AI models.

For years, however, civil servants have been relying on less sophisticated algorithmic tools to help make a range of decisions about people’s daily lives.

In some cases, the tools are simple and transparent, such as electronic passport gates or number plate recognition cameras, both of which use visual recognition software powered by AI.

In other cases, however, the software is more powerful and less obvious to those who are affected by it.

The Cabinet Office recently launched an “algorithmic transparency reporting standard”, which encourages departments and police authorities to voluntarily disclose where they use AI to help make decisions which could have a material impact on the general public.

Six organisations have listed projects under the new transparency standard.

The Guardian examined those projects, as well as a separate database compiled by the Public Law Project, and then issued freedom of information requests to every government department and police force in the UK to build a fuller picture of where AI is being used to help make decisions that affect people’s lives.

The results show that at least eight Whitehall departments use AI in one way or another, some far more heavily than others.

The NHS has used AI in a number of contexts, including during the Covid pandemic, when officials used it to help identify at-risk patients who should be advised to shield.

The Home Office said it used AI for e-gates to read passports at airports, to help with the submission of passport applications and in the department’s “sham marriage triage tool”, which flags potential fake marriages for further investigation.

An internal Home Office evaluation seen by the Guardian shows the tool disproportionately flags up people from Albania, Greece, Romania and Bulgaria.
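
The Guardian has not seen the methodology of that evaluation, but a disparity check of this kind typically compares each group’s flag rate with the overall baseline. Below is a minimal sketch, using invented data and generic group labels rather than real nationalities or figures:

```python
# Hypothetical sketch of a disproportionality check on a triage tool's
# output. The records below are invented; a real evaluation would also
# control for case volume and legitimate risk factors.
from collections import Counter

# (nationality, was_flagged) pairs produced by the tool.
cases = [
    ("nationality_a", True), ("nationality_a", True), ("nationality_a", False),
    ("nationality_b", True), ("nationality_b", False), ("nationality_b", False),
    ("nationality_c", False), ("nationality_c", False), ("nationality_c", False),
]

totals, flagged = Counter(), Counter()
for nationality, was_flagged in cases:
    totals[nationality] += 1
    flagged[nationality] += was_flagged  # True counts as 1

overall = sum(flagged.values()) / sum(totals.values())
for nationality in totals:
    rate = flagged[nationality] / totals[nationality]
    # A flag rate far above the overall baseline is the kind of signal
    # that would prompt questions about disproportionate selection.
    print(f"{nationality}: {rate:.0%} flagged (overall baseline {overall:.0%})")
```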

The DWP, meanwhile, has an “integrated risk and intelligence service”, which uses an algorithm to help detect fraud and error among benefits claimants. The Labour MP Kate Osamor believes the use of this algorithm may have contributed to dozens of Bulgarians suddenly having their benefits suspended in recent years after they were falsely flagged as making potentially fraudulent claims.

The DWP insists the algorithm does not take nationality into account. A spokesperson added: “We are cracking down on those who try to exploit the system and shamelessly steal from those most in need as we continue our drive to save the taxpayer £1.3bn next year.”

Neither the DWP nor the Home Office would give details of how the automated processes work, but both have said the processes they use are fair because the final decisions are made by people. Many experts worry, however, that biased algorithms will lead to biased final decisions, because officials can only review the cases flagged to them and often have limited time to do so.
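
The DWP’s point that nationality is not an input does not by itself rule out bias, because other inputs can act as proxies for it. Here is a minimal, hypothetical illustration, with invented data and an invented “risky channel” feature, of how a model’s flags can still fall disproportionately on one group:

```python
# Hypothetical sketch of "proxy" bias: nationality is never an input to the
# model, yet its flags land disproportionately on one group because another
# feature correlates with nationality. All data is invented.
import random

random.seed(1)

def make_case():
    nationality = random.choice(["X", "Y"])
    # Invented proxy feature: group Y's claims far more often arrive via a
    # channel the model treats as risky (a postcode, document type, etc.).
    risky_channel = random.random() < (0.8 if nationality == "Y" else 0.1)
    return nationality, risky_channel

def model_flags(risky_channel):
    # The model's rule uses only the channel; nationality is not an input.
    return risky_channel

counts = {"X": [0, 0], "Y": [0, 0]}  # [flagged, total] per group
for _ in range(10_000):
    nationality, risky_channel = make_case()
    counts[nationality][1] += 1
    counts[nationality][0] += model_flags(risky_channel)

for group, (n_flagged, total) in counts.items():
    print(f"group {group}: {n_flagged / total:.0%} flagged")
# Group Y ends up flagged roughly eight times as often, even though
# nationality never enters the model.
```

This is also why a human review stage is a weak safeguard on its own: reviewers see only the cases the model puts in front of them, so any skew in the flags carries straight through to the final decisions.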