Lords: Police use of AI must not undermine human rights and rule of law
The proliferation of artificial intelligence tools used in the justice system without proper oversight, particularly by the police, has serious implications for human rights and civil liberties, according to the House of Lords Justice and Home Affairs Committee.
In its report Technology rules? The advent of new technology in the justice system, published today, the committee notes that these technologies are developing at pace, largely unseen by the public. Without sufficient safeguards, supervision, and caution, advanced technologies used in the justice system in England and Wales could undermine a range of human rights, risk the fairness of trials and damage the rule of law.
Facial recognition is the best known, but other technologies are in use, and more are being introduced. Development is moving fast, and controls have not kept up. The committee acknowledges the benefits: preventing crime, increasing efficiency, and generating new insights that feed into the criminal justice system.
However, it is concerning that there is no mandatory training for the users of AI technologies, such as facial recognition, particularly given their potential impact on people’s lives. Meanwhile, users can be deferential rather than critical. The committee is clear that ultimately decisions should always be made by humans.
There are risks of exacerbating discrimination. The report highlights serious concerns about the dangers of human bias contained in original data being reflected, and further embedded, in algorithmic outcomes. The committee heard about dubious selling practices and claims about products’ effectiveness that are often untested and unproven.
The committee calls for the establishment of a mandatory register of algorithms used in relevant tools. Without a register it is virtually impossible to find out where and how specific algorithms are used, or for Parliament, the media, academia, and, importantly, those subject to their use, to scrutinise and challenge them.
The report highlights that most public bodies lack the expertise and resources to carry out evaluations, and procurement guidelines do not address their needs. It recommends that a national body should be established to set strict standards of scientific validity and quality and to certify new technological solutions against those standards. No tool should be introduced without receiving certification first, allowing police forces to procure the technological solutions of their choice among those ‘kitemarked’.
With more than 30 public bodies, initiatives, and programmes playing a role in the governance of new technologies in the application of the law, it is not possible to work out who is responsible for what. The committee notes that the system needs urgent streamlining, and that reforms to governance should be supported by a strong legal framework. Without coordination between government departments, roles are unclear, functions overlap, joint working is patchy, and it is impossible to identify where ultimate responsibility lies.
The committee also calls for a duty of candour on the police so that there is full transparency. AI can have huge impacts on people’s lives, particularly those in marginalised communities. Without transparency, there can be no scrutiny and no accountability when things go wrong.
Baroness Hamwee, chair of the Justice and Home Affairs Committee, said: “What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?
“Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.
“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers.
“Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.
“We welcome the advantages AI can bring to our justice system, but not without adequate oversight. Humans must be the ultimate decision makers, knowing how to question the tools they are using and how to challenge their outcomes.”