AI researcher says law enforcement tech vendors are reluctant to be transparent

Artificial intelligence (AI) researcher Sandra Wachter says that while the House of Lords inquiry into police technology “was a big step in the right direction” and managed to highlight the biggest concerns about police AI and algorithms, the conflict of interest between criminal justice agencies and their suppliers could still hold back meaningful change.

Wachter, invited as an expert witness to the study, is an associate professor and senior research fellow at the Oxford Internet Institute specializing in the law and ethics of AI.

Speaking to Computer Weekly, Wachter said she hopes at least some of the recommendations will be included in legislation, but is concerned about the impact of AI vendors’ hostility to transparency and openness.

“I’m concerned mainly from an intellectual property and trade secret perspective,” she said. “There is an unwillingness or reluctance in the private sector to be fully open about what is actually going on, for a variety of reasons, and I think this could be a barrier to implementing the inquiry’s recommendations.”

After a 10-month investigation into the UK police’s use of advanced algorithmic technology, including facial recognition and various crime “prediction” tools, the Lords Home Affairs and Justice Committee (HAJC) found that there was “a lot of enthusiasm” about the use of AI systems by executives, but “we have not seen a corresponding commitment to a thorough evaluation of their effectiveness”.

The HAJC also found a number of “shady selling practices” stemming from a conflict of interest between police forces, which are required under the Public Sector Equality Duty (PSED) to investigate how their policies and practices might be discriminatory, and private sector suppliers, which are reluctant to share information about their systems.

To address problems related to procurement from private suppliers, the HAJC recommended giving police buyers additional support to become “competent customers” of new technology and establishing a national body to certify new technology.

“Pre-deployment certification could in itself convince them of the quality of the products they source. Improved procurement policies are also needed,” the committee said, adding that local and regional ethics committees should also be established by law to examine whether proposed and actual uses of a particular technology are “legitimate, necessary and proportionate.”

It also noted that while there are currently “no systemic obligations” for law enforcement agencies to disclose information about their use of advanced technology, a “public disclosure obligation” should be put in place alongside a public register of policing algorithms, so that regulators and the general public can understand exactly how new tools are being used.

Promote openness and meaningful transparency

Wachter – who told the HAJC in October 2021 that UK law enforcement bodies procuring AI technologies should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about accuracy and bias – pointed out that the lack of visibility into suppliers’ systems is very unlikely to be a “technical issue of we can’t explain it” and more a case of “we’re not too keen to tell you”.

In August 2020, South Wales Police’s (SWP) use of live facial recognition technology was ruled unlawful by the Court of Appeal, partly because the force failed to comply with its PSED.

The ruling found that the manufacturer in this case – Japanese biometrics company NEC – had not shared details of its system with SWP, meaning the force could not fully appreciate the technology and its implications.

“For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested,” the ruling said. “That may be understandable, but in our view it does not enable a public authority to discharge its own non-delegable duty.”

Asked about SWP’s example, Wachter said she believes there is a middle ground. “Usually when people talk about transparency, they talk like one or zero — so either everything is transparent or nothing is transparent,” she said. “I think that’s a bit misguided — not everyone needs to know everything, but the right people need to know enough.”

Wachter said part of the problem is that police users buy into private vendors’ arguments that certain aspects of the technology simply cannot be disclosed or discussed.

To get around this, she said, it is about building trustworthiness and reliability, and she agreed with the HAJC on the need for a third-party certification system, much like an MOT for vehicles, where qualified and trusted experts analyse the technology before it is approved, to understand exactly how it works and to make sure it does not cause any harm.

Regarding how much information should be included in the proposed public registers of police algorithms, Wachter said that while there must always be open information about what technology is used by the police, she suggested going further and pressing companies to release their test results for the technology.

“The general public has a right to know what their taxpayers’ money is being spent on,” she said. “And if it’s to deter people, to send them to jail, to monitor them, then I have a right to know that this technology is working as intended.”

Wachter’s own peer-reviewed academic work revolves around how AI systems can be tested for bias, fairness and compliance with equality law standards in both the UK and the European Union (EU).

The method developed by Wachter and her colleagues – called “counterfactual explanations” – shows why and how a decision was made (for example, why a person was sent to prison) and what would have to be different to produce a different result, which can be a useful basis for challenging decisions. All of this happens without infringing companies’ intellectual property rights.
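
As a rough illustration of the general idea, the sketch below searches for the smallest change to an individual’s input features that would flip a classifier’s decision. It is a minimal, hypothetical example in Python: the synthetic data, the logistic regression model and the greedy search strategy are assumptions for demonstration, not Wachter’s actual method or any system used in policing.

# Minimal, illustrative counterfactual-explanation sketch.
# The data, model and search strategy below are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: two features and a binary outcome standing in
# for a "high risk" decision.
X = rng.normal(size=(500, 2))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, target=0, step=0.05, max_iter=2000):
    """Greedy search for a small change to x that flips the model's decision.

    At each step, nudge the single feature that moves the predicted
    probability most towards the target class.
    """
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf  # decision flipped: this is the counterfactual
        best, best_p = None, -np.inf
        for i in range(len(x_cf)):
            for delta in (step, -step):
                cand = x_cf.copy()
                cand[i] += delta
                p = model.predict_proba(cand.reshape(1, -1))[0][target]
                if p > best_p:
                    best, best_p = cand, p
        x_cf = best
    return None  # no counterfactual found within the search budget

x = np.array([1.2, -0.3])             # an individual classified "high risk"
cf = counterfactual(x, model, target=0)
print("original decision:", model.predict(x.reshape(1, -1))[0])
print("counterfactual input:", cf)    # "what would have to be different"
print("change needed:", cf - x)

The point of the output is the last line: it states, in the individual’s own feature space, what would have needed to differ for the decision to change, without exposing the model’s internal parameters.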

“If you do this test, we’re saying that you should publish the results to show the outside world that your algorithm is complying,” she said, adding that suppliers are required to be compliant regardless. “If your system is racist and you don’t know about it, it doesn’t matter – you will still be held liable. So the incentive structure is that you should test, test, test, because you can’t tell a regulator afterwards, ‘Oh, I didn’t know what was going on’ – if you have to do it anyway, you might as well publish it.”
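
The kind of test result Wachter describes could be as simple as a published disparity check on a held-out dataset. The sketch below is a minimal, hypothetical example of such a check, comparing false positive rates across a protected attribute; the toy data, metric choice and group labels are illustrative assumptions, not any supplier’s actual audit.

# Minimal sketch of a publishable bias test: false positive rate by group.
# The data and group labels are toy placeholders, not real figures.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of truly negative cases the model wrongly flags as positive."""
    negatives = y_true == 0
    if negatives.sum() == 0:
        return float("nan")
    return float(((y_pred == 1) & negatives).sum() / negatives.sum())

def fpr_gap(y_true, y_pred, group):
    """False positive rate per protected group and the gap between groups."""
    rates = {
        g: false_positive_rate(y_true[group == g], y_pred[group == g])
        for g in np.unique(group)
    }
    return rates, max(rates.values()) - min(rates.values())

# Toy data standing in for a published evaluation set.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates, gap = fpr_gap(y_true, y_pred, group)
print("false positive rate by group:", rates)
print("gap:", gap)  # a large gap is a result to report, not to hide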

Possible government resistance to change

Although the government has yet to formally respond to the inquiry’s findings – and has until 30 May 2022 to do so – policing minister Kit Malthouse has previously suggested to the HAJC that police use of new technologies should be tested in the courts rather than defined by new laws, which he said could “stifle innovation”.

This is consistent with previous government claims about police technology. For example, in response to a July 2019 Science and Technology Committee report calling for a moratorium on police use of live facial recognition technology until an adequate legal framework was in place, the government claimed in March 2021 – after a two-year delay – that there is “already a comprehensive legal framework for the management of biometrics, including facial recognition”.

But Wachter said that while Malthouse’s proposed approach might be acceptable in certain limited circumstances, such as “when we’re not sure if and when harm might actually occur”, in the case of tools like facial recognition and “predictive” policing analytics, the harm is already well documented.

“We know the data is problematic,” she said. “We know that the systems are problematic. No one can really pretend there is no problem.”

Wachter added that the vast majority of people in the UK simply don’t have the resources to challenge the police in court over their use of technology. “To say, ‘Well, let’s just try to see who comes and complains,’ that’s not what a legislature should do,” she said. “You should protect everyone because everyone’s freedom is at stake.”

Responding to the argument that laws “stifle innovation,” Wachter said, “It’s such a boring argument — most of the time, when people say innovation, they mean profit. Let’s not confuse these two things.

“Regardless of the laws, I can research and develop whatever I want. What gets held back is whether or not something is put into practice, and that is when we are talking about profit. I think that is what people really mean a lot of the time.”

She added: “Good law is designed to guide ethical and responsible innovation and seeks to discourage harmful innovation. I’m not sure if those who don’t want to follow ethics and these rules are the ones I want to do business with, especially in the criminal justice arena.”

Although the HAJC, in line with a number of the inquiry’s expert witnesses, concluded that those responsible for deploying police technology are essentially playing catch-up, without adequately considering the effectiveness and impact of systems, there is a clear commitment to procuring more technology from both Malthouse and the Strategic Review of Policing published in March 2022.

As to why UK police are so committed to adopting new technology, despite its often questionable effectiveness, Wachter said: “I think they are very much driven by the idea of austerity and trying to cut costs. The public sector has always been under pressure to cut costs, but right now that pressure is massive, and new technologies are seen as a means to achieve this.”
