
AI and policing: what a Queensland case study tells us

Posted on May 13, 2022 @ 06:00

Policing agencies consider artificial intelligence a force multiplier because it can rapidly process more data than human brains and yield insights to help solve complex analytical problems.

Our limited [1] understanding of how AI algorithms make decisions and produce their insights, however, presents a significant challenge to ethically and safely implementing AI policing solutions. The use of AI by Queensland police provides a valuable opportunity to study how we can mitigate the possible negative ethical and operational effects of this problem.

The state is trialling AI as a risk-assessment tool to predict and prevent crime, in this case [2] domestic violence. The tool screens data from police records [3] to identify ‘high risk of high harm’ repeat offenders. Police then routinely ‘knock on doors’ pre-emptively to deter escalation to violence and, theoretically, lessen the likelihood of perpetrators reoffending.

Police say perpetrators have proved more likely to recognise police intervention outside a ‘point of crisis’ (a domestic violence incident), and they believe this provides a ‘turning point opportunity’ for habitual offenders to deviate from a trajectory of repeated offending.

However, door-knocking for deterrence can have serious negative impacts, including the possibility of triggering further conflict within families experiencing repeated violence. Any such antagonising effect of this ‘precrime’ strategy would have to be mitigated for the process to be ethical under the federal framework [4], and for it to be effective in reducing domestic violence.

I raised this issue with the Queensland Police Service. They said the trial had demonstrably not driven further violence, and cited a 56% reduction in incidents in one cohort of high-risk, high-harm offenders with a possible victim cohort of 1,156 people.

These statistics are compelling evidence that the program could reduce offending by such perpetrators. It could also reduce deaths. The police say that 30% of domestic violence homicides in the state are carried out by offenders already known to them for domestic violence, and that known offenders are significantly more likely to die by suicide [5].

The Queensland Police Service’s aim is to prevent domestic violence, disrupt recidivist behaviour and ‘arrest no one’.

How did the police deal with the limitations and potential pitfalls of AI?

First, the police own the AI and developed it in house, substantially increasing its transparency.

This removed the barrier of commercial interests that prevents a company from sharing the details of a product’s development, a barrier that arose, for example, with the AI used by some US courts to make parole decisions [6]. Data scientists were employed to work closely with officers at all stages of the AI’s development and deployment.

Owning the supply chain gives police as comprehensive an understanding as possible, given the technical limitations, of the processes by which the AI is developed, trained and then deployed and monitored by in-house data scientists. This includes understanding what human biases may have been coded into the AI, what mitigation strategies have been used, what AI biases might develop through its operation on live datasets, and what should be guarded against through monitoring once the AI is deployed.

Police ownership also seems to have provided an opportunity for authentic policing knowledge and judgement to be included at the design stage, rather than as a retrofit after the proprietary development of the product.

Critically, police could ensure the AI was trained on state police data. That meant that, while it’s not possible to avoid coding human bias into AI, they could be certain that the bias was from their own historical data and therefore known and understood. Those training datasets serve as a historical resource from which police data scientists can glean information on the historical human bias of the police force and try to code safeguards against it into the AI.

Knowledge of the AI’s decisions is similarly increased by its being owned and developed in house, because the data scientists using and monitoring it and the police officers employing it have the same information about it as those who developed and trained it.

To be eligible for assessment by the AI, subjects must already be considered high risk and high harm through their previous interactions with police and have at least three domestic violence orders against them. This helps police know which of all the homes experiencing repeated family violence they should door-knock to deter further violence. Police don’t have the resources to door-knock all homes with a record of domestic violence.
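To make the eligibility step concrete, here is a minimal sketch of how such a pre-filter might look in code. It is purely illustrative: the field names and structure below are assumptions based on the description above, not the Queensland Police Service’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    """Hypothetical record drawn from police data; field names are illustrative only."""
    subject_id: str
    flagged_high_risk_high_harm: bool   # based on previous interactions with police
    domestic_violence_orders: int       # count of DVOs recorded against the subject

def eligible_for_ai_assessment(s: Subject) -> bool:
    """Pre-filter: only known high-risk, high-harm repeat offenders with at least
    three domestic violence orders are passed to the risk-assessment model."""
    return s.flagged_high_risk_high_harm and s.domestic_violence_orders >= 3

# Example: narrow a caseload down to the subjects the AI is allowed to score.
caseload = [
    Subject("A-001", flagged_high_risk_high_harm=True, domestic_violence_orders=4),
    Subject("A-002", flagged_high_risk_high_harm=True, domestic_violence_orders=1),
    Subject("A-003", flagged_high_risk_high_harm=False, domestic_violence_orders=5),
]
to_assess = [s for s in caseload if eligible_for_ai_assessment(s)]
print([s.subject_id for s in to_assess])  # -> ['A-001']
```

The point of the pre-filter is that the model never scores people outside a cohort police are already engaging with, which is the safeguard discussed below.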

It’s likely that using AI for high-risk criminal justice decisions will never be a good idea if we’re striving for safe, fair, ethical and reliable AI use. But it can provide valuable insights and context to inform policing decisions.

Identifying at-risk victims is not the focus, despite the overall goal being to prevent or reduce the rate and severity of violence endured by victims of habitual offenders.

So, the potential harm of over-policing subjects is arguably neutralised by the fact that police have already been interacting with them due to their repeated offending. This is not a risk-assessment of a general cross-section of a community, or even a cohort with a single record of violence.

But if AI decision-making were used in a higher-risk policing scenario, who would be accountable for incorrect decisions: police, computer scientists, policymakers or even the AI?

The eligibility criteria provide a key safeguard against the limits on transparency and explainability, because anyone incorrectly flagged for a door-knock is still a known, repeat perpetrator. If we accept the ethical and practical legitimacy of police pre-emptively door-knocking known offenders at all (in terms of the likelihood of reducing harm), then it can’t be argued that incorrectly door-knocking someone at a slightly lower but still significant risk of triggering violence counts as over-policing or a violation of their rights to privacy and equality. A net-benefit logic applies.

Problems remain, though. Significant technological development is required to design comprehensively transparent and explainable AI.

Computer scientists [7] tell us that it remains incredibly difficult, if it proves possible at all, to comprehensively understand how AI systems make decisions within live datasets as they develop more and more correlations that aren’t visible to those monitoring them, as in overfitting [8]. We need to keep trying, and to hold AI to equal, or higher, ethical standards than human decision-making.
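For readers unfamiliar with the term, the toy sketch below illustrates overfitting in general; it has nothing to do with the Queensland system. A model given too much flexibility effectively memorises noise in its training data, so its error on that data keeps falling while its error on new data grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a simple linear relationship plus noise.
x_train = np.linspace(0, 1, 15)
y_train = 2 * x_train + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test + rng.normal(scale=0.2, size=x_test.size)

def fit_and_score(degree: int) -> tuple[float, float]:
    """Fit a polynomial of the given degree and return (train error, test error)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# A degree-1 model generalises; a degree-12 model chases the noise,
# so its training error shrinks while its error on unseen data grows.
for degree in (1, 12):
    train_err, test_err = fit_and_score(degree)
    print(f"degree={degree:2d}  train MSE={train_err:.4f}  test MSE={test_err:.4f}")
```

The numbers themselves don’t matter; the pattern does. Performance on the data a model was trained on tells you little about how it will behave on new cases, which is why ongoing monitoring of a deployed system matters.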

As for the net benefit argument, using AI as a solution could obfuscate both the root causes of a problem and the possible alternative, non-technical solutions. Can we, for example, better support victims in the family court to prevent them living in a perpetually violent home?

AI solutions are here to stay. Appropriate regulation in law enforcement scenarios is imperative to mitigate their significant potential impacts on justice outcomes and civil liberties. If Australia wants to ensure AI is safe, secure and reliable [4], we need an ethical framework that is compulsory and legally enforceable, not voluntary and aspirational.




URLs in this post:

[1] limited: https://www.aspistrategist.ru/report/ai_policing_australia

[2] this case: https://www.theguardian.com/australia-news/2021/sep/14/queensland-police-to-trial-ai-tool-designed-to-predict-and-prevent-domestic-violence-incidents

[3] records: https://www.lawinsider.com/dictionary/qprime

[4] framework: https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles

[5] suicide: https://www.courts.qld.gov.au/__data/assets/pdf_file/0011/699230/domestic-and-family-violence-death-review-and-advisory-board-annual-report-2020-21.pdf

[6] US courts: https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses

[7] scientists: https://dl.acm.org/doi/10.1145/3313107

[8] overfitting: https://www.ibm.com/cloud/learn/overfitting
