The Strategist - https://aspistrategist.ru
Artificial intelligence and policing in Australia
Posted By Teagan Westendorf on April 7, 2022 @ 06:00
Digital technologies, devices and the internet are producing huge amounts of data, and ever-greater capacity to store it, and those trends are likely to accelerate. For law enforcement, a critical capability lagging behind the pace of innovation is the capacity to screen, analyse and draw insights from that ever-increasing volume of data, and to do so within the constraints that a democratic system places on access to and use of personal information.
Artificial intelligence and machine learning offer valuable tools to the public and private sectors for screening big and live data. AI is also commonly considered, and marketed, as a solution that removes human bias [1], although AI algorithms and the datasets they’re built on can also perpetuate human bias, so they aren’t value- or error-free.
In light of the many and varied solutions AI offers, the Australian government is building the necessary policy and regulatory frameworks to pursue the goal of positioning Australia as a ‘global leader in AI [2]’. Recent initiatives include an AI action plan [3] launched in 2021 as part of the digital economy strategy [4], the CSIRO’s 2019 AI roadmap [5], and the voluntary artificial intelligence ethics framework, which includes eight principles necessary for AI to be safe and democratically legitimate. In addition, more than $100 million [2] in investment has been pledged to develop the expertise and capabilities of an Australian AI workforce and to establish private–public partnerships to develop AI solutions to national challenges.
AI is broadly conceptualised by the federal government and many private companies as an exciting technological solution that will ‘strengthen the economy and improve the quality of life of all Australians [2]’ by inevitably ‘reshap[ing] virtually every industry, profession and life [2]’. There’s some truth there, but how that reshaping occurs depends on choices. For policing, those choices include how data and insights are used, and whether direct human judgements and relationships are informed by those technologies rather than discounted and disempowered.
In a new ASPI report [6], released today, I explore some of the limitations on the use of AI in policing and law enforcement scenarios, possible strategies to mitigate the potential negative effects of AI data insights and decision-making in the justice system, and implications for regulation of AI use by police and law enforcement in Australia.
It is problematic, to say the least, that the ethics framework [7] designed to ‘ensure AI is safe, secure and reliable [7]’ is entirely voluntary for both the public and private sectors. Nor is it backed by any actual laws specifically addressing the use of AI or other emerging technologies, along the lines of the European Union’s General Data Protection Regulation [8].
For policing agencies, AI is considered a force-multiplier not only because it can process more data than human brains can conceivably do within required time frames, but also because it can yield insights to complement the efforts of human teams to solve complex analytical problems.
Many types of AI, serving many purposes, are under consideration, in trial or in use across various areas of policing globally [9]. Examples include risk assessments of recidivism used to inform parole decisions [10] or to prompt pre-emptive, deterrent police visits [11] to offenders’ homes; public-safety video and image analysis, using facial recognition to identify people of interest or to decipher [12] low-quality images; and forensic DNA testing [12].
AI algorithms, or models, promise to process high volumes of data at speed while identifying patterns; to supercharge knowledge management while supposedly removing human bias from the process (we now know that AI can in fact learn and act on human bias [13]); and to operate with ethical principles coded into their decision-making.
This promise, however, is not a guarantee.
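To make the bias problem concrete, here’s a minimal sketch in Python on entirely hypothetical synthetic data (every variable name and number is invented for illustration). It shows how a model trained on historically skewed policing records can reproduce that skew through a proxy feature, even when the protected attribute itself is excluded from the inputs.

```python
# Minimal sketch, hypothetical synthetic data: a model trained on
# historically biased records reproduces that bias via a proxy feature,
# even though 'group' is never given to the model as an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0/1: a protected attribute
risk = rng.normal(0, 1, n)           # true underlying risk, same distribution for both groups

# Historical labels reflect over-policing of group 1, not true risk alone.
arrested = (risk + 0.8 * group + rng.normal(0, 1, n)) > 0.5

# Postcode correlates with group, acting as a proxy feature.
postcode = group + rng.normal(0, 0.3, n)

X = np.column_stack([risk, postcode])   # note: 'group' itself is excluded
model = LogisticRegression().fit(X, arrested)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted 'high risk' rate = {pred[group == g].mean():.2f}")
# The gap between the two printed rates shows the historical bias has been
# learned through the proxy, despite the protected attribute being omitted.
```

Nothing in this toy model was coded to discriminate; the bias arrives entirely through the training data, which is exactly why ‘we removed the sensitive attribute’ is not, by itself, a guarantee of fairness.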
Holding AI to the ‘safe, secure and reliable’ standard of the Australian ethics framework requires the ability to understand comprehensively how an algorithm makes its decisions, and how ethical decision frameworks are coded into the algorithm and into its development and training on historical and live datasets.
In fact, there are significant technical [14] and implementation barriers to achieving those aims; one example is the market disincentive [15] against sharing the finer details of how a proprietary AI product works.
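Post-hoc ‘explanation’ techniques illustrate the technical barrier. The sketch below, again on hypothetical synthetic data with invented feature names, uses permutation importance to rank a classifier’s inputs. It reveals which features correlate with predictions, but it does not show how the model reasons, and it cannot show whether any ethical constraint was coded in.

```python
# Sketch of a post-hoc 'explanation' on hypothetical data: permutation
# importance ranks inputs by predictive influence, but it describes
# correlations in one dataset, not the model's reasoning.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 3))      # hypothetical: [prior_offences, age, postcode]
y = (X @ np.array([1.0, -0.2, 0.9]) + rng.normal(0, 1, 5_000)) > 0

model = LogisticRegression().fit(X, y)
scores = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["prior_offences", "age", "postcode"], scores.importances_mean):
    print(f"{name}: importance = {imp:.3f}")
# A large score for 'postcode' would hint at proxy discrimination, but only
# if an analyst knows to ask; the number itself doesn't flag unfairness.
```

Tools like this are useful, but they fall well short of the comprehensive understanding the ethics framework’s standard implies, particularly when the model’s internals are withheld as trade secrets.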
It’s broadly understood that human bias can compromise both police outcomes (reducing and preventing crime, successfully prosecuting perpetrators, and securing justice for victims) and the trust from communities [16] that enables effective policing. That can make AI seem like a solution; however, if it’s adopted without knowledge of its limitations and potential errors, it risks creating new and compounding problems for police.
While researchers are fond of analysing ‘human bias’ in systems, the humanity of individual officers also matters greatly to how they do their work and engage with their communities. It’s a strength of the community policing function, not something to be edited out by technology, no matter how powerful the tools or large the datasets. This insight can help shape how policing works with AI and other new technologies, and how human analysts can stop coded human bias from running unchecked in AI systems, as the audit sketch below illustrates.
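One concrete form that human check can take is a simple error-rate audit across groups. This is a minimal sketch on hypothetical data (the predictions, ground truth and group labels are all invented for illustration):

```python
# Minimal fairness-audit sketch, hypothetical data: a human analyst checks
# whether a model's false positive rate differs across groups -- one
# concrete way to stop coded bias from running unchecked.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, 1_000)                  # hypothetical group labels
truth = rng.integers(0, 2, 1_000).astype(bool)     # hypothetical ground truth
# Hypothetical model predictions: group 1 suffers more errors by construction.
pred = truth ^ (rng.random(1_000) < (0.10 + 0.15 * group))

for g in (0, 1):
    mask = (group == g) & ~truth                   # innocent members of group g
    fpr = pred[mask].mean()                        # how often they're flagged anyway
    print(f"group {g}: false positive rate = {fpr:.2f}")
# A persistent gap in false positives is the kind of harm a voluntary
# ethics principle names but an audit like this actually measures.
```

An audit this simple doesn’t fix a biased system, but it turns an abstract ethics principle into a number an analyst, a supervisor or a regulator can act on.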
We can be certain that AI is here to stay. Appropriate regulation of its use in law enforcement scenarios is imperative to mitigate the significant potential impacts on justice outcomes and civil liberties. If Australia wants to ensure AI is safe, secure and reliable [7], we need at the very least an ethics framework that is compulsory and legally enforceable.
Article printed from The Strategist: https://aspistrategist.ru
URL to article: /artificial-intelligence-and-policing-in-australia/
URLs in this post:
[1] solution that removes human bias: https://www.ojp.gov/pdffiles1/nij/252038.pdf
[2] global leader in AI: https://www.minister.industry.gov.au/ministers/porter/media-releases/action-plan-positions-australia-be-global-leader-artificial-intelligence
[3] AI action plan: https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-action-plan
[4] digital economy strategy: https://digitaleconomy.pmc.gov.au/sites/default/files/2022-02/digital-economy-strategy.pdf
[5] AI roadmap: https://www.csiro.au/en/research/technology-space/ai/artificial-intelligence-roadmap
[6] new ASPI report: https://www.aspistrategist.ru/report/ai_policing_australia
[7] ethics framework: https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
[8] General Data Protection Regulation: https://gdpr.eu/
[9] globally: https://policinginsight.com/subject/artificial-intelligence/
[10] parole decisions: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[11] deterrent police visits: https://www.theguardian.com/australia-news/2021/sep/14/queensland-police-to-trial-ai-tool-designed-to-predict-and-prevent-domestic-violence-incidents
[12] decipher: https://www.police.nsw.gov.au/crime/terrorism/terrorism_categories/facial_recognition
[13] learn and act on human bias: https://humanrights.gov.au/our-work/rights-and-freedoms/publications/using-artificial-intelligence-make-decisions-addressing
[14] technical: https://dl.acm.org/doi/10.1145/3313107
[15] market disincentive: https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
[16] trust from communities: https://www.tandfonline.com/doi/full/10.1080/10439463.2020.1726345