- The Strategist - https://aspistrategist.ru -
The need for responsible AI
Posted By Jason Signolet on March 19, 2024 @ 06:30
‘2023 marks 25 years of Google Search, and a quarter of a century of curiosity,’ said the tech giant in December. At the same time, Google launched its ‘Year in Search’, highlighting how search has been shaped by what matters to Australians and naming everything from the Optus outage and the war in Gaza to the new royal era of King Charles III and Queen Camilla among the stand-out search trends of 2023.
This raises the question of how we’re being influenced, which, of course, brings us to artificial intelligence.
For example, Google’s autocomplete function uses AI to predict what users are typing and to offer suggestions based on popular queries. While it’s intended to be helpful, reducing typing time by 25% [1], there are concerns about the neutrality of the AI deployed in this technology. It has been seen to produce offensive, inaccurate or misleading suggestions, such as ‘women should stay at home [2]’.
With technology and media so entrenched in our social systems today, the advancement and application of AI remain perpetual issues—particularly where regulation is in its infancy.
Enter responsible AI.
The term has become common currency in recent years as a way to counter fears and concerns about AI. However, the NSW Ombudsman [3] cautions that ‘responsible AI’ is a form of tech vendor spin, arguing that it risks obscuring the real question of who is actually responsible for AI development.
Notably, the European Union has made strides to counter this issue. It has recently finalised the AI Act, the world’s first comprehensive AI law, which will enforce binding rules on the development, deployment and use of both high-risk and general-purpose AI.
Because, when AI is used to support, or even replace, everyday processes and decision-making, there’s a lot at stake.
In 2018, well before ChatGPT made AI mainstream, the technology had already found its way into the US legal system: a longer prison sentence [4] was handed down on the basis of nothing more than an algorithm.
Since then, TikTok has been downloaded by 1.677 billion users, and its all-powerful, all-knowing recommendation algorithm has proved dangerous. It doesn’t take long for a user to move from a relatively tame comedy clip to a malicious one—whether it’s a radical view, extremist content or outright propaganda from a particular ideological fringe.
It’s a prime example of poorly incentivised algorithms, in which companies optimise their systems for their own benefit—in this case keeping users engaged by pushing content that triggers strong emotions—without consideration for social harms such as rampant polarisation.
This is relevant because in recent months Australia has seen a rise in anti-Semitic and Islamophobic [5] content on social media platforms.
While the federal government has jurisdiction over telecommunications, it does not yet regulate software [6] or how it’s deployed. Although Communications Minister Michelle Rowland has signalled updates to the Basic Online Safety Expectations, asking tech companies to ensure that their algorithms don’t amplify harmful or extreme content—covering racism, sexism and homophobia—AI regulation currently remains in the hands of industry.
Where there’s potential for misuse, developers and data scientists are responsible for preventing it through ethical AI design principles: explicit mitigation of the potential harms the technology can cause.
However, the basics of ethical AI development are often missed. AI development can follow different pathways, depending on the goals and methods of the developers. Every day, my work building trustworthy AI involves careful consideration of the pathways designed for safe use. The most obvious way to use AI should always be the safest.
There are some basic principles that AI developers should follow. First is bias mitigation: AI models should be designed with care to avoid unfair or discriminatory outcomes. Second is transparency and explainability, to avoid mysterious black-box situations in which the AI’s processes and outputs are indefensible. Lastly, accountability is crucial to ensure that, where errors are made (and they will be), there are means to correct them.
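To make the first of those principles concrete, here is a minimal sketch, in Python, of the kind of bias check a developer might run before deployment. The data, group labels and tolerance threshold are invented for illustration; real bias audits use richer fairness metrics and real cohorts.

from collections import defaultdict

def selection_rates(predictions, groups):
    # Positive-prediction (selection) rate for each demographic group.
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    # Largest difference in selection rates between any two groups.
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: eight yes/no model decisions across two groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"Selection-rate gap between groups: {gap:.2f}")
if gap > 0.2:  # tolerance chosen arbitrarily for this sketch
    print("Possible disparate impact: review training data and features.")

A check like this doesn’t make a model fair on its own, but it gives developers a measurable, auditable signal, which is exactly the kind of explicit mitigation described above.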
Even with all this in place, truly responsible AI also needs a protective framework against rogue actors who go out of their way to use the technology for financial or criminal gain. You can’t outrun an arms race, but you can beat the people involved—you just need to get into the minds of those intentionally using the technology to do harm.
The risk of individuals going out of their way to use the technology harmfully will always exist, but it’s up to those developing algorithms and training AI to make sure those pathways are as controlled as possible.
Article printed from The Strategist: https://aspistrategist.ru
URL to article: /the-need-for-responsible-ai/
URLs in this post:
[1] 25%: https://journal.media-culture.org.au/index.php/mcjournal/article/view/2852
[2] women should stay at home: https://medium.com/the-straight-dope/googles-racist-search-results-ac1391f65ce3
[3] NSW Ombudsman: https://www.themandarin.com.au/231024-responsible-ai-doesnt-exist-ombud-demands-human-culpability/
[4] longer prison sentence: https://www.weforum.org/agenda/2018/11/algorithms-court-criminals-jail-time-fair/
[5] rise in anti-Semitic and Islamophobic: /words-matter-in-times-of-global-upheaval/
[6] regulate software: /is-artificial-intelligence-about-to-be-regulated/
Copyright © 2024 The Strategist. All rights reserved.