
Editors’ picks for 2023: ‘Safety by design: protecting users, building trust and balancing rights in a generative AI world’

Posted on January 3, 2024 @ 06:00

Originally published on 1 November 2023.

In the grand tapestry of technological evolution, generative artificial intelligence has emerged as a vibrant and transformative thread. Its extraordinary benefits are undeniable; already it is revolutionising industries and reshaping the way we live, work and interact.

Yet as we stand on the cusp of this new era—one that will reshape the human experience—we must also confront the very real possibility that it could all unravel.

The duality of generative AI demands that we navigate its complexities with care, understanding and foresight. It is a world where machines—equipped with the power to understand and create—generate content, art and ideas with astonishing proficiency. And realism.

The story of generative AI thus far is one of mesmerising achievements and chilling consequences. Of innovation and new responsibilities. Of progress and potential peril.

From Web 1.0 to generative AI: the evolution of the internet and its impact on human rights

We’ve come a long way since Web 1.0, where users could only read and publish information; a black-and-white world where freedom of expression, simply understood, quickly became a primary concern.

Web 2.0 brought new interactive and social possibilities. We began to use the internet to work, socialise, shop, create and play—even find partners. Our experience became more personal as our digital footprint became ever larger, underscoring the need for users’ privacy and security to be baked into the development of digital systems.

The wave of technological change we’re witnessing today promises to be even more transformative. Web 3.0 describes a nascent stage of the internet that is decentralised and allows users to directly exchange content they own or have created. The trend towards decentralisation [1] and the development of technologies such as virtual reality, augmented reality and generative AI are bringing us to an entirely new frontier.

All of this is driving a deeper awareness of technology’s impact on human rights.

Rights to freedom of expression and privacy are still major concerns, but so are the principles of dignity, equality and mutual respect, the right to non-discrimination, and rights to protection from exploitation, violence and abuse.

For me, as Australia’s eSafety Commissioner, one vital consideration stands out: the right we all have to be safe.

Concerns we can’t ignore

We have seen extraordinary developments in generative AI over the past 12 months that underline the challenges we face in protecting these rights and principles.

Deepfake videos and audio recordings, for example, depict people in situations they never engaged in or experienced. Such technical manipulation may go viral before the authenticity—or falsehood—can be proven. This can have serious repercussions not only for an individual’s reputation or public standing, but also for their fundamental wellbeing and identity.

Experts have long been concerned about the role of generative AI in amplifying and entrenching biases in training data. These models may then perpetuate stereotypes and discrimination, fuelling an untrammelled cycle of inequality at an unprecedented pace and scale.

And generative AI poses significant risks in creating synthetic child sexual abuse material. The harm is undeniable: all such content normalises child sexualisation, and AI-generated versions of it hamper law enforcement.

eSafety’s hotline and law-enforcement queues are starting to fill with synthetic child sexual abuse material, presenting massive new resourcing and triaging challenges.

We are also very concerned about the potential of manipulative chatbots to further weaponise the grooming and exploitation of vulnerable young Australians.

These are not abstract concerns; incidents of abuse are already being reported to us.

The why: stating the case for safety by design in generative AI

AI and virtual reality are creating new actual realities that we must confront as we navigate the complexities of generative AI.

Doing so effectively requires a multi-faceted approach that involves technology, policy and education.

By making safety a fundamental element in the design of generative AI systems, we put individuals’ wellbeing first and reduce both users’ and developers’ exposure to risk.

As has often been articulated in recent months, and seems to be well understood, safety needs to be a pre-eminent concern, not retrofitted as an afterthought after systems have been released into the wild.

Trust in the age of AI is a paramount consideration given the scale of potential harm to individuals and society, even to democracy and humanity itself.

The how: heeding the lessons of history by adopting safety by design for generative AI

How can we establish this trust?

Just as the medical profession has the Hippocratic Oath, we need a well-integrated credo embedded in model and systems design that aligns directly with the imperative ‘first, do no harm’.

To achieve this, identifying potential harms and misuse scenarios is crucial. We need to consider the far-reaching effects of AI-generated content—not just for today but for tomorrow.

Efforts around content authenticity and provenance standards, including watermarking and more rapid deepfake-detection tools, should be an immediate priority.

As first steps, users also need to know when they are interacting with AI-generated content, and the decision-making processes behind AI systems must be more transparent.

User education on recognising and reporting harmful content must be another cornerstone of our approach. Users need to have control over AI-generated content that impacts them. This includes options to filter or block such content and personalise AI interactions.

Mindful development of content moderation, reporting mechanisms and safety controls can ensure harmful synthetic content and mistruths don’t go viral at the speed of sound without meaningful recourse.

But safety commitments must extend beyond design, development and deployment. We need continuous vigilance and auditing of AI systems in real-world applications for swift detection and resolution of issues.

Human oversight by reviewers adds an extra layer of protection. It is also vital to empower the developers responsible for AI through usage training and through company and industry performance metrics.

The commitment to safety must start with company leadership and be inculcated into every layer of the tech organisation, including incentives that reward engineers and product designers for successful safety interventions.

Silicon Valley legend John Doerr’s 2018 book, Measure What Matters, provided tech organisations with a blueprint for developing objectives and key results instead of traditional key performance indicators.

What matters now, with the tsunami of generative AI, is that industry not only gets on with the business of measuring its safety success at the company, product and service level, but also sets tangible safety outcomes and measurements for the broader AI industry.

Indeed, the tech industry needs to measure what matters in the context of AI safety.

Regulators should be resourced to stress-test AI systems to uncover flaws, weaknesses, gaps and edge cases. We are keen to build regulatory sandboxes.

We need rigorous assessments before and after deployment to evaluate the societal, ethical and safety implications of AI systems.

Clear crisis-response plans are also necessary to address AI-related incidents promptly and effectively.

The role of global regulators in an online world of constant change

How can we make sure the tech industry plays along?

Recent pronouncements, principles and policies from industry are welcome. They can help lay the groundwork for safer AI development.

For example, TikTok, Snapchat and Stability AI, along with 24 other organisations—including the US, German and Australian governments—have pledged to combat child sexual abuse images generated by AI. This commitment was announced by Britain ahead of a global AI safety summit later this week. The pledge focuses on responsible AI use and on collaborative efforts to address the risks AI poses in relation to child sexual abuse.

Such commitments won’t amount to much without measurement and external validation, which is partly why we’re seeing a race by governments globally to implement the first AI regulatory scheme.

US President Joe Biden’s new executive order on AI safety and security [2], for example, requires AI model developers to share safety test results. The order addresses national security, data privacy and public safety concerns related to AI technology. The White House views it as a significant step in AI safety.

With other governments pursuing their own reforms, there’s a danger of plunging headlong into a fragmented splinternet of rules.

What’s needed instead is a harmonised legislative approach that recognises sovereign nations and regional blocs will take slightly different paths. A singular global agency to oversee AI would likely be too cumbersome and slow to deal with the issues we need to rectify now.

Global regulators—whether focused on safety, privacy, competition or consumer rights—can work towards best-practice AI regulation in their domains, building from current frameworks. And we can work across borders and boundaries to achieve important gains.

Last November, eSafety launched the Global Online Safety Regulators Network with the UK, Ireland and Fiji. We’ve since increased our membership to include South Korea and South Africa, and have a broader group of observers to the network.

As online safety regulation pioneers, we strive to promote a unified global approach to online safety regulation, building on shared insights, experiences and best practices.

In September 2023, the network had its inaugural in-person meeting in London, and issued its first position statement [3] on the important intersection of human rights in online safety regulation.

Our guiding principles are rooted in the broad sweep of human rights, including protecting children’s interests, upholding dignity and equality, supporting freedom of expression, ensuring privacy and preventing discrimination.

It is crucial that safeguards coexist with freedoms, and we strongly believe that alleviating online harms can further bolster human rights online.

In the intricate relationship between human rights and online safety, no single right transcends another. The future lies in a proportionate approach to regulation that respects all human rights.

This involves governments, regulators, businesses and service providers cooperating to prevent online harm and enhance user safety and autonomy, while allowing freedom of expression to thrive.




URLs in this post:

[1] decentralisation: /removing-the-risks-from-a-decentralised-internet/

[2] executive order on AI safety and security: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

[3] position statement: https://www.esafety.gov.au/sites/default/files/2023-09/Position-statement-Human-rights-and-online-safety-regulation.pdf
