{"id":83248,"date":"2023-11-01T12:30:56","date_gmt":"2023-11-01T01:30:56","guid":{"rendered":"https:\/\/www.aspistrategist.ru\/?p=83248"},"modified":"2023-11-02T10:02:08","modified_gmt":"2023-11-01T23:02:08","slug":"safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world","status":"publish","type":"post","link":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/","title":{"rendered":"Safety by design: protecting users, building trust and balancing rights in a generative AI world"},"content":{"rendered":"
<\/figure>\n

In the grand tapestry of technological evolution, generative artificial intelligence has emerged as a vibrant and transformative thread. Its extraordinary benefits are undeniable; already it is revolutionising industries and reshaping the way we live, work and interact.<\/p>\n

Yet as we stand on the cusp of this new era\u2014one that will reshape the human experience\u2014we must also confront the very real possibility that it could all unravel.<\/p>\n

The duality of generative AI demands that we navigate its complexities with care, understanding and foresight. It is a world where machines\u2014equipped with the power to understand and create\u2014generate content, art and ideas with astonishing proficiency. And realism.<\/p>\n

The story of generative AI thus far is one of mesmerising achievements and chilling consequences. Of innovation and new responsibilities. Of progress and potential peril.<\/p>\n

From Web 1.0 to generative AI: the evolution of the internet and its impact on human rights<\/strong><\/p>\n

We\u2019ve come a long way since Web 1.0, where users could only read and publish information; a black-and-white world where freedom of expression, simply understood, quickly became a primary concern.<\/p>\n

Web 2.0 brought new interactive and social possibilities. We began to use the internet to work, socialise, shop, create and play\u2014even find partners. Our experience became more personal as our digital footprint became ever larger, underscoring the need for users\u2019 privacy and security to be baked in to the development of digital systems.<\/p>\n

The wave of technological change we\u2019re witnessing today promises to be even more transformative. Web 3.0 describes a nascent stage of the internet that is decentralised and allows users to exchange directly content they own or have created. The trend towards decentralisation<\/a> and the development of technologies such as virtual reality, augmented reality and generative AI are bringing us to an entirely new frontier.<\/p>\n

All of this is driving a deeper awareness of technology\u2019s impact on human rights.<\/p>\n

Rights to freedom of expression and privacy are still major concerns, but so are the principles of dignity, equality and mutual respect, the right to non-discrimination, and rights to protection from exploitation, violence and abuse.<\/p>\n

For me, as Australia\u2019s eSafety Commissioner, one vital consideration stands out: the right we all have to be safe.<\/p>\n

Concerns we can\u2019t ignore<\/strong><\/p>\n

We have seen extraordinary developments in generative AI over the past 12 months that underline the challenges we face in protecting these rights and principles.<\/p>\n

Deepfake videos and audio recordings, for example, depict people in situations they never engaged in or experienced. Such technical manipulation may go viral before the authenticity\u2014or falsehood\u2014can be proven. This can have serious repercussions for not only an individual\u2019s reputation or public standing, but also their fundamental wellbeing and identity.<\/p>\n

Experts have long been concerned about the role of generative AI in amplifying and entrenching biases in training data. These models may then perpetuate stereotypes and discrimination, fuelling an untrammelled cycle of inequality at an unprecedented pace and scale.<\/p>\n

And generative AI poses significant risks in creating synthetic child sexual abuse material. This harm is undeniable; all content that normalises child sexualisation and AI-generated versions of it hamper law enforcement.<\/p>\n

eSafety\u2019s hotline and law-enforcement queues are starting to fill with synthetic child sexual abuse material, presenting massive new resourcing and triaging challenges.<\/p>\n

We are also very concerned about the potential of manipulative chatbots to further weaponise the grooming and exploitation of vulnerable young Australians.<\/p>\n

These are not abstract concerns; incidents of abuse are already being reported to us.<\/p>\n

The why<\/em>: stating the case for safety by design in generative AI<\/strong><\/p>\n

AI and virtual reality are creating new actual<\/em> realities that we must confront as we navigate the complexities of generative AI.<\/p>\n

Doing so effectively requires a multi-faceted approach that involves technology, policy and education.<\/p>\n

By making safety a fundamental element in the design of generative AI systems, we put individuals\u2019 wellbeing first and reduce both users\u2019 and developers\u2019 exposure to risk.<\/p>\n

As has been articulated often over the past several months and seems to be well understood, safety needs to be a pre-eminent concern, not retrofitted as an afterthought, or after systems have been \u2018extricated out into the wild\u2019.<\/p>\n

Trust in the age of AI is a paramount consideration given the scale of potential harm to individuals and society, even to democracy and humanity itself.<\/p>\n

The how<\/em>: heeding the lessons of history by adopting safety by design for generative AI<\/strong><\/p>\n

How can we establish this trust?<\/p>\n

Just as the medical profession has the Hippocratic Oath, we need to see a well-integrated credo in model and systems design that is in direct alignment with the imperative, \u2018first, do no harm\u2019.<\/p>\n

To achieve this, identifying potential harms and misuse scenarios is crucial. We need to consider the far-reaching effects of AI-generated content\u2014not just for today but for tomorrow.<\/p>\n

Some of the efforts around content authenticity and provenance standards through watermarking and more rapid deepfake-detection tools should be an immediate goal.<\/p>\n

As first steps, users also need to know when they are interacting with AI-generated content, and the decision-making processes behind AI systems must be more transparent.<\/p>\n

User education on recognising and reporting harmful content must be another cornerstone of our approach. Users need to have control over AI-generated content that impacts them. This includes options to filter or block such content and personalise AI interactions.<\/p>\n

Mindful development of content moderation, reporting mechanisms and safety controls can ensure harmful synthetic content and mistruths don\u2019t go viral at the speed of sound without meaningful recourse.<\/p>\n

But safety commitments must extend beyond design, development and deployment. We need continuous vigilance and auditing of AI systems in real-world applications for swift detection and resolution of issues.<\/p>\n

Human oversight by reviewers adds an extra layer of protection. And empowering developers responsible for AI with usage training and through company and industry performance metrics is also vital.<\/p>\n

The commitment to safety must start with company leadership and be inculcated into every layer of the tech organisation, including incentives for engineers and product designers for successful safety interventions.<\/p>\n

The whole premise of Silicon Valley legend John Doerr\u2019s 2018 book, Measure what matters<\/em>, provided tech organisations with a blueprint for developing objectives and key results instead of the traditional key performance indicators.<\/p>\n

What matters now with the tsunami of generative AI is that industry not only gets onto the business of measuring its safety success at the company, product and service level, but also sets tangible safety outcomes and measurements for the broader AI industry.<\/p>\n

Indeed, the tech industry needs to measure what matters in the context of AI safety.<\/p>\n

Regulators should be resourced to stress-test AI systems to uncover flaws, weaknesses, gaps and edge cases. We are keen to build regulatory sandboxes.<\/p>\n

We need rigorous assessments before and after deployment to evaluate the societal, ethical and safety implications of AI systems.<\/p>\n

Clear crisis-response plans are also necessary to address AI-related incidents promptly and effectively.<\/p>\n

The role of global regulators in an online world of constant change<\/strong><\/p>\n

How can we make sure the tech industry plays along?<\/p>\n

Recent pronouncements, principles and policies from industry are welcome. They can help lay the groundwork for safer AI development.<\/p>\n

For example, TikTok, Snapchat and Stability AI, along with 24 other organisations\u2014including the US, German and Australian governments\u2014have pledged to combat child sexual abuse images generated by AI. This commitment was announced by Britain ahead of a global AI safety summit later this week. The pledge focuses on responsible AI use and collaborative efforts to prevent AI-related risks in addressing child sexual abuse.<\/p>\n

Such commitments won\u2019t amount to much without measurement and external validation, which is partly why we\u2019re seeing a race by governments globally to implement the first AI regulatory scheme.<\/p>\n

US President Joe Biden\u2019s new executive order on AI safety and security<\/a>, for example, mandates AI model developers to share safety test results. This order addresses national security, data privacy and public safety concerns related to AI technology. The White House views it as a significant step in AI safety.<\/p>\n

With other governments pursing their own reforms, there\u2019s a danger of plunging headlong into a fragmented splinternet of rules.<\/p>\n

What\u2019s needed instead is a harmonised legislative approach which recognises sovereign nations and regional blocs will take slightly different paths. A singular global agency to oversee AI would likely be too cumbersome and slow to deal with the issues we need to rectify now.<\/p>\n

Global regulators\u2014whether focused on safety, privacy, competition or consumer rights\u2014can work towards best-practice AI regulation in their domains, building from current frameworks. And we can work across borders and boundaries to achieve important gains.<\/p>\n

Last November, eSafety launched the Global Online Safety Regulators Network with the UK, Ireland and Fiji. We\u2019ve since increased our membership to include South Korea and South Africa, and have a broader group of observers to the network.<\/p>\n

As online safety regulation pioneers, we strive to promote a unified global approach to online safety regulation, building on shared insights, experiences and best practices.<\/p>\n

In September 2023, the network had its inaugural in-person meeting in London, and issued its first position statement<\/a> on the important intersection of human rights in online safety regulation.<\/p>\n

Our guiding principles are rooted in the broad sweep of human rights, including protecting children\u2019s interests, upholding dignity and equality, supporting freedom of expression, ensuring privacy and preventing discrimination.<\/p>\n

It is crucial that safeguards coexist with freedoms, and we strongly believe that alleviating online harms can further bolster human rights online.<\/p>\n

In the intricate relationship between human rights and online safety, no single right transcends another. The future lies in a proportionate approach to regulation that respects all human rights.<\/p>\n

This involves governments, regulators, businesses and service providers cooperating to prevent online harm and enhance user safety and autonomy, while allowing freedom of expression to thrive.<\/p>\n","protected":false},"excerpt":{"rendered":"

In the grand tapestry of technological evolution, generative artificial intelligence has emerged as a vibrant and transformative thread. Its extraordinary benefits are undeniable; already it is revolutionising industries and reshaping the way we live, work …<\/p>\n","protected":false},"author":1360,"featured_media":83250,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_mi_skip_tracking":false,"footnotes":""},"categories":[1],"tags":[1291,3230,249,724,332],"class_list":["post-83248","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-general","tag-artificial-intelligence","tag-child-exploitation","tag-human-rights","tag-regulatory-frameworks","tag-technology","dinkus-of-minds-and-machines-an-ai-series"],"acf":[],"yoast_head":"\nSafety by design: protecting users, building trust and balancing rights in a generative AI world | The Strategist<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Safety by design: protecting users, building trust and balancing rights in a generative AI world | The Strategist\" \/>\n<meta property=\"og:description\" content=\"In the grand tapestry of technological evolution, generative artificial intelligence has emerged as a vibrant and transformative thread. Its extraordinary benefits are undeniable; already it is revolutionising industries and reshaping the way we live, work ...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/\" \/>\n<meta property=\"og:site_name\" content=\"The Strategist\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/ASPI.org\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-01T01:30:56+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-11-01T23:02:08+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2023\/11\/GettyImages-1008725092-e1698798319299.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"707\" \/>\n\t<meta property=\"og:image:height\" content=\"418\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Julie Inman Grant\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@ASPI_org\" \/>\n<meta name=\"twitter:site\" content=\"@ASPI_org\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Julie Inman Grant\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.aspistrategist.ru\/#website\",\"url\":\"https:\/\/www.aspistrategist.ru\/\",\"name\":\"The Strategist\",\"description\":\"ASPI's analysis and commentary site\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.aspistrategist.ru\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-AU\"},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-AU\",\"@id\":\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/#primaryimage\",\"url\":\"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2023\/11\/GettyImages-1008725092-e1698798319299.jpg\",\"contentUrl\":\"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2023\/11\/GettyImages-1008725092-e1698798319299.jpg\",\"width\":707,\"height\":418,\"caption\":\"Deep Learning and Machine Artificial Intelligence Concept\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/\",\"url\":\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/\",\"name\":\"Safety by design: protecting users, building trust and balancing rights in a generative AI world | The Strategist\",\"isPartOf\":{\"@id\":\"https:\/\/www.aspistrategist.ru\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/#primaryimage\"},\"datePublished\":\"2023-11-01T01:30:56+00:00\",\"dateModified\":\"2023-11-01T23:02:08+00:00\",\"author\":{\"@id\":\"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/9490168536278e62c2d903e4475d4527\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/#breadcrumb\"},\"inLanguage\":\"en-AU\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.aspistrategist.ru\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Safety by design: protecting users, building trust and balancing rights in a generative AI world\"}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/9490168536278e62c2d903e4475d4527\",\"name\":\"Julie Inman Grant\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-AU\",\"@id\":\"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/558642a3fdaef2da1fbe92cbe46725c3?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/558642a3fdaef2da1fbe92cbe46725c3?s=96&d=mm&r=g\",\"caption\":\"Julie Inman Grant\"},\"url\":\"https:\/\/www.aspistrategist.ru\/author\/julie-inman-grant\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Safety by design: protecting users, building trust and balancing rights in a generative AI world | The Strategist","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/","og_locale":"en_US","og_type":"article","og_title":"Safety by design: protecting users, building trust and balancing rights in a generative AI world | The Strategist","og_description":"In the grand tapestry of technological evolution, generative artificial intelligence has emerged as a vibrant and transformative thread. Its extraordinary benefits are undeniable; already it is revolutionising industries and reshaping the way we live, work ...","og_url":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/","og_site_name":"The Strategist","article_publisher":"https:\/\/www.facebook.com\/ASPI.org","article_published_time":"2023-11-01T01:30:56+00:00","article_modified_time":"2023-11-01T23:02:08+00:00","og_image":[{"width":707,"height":418,"url":"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2023\/11\/GettyImages-1008725092-e1698798319299.jpg","type":"image\/jpeg"}],"author":"Julie Inman Grant","twitter_card":"summary_large_image","twitter_creator":"@ASPI_org","twitter_site":"@ASPI_org","twitter_misc":{"Written by":"Julie Inman Grant","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebSite","@id":"https:\/\/www.aspistrategist.ru\/#website","url":"https:\/\/www.aspistrategist.ru\/","name":"The Strategist","description":"ASPI's analysis and commentary site","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.aspistrategist.ru\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-AU"},{"@type":"ImageObject","inLanguage":"en-AU","@id":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/#primaryimage","url":"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2023\/11\/GettyImages-1008725092-e1698798319299.jpg","contentUrl":"https:\/\/www.aspistrategist.ru\/wp-content\/uploads\/2023\/11\/GettyImages-1008725092-e1698798319299.jpg","width":707,"height":418,"caption":"Deep Learning and Machine Artificial Intelligence Concept"},{"@type":"WebPage","@id":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/","url":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/","name":"Safety by design: protecting users, building trust and balancing rights in a generative AI world | The Strategist","isPartOf":{"@id":"https:\/\/www.aspistrategist.ru\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/#primaryimage"},"datePublished":"2023-11-01T01:30:56+00:00","dateModified":"2023-11-01T23:02:08+00:00","author":{"@id":"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/9490168536278e62c2d903e4475d4527"},"breadcrumb":{"@id":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/#breadcrumb"},"inLanguage":"en-AU","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.aspistrategist.ru\/safety-by-design-protecting-users-building-trust-and-balancing-rights-in-a-generative-ai-world\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.aspistrategist.ru\/"},{"@type":"ListItem","position":2,"name":"Safety by design: protecting users, building trust and balancing rights in a generative AI world"}]},{"@type":"Person","@id":"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/9490168536278e62c2d903e4475d4527","name":"Julie Inman Grant","image":{"@type":"ImageObject","inLanguage":"en-AU","@id":"https:\/\/www.aspistrategist.ru\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/558642a3fdaef2da1fbe92cbe46725c3?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/558642a3fdaef2da1fbe92cbe46725c3?s=96&d=mm&r=g","caption":"Julie Inman Grant"},"url":"https:\/\/www.aspistrategist.ru\/author\/julie-inman-grant\/"}]}},"_links":{"self":[{"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/posts\/83248"}],"collection":[{"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/users\/1360"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/comments?post=83248"}],"version-history":[{"count":6,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/posts\/83248\/revisions"}],"predecessor-version":[{"id":83274,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/posts\/83248\/revisions\/83274"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/media\/83250"}],"wp:attachment":[{"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/media?parent=83248"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/categories?post=83248"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aspistrategist.ru\/wp-json\/wp\/v2\/tags?post=83248"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}