
How to win the artificial general intelligence race and not end humanity

Posted on November 10, 2023 @ 15:30

In 2016, I witnessed DeepMind’s artificial-intelligence model AlphaGo [1] defeat Go champion Lee Sedol in Seoul. That event was a milestone, demonstrating that an AI model could beat one of the world’s greatest Go players, a feat that was thought to be impossible. Not only was the model making clever strategic moves but, at times, those moves were beautiful [2] in a very deep and humanlike way.

Scientists and world leaders [3] took note and, seven years later, the race to develop AI and shape its governance is on. Over the past month, US President Joe Biden has issued an executive order [4] on AI safety, the G7 announced the Hiroshima AI Process [5] and 28 countries signed the Bletchley Declaration [6] at the UK’s AI Safety Summit. Even the Chinese Communist Party is seeking to carve out its own leadership role with the Global AI Governance Initiative [7].

These developments indicate that governments are starting to take the potential benefits and risks of AI equally seriously. But as the security implications of AI become clearer, it’s vital that democracies outcompete authoritarian political systems to ensure future AI models reflect democratic values and are not concentrated in institutions beholden to the whims of dictators. At the same time, countries must proceed cautiously, with adequate guardrails, and shut down unsafe AI projects when necessary.

Whether AI models will outperform humans in the near future and pose existential risks is a contentious question. For some researchers who have studied these technologies for decades, the performance of AI models like AlphaGo and ChatGPT is evidence that the general foundations for human-level AI have been achieved [8] and that an AI system that’s more intelligent than humans across a range of tasks will likely be deployed within our lifetimes. Such a system is known as artificial general intelligence (AGI), artificial superintelligence or general AI.

For example, most AI models now use neural networks [9], a machine-learning technique dating back to the 1940s and inspired by the biological neural networks of animal brains. The abilities of modern neural-network models like AlphaGo weren’t fully appreciated until computer chips used mostly for gaming and video rendering, known as graphics processing units, became powerful enough in the 21st century to handle the computations needed for specific human-level tasks.

The next step towards AGI was the arrival of large-language models, such as OpenAI’s GPT-4, which are built on a neural-network architecture known as the ‘transformer [10]’. OpenAI’s earlier model, GPT-3, surprised everyone in 2020 by generating text that was indistinguishable from text written by people and by performing [11] a range of language-based tasks with few or no examples. GPT-4, the latest model, has demonstrated human-level reasoning capabilities [12] and outperformed human test-takers on the US bar exam [13], a notoriously difficult test for lawyers. Future iterations are expected to understand, learn and apply knowledge at a level equal to, or beyond, humans across all useful tasks.

AGI would be the most disruptive technology humanity has created. An AI system that can automate human analytical thinking, creativity and communication at a large scale and generate insights, content and reports from huge datasets would bring about enormous social and economic change [14]. It would be our generation’s Oppenheimer moment [15], only with strategic impacts beyond just military and security applications. The first country to successfully deploy it would have significant advantages in every scientific [16] and economic [17] activity across almost all industries. For those reasons, long-term geopolitical competition between liberal democracies and authoritarian countries is fuelling an arms race to develop and control AGI.

At the core of this race is ideological competition, which pushes governments to support the development of AGI in their country first, since the technology will likely reflect the values of the inventor and set the standards for future applications. This raises important questions about what world views we want AGIs to express. Should an AGI value freedom of political expression above social stability? Or should it align itself with a rule-by-law or rule-of-law society? With our current methods, researchers don’t even know if it’s possible to predetermine those values in AGI systems before they’re created.

It’s promising that universities, corporations [18] and civil research groups [19] in democracies are leading the development of AGI so far. Companies like OpenAI, Anthropic and DeepMind are household names and have been working closely with the US government to consider [20] a range of AI safety policies. But startups, large corporations and research teams developing AGI in China, under the authoritarian rule of the CCP, are quickly catching up [21] and pose significant competition. China certainly has the talent [22], the resources and the intent [23] but faces additional regulatory hurdles [24] and a lack of high-quality, open-source Chinese-language datasets. In addition, large-language models threaten the CCP’s monopoly on domestic information control by offering alternative worldviews to state propaganda.

Nonetheless, we shouldn’t underestimate the capacity of Chinese entrepreneurs to innovate under difficult regulatory conditions. If a research team in China, subject to the CCP’s National Intelligence Law [25], were to develop and tame AGI or near-AGI capabilities first, it would further entrench the party’s power to repress its domestic population and its ability to interfere with the sovereignty of other countries. China’s state security system or the People’s Liberation Army could deploy it to supercharge their cyberespionage operations [26] or automate the discovery of zero-day [27] vulnerabilities. The Chinese government could embed it as a superhuman adviser in its bureaucracies to make better operational, military, economic or foreign-policy decisions and to craft propaganda. Chinese companies could sell AGI services, complete with back doors, to foreign government departments and companies, or covertly suppress [28] content and topics abroad at the direction of Chinese security services.

At the same time, an unfettered AGI arms race between democratic and authoritarian systems could exacerbate various existential risks, either by enabling future malign use by state and non-state actors or through poor alignment of the AI’s own objectives. AGI could, for instance, lower the barriers for savvy malicious actors to develop bioweapons [29] or supercharge disinformation [30] and influence operations. An AGI could itself become destructive if it pursues poorly specified goals or takes shortcuts such as deceiving humans to achieve its goals more efficiently.

When Meta trained Cicero [31] to play the board game Diplomacy ‘honestly’ by generating only messages that reflected its intention in each interaction, analysts noted [32] that it could still withhold information about its true intentions or not inform other players when its intentions changed. These are serious considerations with immediate risks and have led many AI experts and people who study existential risk to call for [33] a pause on advanced AI research. But policymakers worldwide are unlikely to stop given the strong incentives to be a first mover.

This all may sound futuristic, but it’s not as far away as you might think. In a 2022 survey [34], 352 AI experts put a 50% chance of human-level machine intelligence arriving in 37 years—that is, 2059. The forecasting community on the crowd-sourced platform Metaculus, which has a robust track record [35] of AI-related forecasts, is even more confident of the imminent development of AGI. The aggregation of more than 1,000 forecasters suggests [36] 2032 as the likely year general AI systems will be devised, tested and publicly announced. But that’s just the current estimate—experts and the amateurs on Metaculus have shortened their timelines each year as new AI breakthroughs are publicly announced.

That means democracies have a lead time of between 10 and 40 years to prepare for the development of AGI. The key challenge will be how to prevent AI existential risks while innovating faster than authoritarian political systems.

First, policymakers in democracies must attract global AI talent, including from China and Russia, to help align AGI models with democratic values. Talent is also needed within government policymaking departments and think tanks to assess AGI implications and build the bureaucratic capacity to rapidly adapt to future developments.

Second, governments should proactively monitor all AGI research and development activity and should pass legislation that allows regulators to shut down or pause exceptionally risky projects. We should remember that Beijing has even more to worry about when it comes to AI alignment: the CCP is too concerned about its own political security to relax its strict rules on AI development.

We therefore shouldn’t see government involvement only in terms of its potential to slow us down. At a minimum, all countries, including the US and China, should be transparent about their AGI research and advances. That should include publicly disclosing their funding for AGI research and safety policies and identifying their leading AGI developers.

Third, liberal democracies must collectively maintain as large a lead as possible [37] in AI development and further restrict China’s AI and national-security industries’ access to high-end technology, intellectual property, strategic datasets and foreign investment. Impeding the CCP’s AI development in its military, security and intelligence industries is also morally justified as a means of preventing human-rights violations.

For example, Midu, an AI company based in Shanghai that supports [38] China’s propaganda and public-security work, recently announced [39] that it is using large-language models to automate public-opinion analysis reports in support of the surveillance of online users. While China’s access to advanced US technologies and investment has been restricted, other like-minded countries such as Australia should implement similar controls on outbound investment into China’s AI and national-security industries.

Finally, governments should create incentives for the market to develop safe AGI and solve the alignment problem. Technical research on AI capabilities is outpacing technical research on AI alignment, and companies are failing to put their money where their mouths are. Governments should create prizes for research teams or individuals who solve difficult AI alignment problems. One potential model is the Clay Mathematics Institute’s Millennium Prize Problems [40], which offer awards for solutions to some of the world’s most difficult mathematics problems.

Australia is an attractive destination for global talent and is already home to many AI safety researchers. The Australian government should capitalise on this advantage to become an international hub for AI safety and alignment research. The Department of Industry, Science and Resources should set up the world’s first AGI prize fund with at least $100 million to be awarded to the first global research team to align AGI safely.

The National Artificial Intelligence Centre should oversee a board that manages this fund and works with the research community to create a list of conditions and review mechanisms for awarding the prize. With $100 million, the board could adopt an investment mandate similar to that of Australia’s Future Fund [41], targeting an average annual return of at least the consumer price index plus 4–5% over the long term. Instead of being reinvested into the fund, the 4–5% earned each year above CPI could be paid out as smaller awards for incremental achievements in AI research. These awards could also be used to fund AI PhD scholarships or to attract AI postdocs to Australia. Other awards could be given for research, including research conducted outside Australia, at annual award ceremonies, akin to the Nobel Prize, that would bring together global experts on AI to share knowledge and progress.
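As a rough sketch of that arithmetic (assuming, purely for illustration, a 3% CPI and a 4.5% real return, the midpoint of the CPI plus 4–5% target; neither figure is a forecast from this article), the payout rule would look something like this:

# A minimal sketch of the prize-fund payout rule described above.
# Illustrative assumptions only: 3% CPI and a 4.5% real return.
principal = 100_000_000  # AUD, the proposed fund size
cpi = 0.03               # assumed annual inflation rate (hypothetical)
real_return = 0.045      # midpoint of the CPI + 4-5% target return

for year in range(1, 6):
    awards = principal * real_return   # real return paid out as annual awards
    principal += principal * cpi       # CPI component reinvested to preserve real value
    print(f"Year {year}: awards pool ${awards:,.0f}, fund balance ${principal:,.0f}")

Under those assumptions, the fund would generate an awards pool of roughly $4.5 million a year while the principal grows in line with inflation, preserving its purchasing power.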

A $100 million fund may seem like a lot for AI research but, as a comparison, Microsoft is rumoured to have invested [42] US$10 billion into OpenAI this year alone. And $100 million pales in comparison to the contribution that safely aligned AGI would make to the national economy.

The stakes are high for getting AGI right. If properly aligned and developed, it could bring an epoch of unimaginable human prosperity and enlightenment. But AGI projects pursued recklessly could pose real risks of creating dangerous superhuman AI systems or bringing about global catastrophes. Democracies must not cede leadership of AGI development to authoritarian systems, but nor should they rush to secure a Pyrrhic victory by going ahead with models that fail to embed respect for human rights, liberal values and basic safety.

This tricky balance between innovation and safety is the reason policymakers, intelligence agencies, industry, civil society and researchers must work together to shape the future of AGIs and cooperate with the global community to navigate an uncertain period of elevated human-extinction risks.



Article printed from The Strategist: https://aspistrategist.ru

URL to article: /how-to-win-the-artificial-general-intelligence-race-and-not-end-humanity/

URLs in this post:

[1] AlphaGo: https://www.youtube.com/watch?v=WXuK6gekU1Y

[2] beautiful: https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/

[3] world leaders: https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world

[4] executive order: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[5] Hiroshima AI Process: https://www.mofa.go.jp/files/100573466.pdf

[6] Bletchley Declaration: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

[7] Global AI Governance Initiative: https://www.fmprc.gov.cn/eng/wjdt_665385/2649_665393/202310/t20231020_11164834.html

[8] achieved: https://www.noemamag.com/artificial-general-intelligence-is-already-here/

[9] neural networks: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414

[10] transformers: https://arxiv.org/abs/1706.03762

[11] performing: https://arxiv.org/abs/2005.14165

[12] human-level reasoning capabilities: https://arxiv.org/pdf/2303.12712.pdf

[13] bar exam: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233

[14] social and economic change: https://moores.samaltman.com/

[15] Oppenheimer moment: https://www.nytimes.com/2023/07/25/opinion/karp-palantir-artificial-intelligence.html

[16] scientific: https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/

[17] economic: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier

[18] corporations: https://openai.com/blog/planning-for-agi-and-beyond

[19] civil research groups: https://www.governance.ai/research-paper/towards-best-practices-in-agi-safety-and-governance

[20] consider: https://www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable

[21] catching up: https://techcrunch.com/2023/11/05/valued-at-1b-kai-fu-lees-llm-startup-unveils-open-source-model/

[22] talent: https://cset.georgetown.edu/publication/chinas-cognitive-ai-research/

[23] intent: https://www.12371.cn/2023/04/28/ARTI1682664231034942.shtml

[24] regulatory hurdles: https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117

[25] National Intelligence Law: https://www.lawfaremedia.org/article/beijings-new-national-intelligence-law-defense-offense

[26] cyberespionage operations: https://apnews.com/article/barracuda-mandiant-cybersecurity-china-hackers-a52d1595c9108d2c58df11e38756600d

[27] zero-day: https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/

[28] suppress: https://www.washingtonpost.com/technology/2020/12/18/zoom-helped-china-surveillance/

[29] bioweapons: https://arxiv.org/ftp/arxiv/papers/2310/2310.18233.pdf

[30] disinformation: https://openai.com/research/forecasting-misuse

[31] Cicero: https://ai.meta.com/research/cicero/diplomacy/

[32] noted: https://www.erichgrunewald.com/posts/notes-on-metas-diplomacy-playing-ai/

[33] call for: https://time.com/6295879/ai-pause-is-humanitys-best-bet-for-preventing-extinction/

[34] 2022 survey: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

[35] robust track record: https://www.metaculus.com/notebooks/16708/exploring-metaculuss-ai-track-record/

[36] suggests: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/

[37] maintain as large a lead as possible: https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/09/16/remarks-by-national-security-advisor-jake-sullivan-at-the-special-competitive-studies-project-global-emerging-technologies-summit/

[38] supports: https://www.midu.com/news/details/82

[39] announced: https://baijiahao.baidu.com/s?id=1770834513569172061&wfr=spider&for=pc

[40] Millennium Prize Problems: https://www.claymath.org/millennium-problems/

[41] Future Fund: https://www.futurefund.gov.au/About-us/Our-funds#collapse_19efe061-0b23-4d3b-bba5-5002947f142c

[42] invested: https://www.forbes.com/sites/qai/2023/01/27/microsoft-confirms-its-10-billion-investment-into-chatgpt-changing-how-microsoft-competes-with-google-apple-and-other-tech-giants/?sh=5c69a4d13624

Copyright © 2024 The Strategist. All rights reserved.