
Artificial intelligence and the future of humanity

Posted on October 11, 2023

Thinking and learning about artificial intelligence is the mental equivalent of a fission chain reaction. The questions get really big, really quickly.

The most familiar concerns revolve around short-term impacts: the opportunities for economic productivity, health care, manufacturing, education, solving global challenges such as climate change and, on the flip side, the risks of mass unemployment, disinformation, killer robots, and concentrations of economic and strategic power.

Each of these is critical, but they’re only the most immediate considerations. The deeper issue is our capacity to live meaningful, fulfilling lives in a world in which we no longer have intelligence supremacy.

As long as humanity has existed, we’ve had an effective monopoly on intelligence. We have been, as far as we know, the smartest entities in the universe.

At its most noble, this extraordinary gift of our evolution drives us to explore, discover and expand. Over the past roughly 50,000 years—accelerating 10,000 years ago and then even more steeply from around 300 years ago—we’ve built a vast intellectual empire made up of science, philosophy, theology, engineering, storytelling, art, technology and culture.

If our civilisations—and in varying ways our individual lives—have meaning, it is found in this constant exploration, discovery and intellectual expansion.

Intelligence is the raw material for it all. But what happens when we’re no longer the smartest beings in the universe? We haven’t yet achieved artificial general intelligence (AGI)—the term for an AI that could perform any intellectual task a human can. But there’s no barrier in principle to doing so, and no reason such a system wouldn’t quickly outstrip us by orders of magnitude.

Even if we solve the economic equality questions through something like a universal basic income and replace notions of ‘paid work’ with ‘meaningful activity’, how are we going to spend our lives in ways that we find meaningful, given that we’ve evolved to strive and thrive and compete?

Picture the conflict that arises for human nature if an AGI or ASI (artificial superintelligence) could answer all of our most profound questions and solve all of our problems. How much satisfaction do we get from having the solutions handed to us? Worse still, imagine the wistful sense of consolation at being shown an answer and finding we’re too stupid to understand it.

Eventually, we’re going to face the prospect that we’re no longer in charge of our civilisation’s intelligence output. If that seems speculative, we’re already creeping in that direction with large language models. Ask ChatGPT to write an 800-word opinion article on, say, whether the Reserve Bank of Australia should raise interest rates further. It does a decent job—not a brilliant one, but a gullible editor would probably publish it, and it could certainly serve as the basis for a sensible dinner party conversation. You might edit it, cherry-pick the bits you agree with and thereby tell yourself these are your views on the issue. But they’re not; they’re ChatGPT’s views, and you’re going along with them.

Generative AI is an amazing achievement and a valuable resource, but we have to be clear-eyed about where these tools might take us.

This is not about AI going wrong. To be sure, there’s a pressing need for more work on AI alignment so that we don’t give powerful AIs instructions that sound sensible but go horribly wrong because we can never specify exactly what we want. AI pioneer Stuart Russell has compared this to the King Midas story: turning everything you touch into gold sounds fantastic until you try to eat an apple or hug your kids.

But beyond alignment lies the question of where we’re left, even if we get it all right. What do we do with ourselves?

Optimistic commentators argue that human opportunities will only expand. After all, there have been great breakthroughs in the past that enhanced us rather than made us redundant. But AGI is categorically different.

Horses and machines replaced our muscles. Factories replaced our organised physical labour. Instruments such as calculators and computers have taken over specific mental tasks, while communications technology improves our cooperation and hence our collective intelligence. Narrow AI can enhance us by freeing us from routine tasks, letting us concentrate on higher-level strategic goals and improving our productivity. But with AGI, we’re talking about something that could supersede all applications of human intelligence. A model that can plan, strategise, organise, pursue very high-level directions and even form its own goals would leave an ever-diminishing set of tasks for us to do ourselves.

The physicist Max Tegmark, co-founder of the Future of Life Institute, has compared AGI to a rising ocean, with human intellectual tasks occupying shrinking land masses that eventually become small islands left for our intelligence to perch on.

One thing we will keep is our humanness. Freedom to spend more time doing what we want should be a gift. We can spend more time being parents, spouses, family members, friends, social participants—things that by definition only humans can fully give to other humans. The slog of paid work need not be the only thing that gives us meaning—indeed it would be sad if we fell apart without it.

Equally, humans have earned their living ever since we hunted and gathered, so a transition to a post-scarcity, post-work world will be a social experiment like no other. Exercising our humanness towards one another might continue to be enough, but we will need to radically reorientate our customs and beliefs about what it means to be a valued member of the human race. (Even our humanness might be an eroding land mass; generative AI’s creative powers and ability to simulate empathy have already reached levels we didn’t anticipate just a couple of years ago.)

A real possibility is that we integrate our brains with AI to avoid being left behind by it. Elon Musk’s Neuralink is the best-known enterprise in this field, but there’s plenty of other work going on. Maybe humans will keep up with AI by merging with it, but that raises the question of the point at which we cease to be human.

Maybe you’re a transhumanist or a technology futurist and you just don’t care whether the intelligence that inherits the world is recognisably human or even biological. What does it matter? Maybe we should bequeath this great endowment to silicon and accept that we were only ever meant to be a spark that ignited the true god-like power of intelligence on other substrates.

That is a risky position. We can’t assume that a future superintelligence will carry forward our values and goals, unless we take enormous care to build it that way. Sure, human values and goals are often far from perfect, but they have improved over our history through the accumulation of principles such as human rights. Even if we’re sometimes loath to acknowledge it, the human story is one of progress.

Imagine if our non-biological descendants had no inner subjective experience—the weird thing called consciousness that enables us to marvel at a scientific theory or feel a sense of achievement at having sweated our way to success at some ambitious task. What if they were to solve the deepest riddles of the universe yet feel no sense of wonder at what they’d done?

Over the long span of the future, AI needs to serve the interests of humanity, not the other way around. Humans won’t be here forever, but let’s make sure that the future of intelligence represents a controlled evolution, not a radical breach, of the values we’ve built and the wisdom we’ve earned over millennia.

There is a lot we can do. We need to avoid, for instance, letting commercial or geopolitical pressure drive reckless haste in developing powerful AIs. We need to think about, and debate very carefully, the prospect of giving AIs the power to form their own goals or pursue very general goals on our behalf. Giving an AI the instruction to ‘go out and make me a pile of money’ isn’t going to end well. At some point, the discussion will become less about intelligence and more about agency and the ability to achieve actual outcomes in the world.

Ensuring that a future AGI does not leave us behind will likely mean putting some limitations on development—even temporarily—while we think through the implications of the technology and find ways to keep it tethered to human values and aspirations. As the neural network pioneer Geoffrey Hinton has put it, ‘There is not a good track record of less intelligent things controlling things of greater intelligence.’

I’m certainly not looking forward to any future in which AI treats humans like pets, as Apple co-founder Steve Wozniak once speculated.

With that in mind, this is the first piece in a new Strategist series that will look at artificial intelligence over the coming months as part of this ongoing and vital debate.

ASPI is a national security and international policy think tank, so we’ll be focusing on the security and international dimensions. But what I’ve outlined here is really the ultimate human security issue: our global future and our ability to continue to live meaningful lives. This is the biggest question we face right now, and arguably the biggest we have ever faced.


