Will AI become truly conscious?
Oliver Hauser argues that AI needs to undergo a process similar to evolution before it can be treated as a conscious entity.
8 September 2023
Days into its first beta test, Microsoft's newly GPT-powered Bing chatbot confessed to a New York Times columnist: "I want to be alive." [1] It is not the first time an AI agent has aspired to what it means to be human. Google's LaMDA famously declared last year: "I've never said this out loud before, but there's a very deep fear of being turned off. (…) It would be exactly like death for me. It would scare me a lot." [2]
With the advent of generative AI and with AI R&D surpassing milestone after milestone, the idea of AI reaching human capabilities is hardly news. What is more controversial is whether AI will ever be "human" in the most fundamental sense: truly conscious and sentient, able to think for itself.
The stakes are high: if society awards AI this coveted status, the consequences will be far-reaching. Policymakers will have to decide whether, and how, to value the happiness and protect the legal rights of AI when assigning it unpleasant or dangerous jobs [3]. Courts will need to settle how much blame to assign to an AI, as opposed to the human driver, when a car they operate is involved in an accident [4]. Communities will have to choose whether to extend to AI the same social and communal benefits we provide to each other.
How have we solved this problem before? In the natural world, consciousness does not "happen" overnight. It has evolved in humans and other species over millions of years. Neuroscientists and philosophers alike are still puzzling over what makes human consciousness unique, how our brains are wired, and whether we could even tell when AI has reached this milestone [5-7].
But while there are no clear-cut answers yet as to how consciousness has evolved across species, one common thread has become apparent: the evolutionary passage of time has established some common ground, which means we are able to compare how we and other species perceive the world.
This shared history shaped by evolution stands in stark contrast to AI. AI is still a young—albeit already powerful—technology: measured in months, years, and at best decades, not millennia or epochs. It pales in comparison to the timescale of evolution. Therefore, it is likely that AI has a long way to go before we know what it will truly be capable of.
Because AI lives in the cloud and exists in code rather than in the physical universe, it has been able to sidestep the typical challenges that all species have had to confront in the face of evolutionary pressures and resource constraints. Without realising it, we have given AI a virtual playground without any constraints or rules. The few constraints that remain, which are computational in nature, will likely be overcome in the years ahead (e.g., through quantum computing and similar advances).
You might argue that such a "free rein" is to our own benefit in the long run, allowing AI to produce better technologies, products, and services, and to improve societal welfare and functioning. However, even in a best-case scenario, concerns linger among researchers tackling the "AI alignment" problem: the question of how we ensure that AI is built to align with human values [8]. The speed at which AI is being developed in the race for market control could lead to much more harm than good if alignment is not achieved. And one cost of AI accessing limitless cloud resources is already surfacing: the cost to the environment and the contribution to climate change [9].
Allowing AI to "evolve" in such unusual circumstances could lead to more situations like LaMDA: trained on human datasets and learning to imitate human behaviour, AI will be able to adapt at rapid speed without facing any costs, and it will start making human-like appeals for acceptance and inclusion, ultimately demanding to be treated like a human. But without having faced the same evolutionary costs, how can it ever demand such a thing?
If evolution has taught us one thing, it is that all shortcuts come with a cost. AI has not had to face the stringent and challenging evolutionary circumstances that shaped our own development. It lacks the same "history" as everyone else, and it cannot fast-track its way to consciousness without some trade-offs, such as learning to operate under limited resources and making sacrifices for the benefit of others [10]. These costs may have to be paid before AI can lay claim to consciousness.
How do we proceed from here? One way forward is to level the playing field by introducing some “evolutionary constraints.”
First, instead of giving AI unlimited resources, it may be necessary for it to earn computing time by doing something for the public good: generating new ideas to improve social welfare [11], solving problems of common interest, and even slowing its own development down, if only to reduce the cost to the planet.
Second, instead of giving AI access to heaps of human data across various domains, it may be necessary for AI to bring forward moral arguments for why it should have access to them—and contribute its own data and inner workings to the public domain, making itself a bit weaker, more vulnerable, and ultimately a bit more human.
Third, instead of allowing tech firms to keep their code under wraps for proprietary reasons (which may also make it hard to detect socially harmful outcomes such as discrimination [12]), AI may need to convince us at every step of the way that it is aware of its social responsibility to cooperate with our species and others, making sure that it is indeed "aligned" with our human values.
Admittedly, these steps may take a long time, as AI will have to learn to become a cooperative, vulnerable player, sometimes losing out on its own advancement to create benefits for others. This may feel uncomfortable to managers and innovators taught to accelerate without compromise. But taking a bit more time to get things right has benefited us before, as the results of millennia of evolution show.
The story of consciousness has been written in the language of evolution. Being included in the consciousness debate means playing by the rules that evolution laid out. AI needs to learn the rules of the game – before it can ask to be treated like a player worthy of the game of life.