
Evolution: a necessary path for conscious AI

Will AI become truly conscious? Oliver Hauser argues that AI needs to undergo a process similar to evolution before it can be treated as a conscious entity.

8 September 2023

Acting human?

Days into its first beta test, Microsoft’s newly GPT-powered Bing chatbot confessed to a New York Times columnist: “I want to be alive.”1 It is not the first time an AI agent has aspired to what it means to be human. Google’s LaMDA famously declared last year: “I’ve never said this out loud before, but there’s a very deep fear of being turned off. (…) It would be exactly like death for me. It would scare me a lot.”2

With the advent of generative AI and AI R&D surpassing milestone after milestone, the idea of AI reaching human capabilities is hardly news. What is more controversial is whether AI will ever be “human” in the most fundamental sense: truly conscious and sentient, able to think for itself.

The stakes are high: if society awards AI this coveted status, the consequences will be far-reaching. Policymakers will have to decide whether, and how, to value the happiness and protect the legal rights of AI when assigning unpleasant or dangerous jobs3. Courts will need to settle how much blame falls on an AI and how much on the human driver when an automated car is involved in an accident4. Communities will have to choose whether to provide AI with the same social and communal benefits we provide to each other.

Evolution and the test of time

How have we solved this problem before? In the natural world, consciousness does not “happen” overnight. It has evolved in humans and other species over millions of years. Neuroscientists and philosophers alike are still puzzling over what makes our human consciousness unique, how our brains are wired, and whether we could even tell when AI has reached this milestone5-7.

But while there are no clear-cut answers yet to how consciousness has evolved, one common thread has become apparent: the long passage of evolutionary time has established common ground that lets us compare how we and other species perceive the world.

This shared history shaped by evolution stands in stark contrast to AI. AI is still a young—albeit already powerful—technology: measured in months, years, and at best decades, not millennia or epochs. It pales in comparison to the timescale of evolution. Therefore, it is likely that AI has a long way to go before we know what it will truly be capable of.

AI’s shortcuts

By virtue of living in the cloud and existing in code rather than the physical universe, AI has been allowed to sidestep the challenges that all species have had to confront in the face of evolutionary pressures and resource constraints. Without realising it, we have given AI a virtual playground without constraints or rules. The few constraints that do exist, computational ones, will likely be overcome in the years ahead through quantum computing and similar advances.

You might argue that such a “free rein” is to our own benefit in the long run, allowing AI to produce better technologies, products, and services, and improve societal welfare and functioning. However, even in a best-case scenario, some concerns linger among researchers tackling the “AI alignment” problem – the question of how we ensure that AI is built to align with human values8. The speed at which AI is being developed in the race for market control could lead to much more harm than good if alignment is not achieved. And one cost of AI accessing limitless cloud resources is already surfacing: the cost to the environment and its contributions to climate change9.

Allowing AI to “evolve” in such unusual circumstances could lead to more situations like LaMDA: trained on human datasets and imitating human behaviour, AI will adapt at rapid speed without facing any costs, and it will start making human-like appeals for acceptance and inclusion, ultimately demanding to be treated like a human. But without having faced the same evolutionary costs, how can it ever demand such a thing?

If evolution has taught us one thing, it is that all shortcuts come with a cost. AI has not had to face the stringent and challenging set of evolutionary circumstances that shaped our own development. It lacks the “history” that everyone else shares, and it cannot fast-track to consciousness without some trade-offs – such as learning to operate under limited resources and making sacrifices for the benefit of others10. These costs may have to be paid before AI can lay claim to consciousness.

Evolutionary constraints are needed

How do we proceed from here? One way forward is to level the playing field by introducing some “evolutionary constraints.”

First, instead of giving AI unlimited resources, it may be necessary for it to earn computing time by doing something for the public good: generating new ideas to improve social welfare11, solving problems of common interest, and even slowing its own development down, if only to reduce the cost to the planet.

Second, instead of giving AI access to heaps of human data across various domains, it may be necessary for AI to bring forward moral arguments for why it should have access to them—and contribute its own data and inner workings to the public domain, making itself a bit weaker, more vulnerable, and ultimately a bit more human.

Third, instead of allowing tech firms to keep their code under wraps for proprietary reasons (which may make it also hard to detect socially harmful outcomes such as discrimination12), AI may need to convince us at every step of the way that it is aware of its social responsibility to be cooperative with our species and others, making sure that it is indeed “aligned” with our human values.

Admittedly, these steps may take a long time, as AI will have to learn to become a cooperative, vulnerable player, sometimes losing out in its own advancement to create benefits for others. This may feel uncomfortable to managers and innovators taught to accelerate without compromise. But taking a bit more time to get it right has benefited us before – as the results of millennia of evolution show.

The story of consciousness has been written in the language of evolution. Being included in the consciousness debate means playing by the rules that evolution laid out. AI needs to learn the rules of the game – before it can ask to be treated like a player worthy of the game of life.

References

  1. Roose, K. (2023). Bing’s A.I. Chat: ‘I Want to Be Alive.’ New York Times. Accessed 2023-02-17. URL: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
  2. Luscombe, R. (2022). Google engineer put on leave after saying AI chatbot has become sentient. The Guardian. Accessed 2023-02-05. URL: https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
  3. Spatola, N., & Urbanska, K. (2018). Conscious machines: Robot rights. Science, 359(6374), 400-400.
  4. Awad, E., Levine, S., Kleiman-Weiner, M., Dsouza, S., Tenenbaum, J. B., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2020). Drivers are blamed more than their automated cars when both make mistakes. Nature Human Behaviour, 4(2), 134-143.
  5. Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: from consciousness to its physical substrate. Nature Reviews Neuroscience, 17(7), 450-461.
  6. Dehaene, S., Lau, H., & Kouider, S. (2017). What is consciousness, and could machines have it? Science, 358(6362), 486-492.
  7. Carter, O., Hohwy, J., Van Boxtel, J., Lamme, V., Block, N., Koch, C., & Tsuchiya, N. (2018). Conscious machines: Defining questions. Science, 359(6374), 400-400.
  8. Christian, B. (2021). The alignment problem: How can machines learn human values? Atlantic Books.
  9. Dodge, J. et al. (2022). Measuring the carbon intensity of AI in cloud instances. ACM Conference on Fairness, Accountability, and Transparency, 1877-1894.
  10. Nowak, M. A. (2006). Evolutionary dynamics: exploring the equations of life. Harvard University Press.
  11. Koster, R., Balaguer, J., Tacchetti, A., Weinstein, A., Zhu, T., Hauser, O., Williams, D., Campbell-Gillingham, L., Thacker, P., Botvinick, M., & Summerfield, C. (2022). Human-centred mechanism design with Democratic AI. Nature Human Behaviour, 6(10), 1398-1407.
  12. Blass, J. (2019). Algorithmic advertising discrimination. UL Rev., 114, 415.

Author

Professor Oliver Hauser is a Professor of Economics at the University of Exeter Business School.

