Why We Need to Pause
TL;DR Recent AI breakthroughs have overwhelmed organizations and institutions. Fair and equitable use of AI should be seen as a right available to all. Without a pause to reflect, we are exposed to fundamental weaknesses in governance, legal, and ethical frameworks that must be addressed now.
Perhaps just as significant as the hopes it raises, the announcement of ChatGPT reminded many people that the rapid availability of such tools will force organizations and individuals to address challenges that they may well be ill prepared for. Beyond automation and intelligent decision making, LLM-based AI systems are capable of generating vast amounts of information that is not only indistinguishable from human-generated material, but can also fool people into believing it comes from a human source. Furthermore, the intelligence underlying such systems is limited in scope, often unverified, and subject to manipulation.
Consider, for example, the implications of a sophisticated AI tool that has no concept of right or wrong. Ask it a question and it responds with an answer that is plausible and believable. However, it may also be incomplete, misleading, or simply false. Those already making use of ChatGPT report that its responses are “dangerously creative”. That is, its creativity knows no bounds, and it places no limits on whether its answers are true or false.
This can have disturbing results. Cassie Kozyrkov, Chief Decision Scientist at Google, calls ChatGPT “the ultimate bullshitter”. It provides seemingly correct answers to anything and everything. But it has no filter on what it says and is incapable of determining what is true and what is not. It is dangerous precisely because it has no interest in ensuring the validity of its responses.
Furthermore, ChatGPT is widely available at zero cost. This makes it very attractive across many domains; so much so that over one million users signed up in less than a week. Its potential uses appear to be never-ending. But they also bring with them some troubling questions.
Imagine the implications if every student writing an essay can use ChatGPT to generate the text. Every company producing software can deploy ChatGPT to create its code. Every social media channel is clogged with responses created by ChatGPT. And so on. What will this do to many of our knowledge-based professions? What are the implications for intellectual property and liability in a world where we cannot distinguish how information is generated? How will we evaluate the value and validity of AI-generated responses? Will the wide availability of AI-generated responses destabilize many of our existing systems? These and many other questions are left hanging in the air.
It is with this in mind that Paul Kedrosky, an economist and MIT fellow, refers to ChatGPT as “a virus that has been released into the wild”. He believes that most organizations are completely unprepared for the impact of ChatGPT. He sees its broad release without restrictions as reckless, opening a Pandora’s box that should have remained closed. With the release of ChatGPT we are now beginning to realize just how much is still to be debated about the future of our digital world.
The latest 2023 AI Index from Stanford University takes this argument one step further. It raises the concern that decisions about how to deploy the latest AI technology, and how to mitigate its risks, are in the hands of a few Big Tech companies. Yet, even as their influence grows, these companies have been seen to cut their AI safety and ethics teams. Several leading figures in AI have highlighted the challenges this poses for AI’s future and have called for more focus and investment to govern the use of AI.
In such circumstances, those requesting a pause see the unmanaged release of ever more sophisticated AI systems as irresponsible at best, and dangerous at worst. Cooperative agreements and increased focus are required to reduce AI harms that are already having significant negative impacts and may well soon be out of control.