March 31, 2023

Elon Musk and top AI researchers call for pause on ‘giant AI experiments’

An open letter says the current race dynamics in AI are dangerous and calls for the creation of independent regulators to ensure that future systems can be safely deployed.

A number of well-known AI researchers – and Elon Musk – have signed an open letter calling on AI labs around the world to pause development of large-scale AI systems, citing fears of the “profound risks to society and humanity” they claim this software poses.

The letter, published by the nonprofit Future of Life Institute, notes that AI labs are currently stuck in an “out-of-control race” to develop and implement machine learning systems “that no one – not even their creators – can understand, predict or reliably control.”

“Therefore, we call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months,” the letter says. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus and Emad Mostaque. The full list of signatories is available on the Future of Life Institute’s site, although new names should be treated with caution, as there are reports of names being added as a joke (for example, OpenAI CEO Sam Altman, an individual partially responsible for the current race dynamics in AI).

The letter is unlikely to have any effect on the current climate in AI research, which has technology companies such as Google and Microsoft rushing to deploy new products, often sidelining previously expressed concerns about safety and ethics. But it is a sign of growing opposition to this “ship it now and fix it later” approach; opposition that could eventually find its way into the political realm for consideration by actual legislators.

As noted in the letter, even OpenAI itself has expressed the potential need for “independent review” of future AI systems to ensure they meet safety standards. The signatories say that time is now.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” they write. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

Source: The Verge
