
Tech experts call for 6-month pause on AI development

The GPT-4 logo with the OpenAI logo in the background

Several leaders in the field of cutting-edge technology have signed a letter that was published on Wednesday, calling for artificial intelligence developers to pause their work for six months.

The letter warns of potential risks to society and humanity as tech giants such as Google and Microsoft race to build AI programs that can learn independently.

The warning comes after the release earlier this month of GPT-4 (Generative Pre-trained Transformer), an AI program developed by OpenAI with backing from Microsoft.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said.

Who signed the letter?

Signatories to the letter included big names such as Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell, as well as household names such as Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak.

The letter says "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."

"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," it adds. "This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

Video: Can chatbot ChatGPT make Bing more popular?

Governments working out their approaches

The letter was organized by the non-profit Future of Life Institute — which is primarily funded by Musk according to the EU's transparency register — and follows attempts by the UK and EU to work out how to regulate this rapidly advancing technology.

The British government released a paper on Wednesday outlining its approach, but said it would "avoid heavy-handed legislation which could stifle innovation."

EU lawmakers have also been in talks regarding the need for AI rules, amid fears that the technology could be used to spread harmful disinformation and eliminate entire categories of jobs.

But the letter has not been without criticism.

"These kinds of statements are meant to raise hype. It's meant to get people worried," Johanna Björklund, an AI researcher and associate professor at Umea University. "I don't think there's a need to pull the handbrake."

She called for more transparency rather than a pause.

ab/msh (AP, Reuters)
