A group of 12 current and former employees of OpenAI and Google DeepMind, two of the world's frontrunners in AI development, issued an open letter raising concerns about AI safety risks and a lack of transparency and accountability at leading AI companies.
The letter was endorsed by renowned AI researchers Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.
It warns of risks ranging from the entrenchment of existing inequalities, manipulation, and misinformation to the loss of control of autonomous AI systems, "potentially resulting in human extinction." The letter emphasizes the importance of employees as key messengers to the public.
It expresses concern that AI companies have financial incentives to avoid effective oversight and that self-regulation will not address the issue, highlighting the need for greater transparency, accountability, and protection for employees who voice concerns.
Criticism of OpenAI's leadership, particularly CEO Sam Altman, has grown amid the company's unveiling of new AI products and controversies such as a human-like chatbot voice that closely resembled Scarlett Johansson's.
OpenAI has also faced backlash for threatening to revoke equity from departing employees who declined to sign strict NDAs, for its decision to disband a safety team, and for the loss of key researchers.
A former OpenAI researcher claimed the company is rushing to be the first to develop artificial general intelligence (AGI).
Sources: Axios, The New York Times, CNN, Futurism, NZ Herald, USA News, IB Times, and Nieman Lab.
This article was written in collaboration with Alchemiq, a generative AI news company.