Staff At Top AI Companies Sign Letter Calling For More Transparency About Risks
Public deserves to know what's going on, they say
Current and former employees at prominent AI companies, including OpenAI, DeepMind and Anthropic, have signed an open letter calling on AI companies to commit to a set of principles around safety and transparency.
The signatories say they believe in the potential of AI to solve many of humanity's problems and deliver unprecedented benefits, but argue that the profit motive is providing "strong financial incentives to avoid effective oversight," adding that "bespoke structures of corporate governance" are not up to the task.
Companies cannot be relied upon to share risk-related information with governments and civil society, they argue. In the absence of proper oversight structures, only current and former employees at AI companies can hold them accountable.
The signatories note that restrictive agreements often prevent employees from speaking about legal but questionable practices they may have seen.
Several reports have detailed how employees at OpenAI had their stock in the company tied to "non-disparagement" agreements.
The company says it has since reversed this policy, but such practices demonstrate the barriers to transparency and accountability imposed by companies that stand to make vast profits from AI.
Other examples of the opaque nature of operations include the mysterious firing and rehiring of Sam Altman, CEO of OpenAI; the departures of chief scientist and superalignment co-lead Ilya Sutskever and safety researcher Jan Leike from that company; and the layoffs of AI safety teams across the tech sector.
The open letter, headed "A Right to Warn about Advanced Artificial Intelligence", calls on "advanced AI companies" to sign up to the following four principles, insofar as confidentiality allows:
1. That the company will not enter into or enforce any agreement that prohibits "disparagement" or criticism of the company for risk-related concerns;
2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organisation with relevant expertise;
3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public;
4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.
Signatories include Jacob Hilton, formerly a reinforcement learning researcher at OpenAI, Ramana Kumar, a former AGI safety researcher at Google DeepMind, and Neel Nanda, a research engineer at DeepMind who previously worked for Anthropic.
The letter is endorsed by three of the biggest names in AI development: Yoshua Bengio, Geoffrey Hinton and Stuart Russell.
This article originally appeared on our sister site, Computing.