OpenAI Forms Team to Manage Risks of Superintelligent AI
OpenAI, the company behind the AI chatbot ChatGPT, has announced the formation of a new team dedicated to managing the risks associated with superintelligent AI systems. In a blog post on July 5, the organization expressed its intention to steer and control AI systems that surpass human intelligence.
While OpenAI believes that superintelligence has the potential to address numerous challenges, it also acknowledges the risks associated with it. The organization warns that the immense power of superintelligence could pose dangers, including the potential disempowerment or even extinction of humanity.
To address these concerns, OpenAI plans to dedicate 20% of the compute it has secured to date to this initiative. The organization aims to recruit and develop a team of researchers focused on automated alignment, with the goal of building an automated alignment researcher that performs at roughly human level.
Ilya Sutskever, OpenAI’s chief scientist, and Jan Leike, head of alignment at the research lab, will co-lead the effort. OpenAI has also extended an open invitation to machine learning researchers and engineers to join the team and contribute to this mission.