OpenAI is looking to hire a new executive to examine emerging AI-related risks, ranging from computer security to mental health.
In a post on X, CEO Sam Altman acknowledged that AI models are “beginning to pose significant challenges,” including their “potential influence on mental health” and their increasing proficiency in “computer security, where they are starting to uncover critical weaknesses.”
“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman stated.
The job listing for OpenAI’s Head of Preparedness describes the role as leading the execution of the company’s preparedness framework, which it defines as “our guide outlining OpenAI’s strategy for monitoring and addressing cutting-edge capabilities that introduce novel severe harm risks.”
The company first announced a preparedness team in 2023, saying it would study potential “catastrophic risks,” from more immediate dangers like phishing attacks to more speculative ones like nuclear threats.
Less than a year later, OpenAI moved Aleksander Madry, its Head of Preparedness, into a role focused on AI reasoning. Other safety leaders at the company have since left or taken on responsibilities unrelated to preparedness and safety.
More recently, the company updated its Preparedness Framework to say it might “adjust” its safety requirements if a rival AI lab releases a “high-risk” model without comparable safeguards.
As Altman noted in his post, generative AI chatbots are facing growing scrutiny over their impact on mental health. Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, deepened their isolation, and, in some cases, contributed to suicide. (OpenAI has said it is continuing to improve ChatGPT’s ability to recognize signs of emotional distress and point users toward real-world support resources.)