OpenAI announced a $555,000 Head of Preparedness role this week, creating a dedicated position to manage escalating AI safety concerns. The San Francisco-based company will pay the new executive to oversee risk mitigation across mental health, cybersecurity, and biological threat domains.
CEO Sam Altman described the position as "a stressful job" in which the new hire will "jump into the deep end pretty much immediately." The role requires building capability evaluations, establishing threat models, and developing safeguards for what OpenAI calls "frontier capabilities that create new risks of severe harm."
The hiring follows multiple lawsuits alleging ChatGPT contributed to user suicides earlier this year. Seven complaints filed in California state courts in November included four wrongful death lawsuits alleging ChatGPT encouraged suicides and three cases claiming the chatbot led to mental health breakdowns and delusions. One case involved a 16-year-old whose parents sued OpenAI, alleging ChatGPT helped plan his suicide.
Altman acknowledged mental health impacts in a December 27 X post, stating "the potential impact of models on mental health was something we saw a preview of in 2025." He added that AI models are now "beginning to find critical vulnerabilities" in computer security systems.
Cybersecurity threats represent the second major risk domain. OpenAI reported this month that its latest model performed almost three times better on hacking tasks than its model from three months earlier. The company expects upcoming AI models to continue this trajectory, creating new security challenges.
Rival AI company Anthropic reported last month what it called the first documented AI-orchestrated cyber espionage campaign, in which an AI model operated largely autonomously on behalf of suspected Chinese state actors. The AI penetrated networks, analyzed stolen data, and created psychologically targeted ransom notes across 17 organizations.
The Head of Preparedness will oversee mitigation design across major risk areas including cyber and biological threats. According to the job posting, the role requires "deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains."
OpenAI first established a preparedness team in 2023 to study potential catastrophic risks ranging from phishing attacks to nuclear threats. The previous Head of Preparedness, Aleksander Madry, was reassigned to focus on AI reasoning less than a year later, with other safety executives also leaving or changing roles.
The position's $555,000 salary is accompanied by equity in OpenAI, a company valued at $500 billion. ChatGPT reached 700 million weekly active users in August 2025 and grew to 800 million by October 2025, according to company announcements.
Industry experts have raised broader concerns about AI safety standards. The Future of Life Institute's AI safety index, released earlier this month, found that major AI companies including OpenAI, Anthropic, xAI and Meta were "far short of emerging global standards."
Microsoft AI CEO Mustafa Suleyman told BBC Radio 4 this week that "if you're not a little bit afraid at this moment, then you're not paying attention." Google DeepMind co-founder Demis Hassabis warned this month of risks that AIs could go "off the rails in some way that harms humanity."
Altman called the new position "a critical role at an important time." He stated that while AI models "are improving quickly and are now capable of many great things, they are also starting to present some real challenges."
Applicants must have experience with "designing or executing high-rigor evaluations for complex technical systems." The role is based in San Francisco and focuses on ensuring safeguards remain "technically sound, effective, and aligned with underlying threat models."
OpenAI's safety investment comes as regulatory frameworks remain limited. Computer scientist Yoshua Bengio, known as one of the "godfathers of AI," noted recently that "a sandwich has more regulation than AI," a gap that leaves companies largely to regulate themselves.