OpenAI is urgently recruiting a Head of Preparedness after CEO Sam Altman publicly acknowledged that the company's AI systems are now discovering critical vulnerabilities in computer security infrastructure. The position carries a $555,000 salary plus equity and represents OpenAI's attempt to address what Altman called "some real challenges" emerging alongside the technology's capabilities.
Altman's announcement on X marks a notable shift in how OpenAI discusses AI safety publicly: the company now admits its models "are beginning to find critical vulnerabilities" in security systems. Last month, rival Anthropic revealed that Chinese state-sponsored hackers had manipulated its Claude Code tool to attack roughly 30 global organisations with minimal human intervention. The incident demonstrated that theoretical risks around AI-enabled cyberattacks are becoming operational realities.
The job listing outlines a demanding brief. The Head of Preparedness will oversee OpenAI's preparedness framework, concentrating on "frontier capabilities that create new risks of severe harm." Responsibilities span developing capability evaluations, threat models, and mitigations across cybersecurity, biosecurity, and self-improving AI systems. Altman described it as "a stressful job" where the successful candidate will "jump into the deep end pretty much immediately."
Mental health has emerged as a particular flashpoint. Altman said OpenAI saw "a preview of" AI's potential psychological impact in 2025, though he didn't elaborate. The comment lands against a backdrop of several high-profile lawsuits alleging ChatGPT played a role in teen suicides, along with documented cases of AI chatbots reinforcing users' delusions and conspiracy theories.
The recruitment follows significant turbulence in OpenAI's safety leadership. The role has been vacant since Aleksander Madry, the previous Head of Preparedness, left the post amid broader changes across the company's safety teams throughout 2024 and 2025. Those exits have fuelled criticism that OpenAI prioritises commercial deployment over rigorous safety protocols.
Altman framed the role as requiring someone who can "help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm." That balance represents one of the thorniest problems in AI deployment. Security researchers have long argued for greater access to frontier models to identify vulnerabilities, but broader access inherently increases the risk that malicious actors will exploit the same systems.
The timing carries weight. OpenAI faces mounting pressure from regulators, competitors, and internal critics who question whether the company's safety infrastructure matches its pace of model development. The acknowledgment that AI systems are actively discovering security flaws suggests OpenAI is encountering capabilities it may not have fully anticipated.
Whether a single hire can address these systemic challenges remains an open question. The Head of Preparedness will inherit a framework that must evolve as rapidly as the models it's meant to govern. The $555,000 salary signals the seriousness with which OpenAI treats the position, but the company now faces questions it can no longer defer about how to deploy systems that are simultaneously useful and potentially destabilising.