OpenAI Launches GPT-5.4-Cyber for Vetted Security Professionals Under Tiered Access Program
The new model variant imposes fewer restrictions on vulnerability research and analysis than standard GPT models, giving approved security vendors and researchers a more capable tool for defensive cyber operations.
OpenAI on Tuesday unveiled a new strategy for expanding access to AI models with advanced cybersecurity capabilities, introducing a model variant called GPT-5.4-Cyber and adding new tiers to its Trusted Access for Cyber program, Axios reported on April 14. The model is designed to support defensive security operations and is more permissive than standard versions of GPT-5.4 for activities including vulnerability research and analysis.
The rollout represents a shift in how OpenAI approaches dual-use cybersecurity risk: rather than restricting model capabilities broadly, the company is focusing on validating who has access to the most sensitive features. Approved users who meet the program's highest verification criteria will gain access to GPT-5.4-Cyber, which aims to eliminate what OpenAI described as unnecessary friction for legitimate security operations. Security partners had raised that frustration after standard GPT models occasionally declined to engage with dual-use cyber queries.
Initial access is restricted to vetted security vendors, organizations, and independent researchers. OpenAI stated its goal is to broaden access to thousands of users and security teams over time, contingent on successful verification.
The company said it is not currently providing GPT-5.4-Cyber to US government agencies but is engaged in ongoing discussions and will evaluate such access through its internal governance and safety review protocols.
The launch invites comparison with Anthropic's Project Glasswing, though the two approaches differ. While Anthropic limited Mythos Preview to approximately forty pre-approved organizations in critical infrastructure, OpenAI is pursuing a broader distribution model with tiered permissions rather than a hard whitelist.
Analysts noted that the contrast reflects differing risk appetites and strategic priorities as both companies compete to define the standard for responsible deployment of AI in cybersecurity.
Read the original reporting at Axios.