OpenAI is granting a select group of users access to a new artificial intelligence model designed to identify software security vulnerabilities. This strategic move comes just one week after rival Anthropic PBC announced a limited release of its own security tool, Mythos.
The San Francisco-based company began rolling out GPT-5.4-Cyber on Tuesday to find and fix digital flaws. "We are fine-tuning our models specifically to enable defensive cybersecurity use cases, starting today with a variant of GPT-5.4 trained to be cyber-permissive: GPT-5.4-Cyber," OpenAI stated.
The model is currently available to participants in the Trusted Access for Cyber program, which launched in February. This initiative allows cybersecurity professionals to test OpenAI’s most capable offerings under reduced constraints for probing vulnerabilities.
Anthropic’s competing Mythos model recently sparked significant concern among both financial firms and government agencies. During a meeting last week, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell warned Wall Street leaders about Mythos, according to reporting from Bloomberg.
High-level executives were told to take the risks associated with the Mythos model seriously. OpenAI, for its part, acknowledged the threat landscape in its own announcement: "Digital infrastructure has already been vulnerable for years, before advanced AI even came along," the company noted, adding that "threat actors are experimenting with novel AI-driven approaches."
OpenAI intends to scale the program from its initial hundreds of testers to thousands of verified defenders in the coming weeks. The company emphasized its commitment to "democratized access" while maintaining rigorous identity verification for users of these advanced, offensive-capable tools.

