OpenAI Unveils New Safeguards as AI Nears ‘High’ Capability Threshold in Biology

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 06/27/2025
In News

OpenAI is taking new steps to ensure its frontier AI models are developed responsibly as they approach “High” capability levels in biology—warning that these systems could, without the right safeguards, assist in the creation of biological threats by users with only limited expertise.
In a statement, OpenAI outlined a comprehensive plan to mitigate the dual-use risks posed by powerful AI tools in the life sciences. While advanced models are already helping researchers design vaccines, accelerate drug development, and identify enzymes for sustainable fuels, the same capabilities could also be exploited to guide harmful experiments or support bioweapon development.
“We don’t think it’s acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards,” OpenAI said.
The company is preparing for a future where models can reason over biological data, predict chemical reactions, and guide lab procedures with a level of precision that could pose biosecurity concerns. Its approach includes training AI to refuse harmful prompts, deploying always-on detection systems, enforcing strict usage policies, and conducting end-to-end “red teaming” exercises with biothreat experts.
OpenAI’s Safety and Security Committee has already reviewed and implemented these measures in current models like GPT-4o. The company says it will not release any model that crosses the “High” capability threshold—defined in its Preparedness Framework—without first ensuring strong mitigations are in place.
The company is also collaborating with global experts, including Los Alamos National Laboratory, and plans to host a biodefense summit in July with U.S. and international partners. The summit aims to align on responsible deployment practices and explore ways AI can be used to bolster public health and biosecurity.
OpenAI emphasized that while its focus is on securing its own models, broader action is needed across industry and government. It called for enhanced screening of DNA synthesis, investment in early detection systems for new pathogens, and support for startups working at the intersection of AI and biodefense.
“Our safety work is not just about models—it’s about preparing society,” the company said.
Need Help?
If you have questions or concerns about OpenAI’s capability thresholds, or about global AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.