A new report warns that artificial intelligence could drastically lower the barriers to bioterrorism, leaving U.S. biosecurity defenses ill-prepared for the emerging risks of AI-enabled biotechnology.
Published August 6, 2025, by the Center for Strategic and International Studies (CSIS), “Opportunities to Strengthen U.S. Biosecurity from AI-Enabled Bioterrorism: What Policymakers Should Know” argues that rapid advances in artificial intelligence, especially large language models (LLMs) and biological design tools (BDTs), could help bad actors develop or even invent pathogens with pandemic potential. The report, authored by Georgia Adamson and Gregory C. Allen, stresses that the United States must act urgently to update its biodefense strategies.
For nearly a century, the cost and expertise required to build bioweapons have steadily declined. What once demanded thousands of scientists and vast budgets could, in some cases, now be accomplished with limited resources and commercial lab services. According to the report, AI threatens to accelerate this trend even further. Widely available LLMs are “on the cusp” of helping novices plan and execute biological attacks, while cutting-edge BDTs could someday design entirely new pathogens.
The authors cite recent safety assessments from OpenAI and Anthropic showing that their latest models can already provide expert-level advice in sensitive areas of virology. Meanwhile, the Evo 2 model, developed by the Arc Institute with Stanford and NVIDIA and trained on an enormous database of genomes, demonstrates the power of BDTs to simulate genetic behavior and design novel sequences. Though current systems cannot yet create pandemic-scale pathogens, the report warns that the trajectory is clear, and that security measures are falling behind.
Current U.S. safeguards, such as list-based screening of DNA synthesis orders, are inadequate in an era when AI could generate dangerous sequences that match nothing on official watchlists. At present, just 63 agents and toxins are federally regulated. “In theory, these mechanisms work until a BDT creates the 64th highly contagious and lethal organism,” the authors caution. Similarly, safeguards built into advanced design models like Evo have already been circumvented, raising concerns that such guardrails won’t hold as capabilities improve.
The report outlines three key recommendations for policymakers. First, Congress should fund the National Institute of Standards and Technology (NIST) and the new U.S. Center for AI Standards and Innovation (CAISI) to continue work at the intersection of AI and biosecurity. Second, CAISI should lead evaluations of frontier biological AI tools with support from the TRAINS Taskforce and international AI Safety Institutes. And third, the White House should direct agencies to develop a standardized AI-enabled screening system for DNA synthesis, capable of identifying novel threats that evade today’s static lists.
The Trump administration’s July 2025 AI Action Plan acknowledged the dual-use risks of biotechnology and called for stronger defenses. But proposed budget cuts to NIST, the report warns, could undercut progress just as the stakes are rising.
AI’s promise in medicine and research remains immense, from accelerating vaccine discovery to improving diagnostics. Yet, the authors argue, policymakers must ensure those benefits are not overshadowed by risks of catastrophic misuse. “As the barriers to bioterrorism continue to fall in the age of AI,” they conclude, “the actions U.S. leaders take may determine whether such threats remain the realm of a few evil geniuses—or fall within reach of evil morons as well.”
Need Help?
If you’re concerned or have questions about how to navigate the AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.