A new report from the Atlantic Council urges the U.S. defense and national security community to engage more actively in debates over artificial intelligence (AI) regulation, warning that civilian-focused rules could have unintended consequences for military readiness, innovation, and capability development.
Authored by Deborah Cheverton, the report, “Second-order Impacts of Civil Artificial Intelligence Regulation on Defense,” outlines how regulatory frameworks designed for civil AI systems could ripple into the defense sector, even when those frameworks contain explicit carve-outs for military use.
“The assumption that defense is insulated from civil AI regulation is deeply flawed,” the report argues. “When technology is inherently dual-use, those carve-outs can be porous at best.”
Cheverton identifies three areas of concern: market-shaping regulations that could restrict the availability of AI tools to defense agencies; judicial interpretations of civil laws that may encroach on military operations; and increased development costs tied to compliance burdens. The report categorizes areas of regulatory development into three calls to action: “Be Supportive,” “Be Proactive,” and “Be Watchful.”
Among the supportive areas, the report highlights technical standards, risk assessment tools, and safety frameworks, urging the defense community to align with civil-sector best practices to reduce costs and improve interoperability. But it also calls for vigilance around data protection laws, legal liability frameworks, and the regulation of adjacent sectors such as policing and surveillance, which could indirectly affect defense AI applications.
“Adopting civil tools can be cost-effective,” Cheverton writes, “but without proactive input, the national security community risks being sidelined by frameworks that were never designed with it in mind.”
The report also provides a comparative analysis of AI regulatory developments in major jurisdictions, including the United States, China, the European Union, and Singapore. In the U.S., federal efforts remain fragmented, with no overarching AI law, while states such as Colorado and Utah have advanced legislation focused on consumer protection. Experts describe the European Union’s AI Act, which entered into force in August 2024, as the most comprehensive AI law to date, though it too exempts military use.
Cheverton calls on military and intelligence stakeholders to engage in ongoing regulatory discourse. “AI governance,” she concludes, “should not be treated as a purely civilian affair. The stakes are too high.”