A new report from the European Cyber Agora (ECA), spearheaded by the German Marshall Fund of the United States, emphasizes the urgent need for the European Union and the United States to collaborate on creating a robust, democratic framework for global AI governance. The findings, derived from two expert workshops held in 2024, outline steps to align AI policies and standards across the Atlantic while advancing international partnerships.
The workshops, part of ECA’s “AI, Transatlantic Alignment, and Geopolitics” workstream, identified three primary pillars for cooperation: regulatory alignment, economic competitiveness, and international engagement. At the core of the dialogue is a shared commitment to uphold democratic values and fundamental rights in AI applications. Both the EU and the U.S. have already made strides with the EU’s AI Act and the U.S. Executive Order on AI, but the report highlights gaps that need addressing to avoid fragmented governance systems globally.
The report suggests a multi-pronged approach to strengthen transatlantic cooperation:
- Standards and Risk Management: Building on the work of the EU-U.S. Trade and Technology Council, which has developed joint roadmaps for AI risk management and taxonomy, the report urges deeper collaboration on technical standards. It highlights the potential for these standards to influence global AI governance, similar to the “Brussels effect” seen with GDPR.
- Global Partnerships: Recognizing the competitive edge of authoritarian states in the AI space, the report stresses the need for coordinated international outreach. It suggests leveraging multilateral platforms like the G7 and United Nations and expanding bilateral ties with key regions, including Africa, Latin America, and Asia.
- Addressing Divergences: Despite shared objectives, the EU and the U.S. differ in their regulatory frameworks. The EU’s AI Act applies a horizontal, comprehensive model, while the U.S. relies on sectoral and agency-specific guidelines. These differences present challenges but also opportunities for complementary approaches.
- Economic Security and Innovation: The report identifies economic security and research investment as key areas for collaboration. While the U.S. leads in private-sector AI funding, the EU’s structured regulatory approach can provide a model for balancing innovation with accountability.
The report cautions that political uncertainties could fracture transatlantic cooperation, including the upcoming European Parliament elections and the possibility of policy shifts under a new U.S. administration. It also notes differing priorities in economic security: the U.S. is focused on national security and China, while the EU’s strategy is less overtly geopolitical.
To ensure robust governance, the report advocates for greater inclusion of civil society and the private sector in policymaking processes. It calls for meaningful multistakeholder dialogue to address the societal impacts of AI, particularly in developing countries where digital divides persist.
Need Help?
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.