Over 100 Companies Sign EU AI Pact, Pledging to Drive Trustworthy AI Development Ahead of EU AI Act Implementation

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/26/2024
In News

UPDATE — AUGUST 2025: As of mid-2025, the Pact counts more than 300 signatories, including major tech players and SMEs, all of whom use the voluntary pledges to align early with their compliance obligations. A key milestone came in June 2025, when the European AI Office became fully operational in Brussels. This new supervisory body oversees consistent enforcement of the AI Act, coordinates with national regulators, and manages obligations for general-purpose AI (GPAI) models such as ChatGPT and Llama. GPAI providers are now required to publish technical documentation, summarize their training data, and carry out systemic risk assessments, and labeling obligations for deepfakes and other synthetic content began to apply in 2025.

At the same time, the EU has invested heavily in innovation capacity through its AI Factories program, providing high-performance computing clusters, sector-specific data spaces, and compliance support to startups across Europe. New hubs in Germany, France, and Spain are already live, with more on the way in Italy and Eastern Europe. Together with venture funding and the creation of a European AI Research Council, these efforts form part of the AI Innovation Package.

Globally, the Pact and the Act are setting the tone for regulatory convergence. Brazil, Canada, and Japan have explicitly drawn from the EU model in their frameworks, while the G7 Hiroshima AI Code of Conduct builds on EU-style transparency and risk controls. Even in the U.S., where no federal AI law exists, White House voluntary commitments signed by leading AI companies echo the Pact’s structure.

ORIGINAL NEWS STORY:


More than 100 companies have signed the EU AI Pact, pledging to uphold responsible and ethical standards in artificial intelligence (AI) ahead of the EU AI Act’s full implementation. Announced by the European Commission, this initiative represents a key step toward building a culture of trustworthy AI across industries before the Act’s regulatory obligations take effect.


Building a Foundation for Responsible AI


The EU AI Pact encourages organizations to voluntarily align with the core principles of the AI Act. Signatories, ranging from multinational corporations to small and medium-sized enterprises (SMEs), span diverse sectors, including IT, telecommunications, healthcare, banking, and automotive manufacturing.

By joining the Pact, companies commit to integrating responsible AI practices into their operations, including improving transparency, ensuring safety, and mitigating bias. The initiative also positions the EU as a global leader in the ethical governance of AI technologies.

Three Core Commitments


Under the Pact, participating companies have agreed to three foundational actions. First, they must establish an internal AI governance strategy. Each organization is expected to create frameworks for ethical AI development that align with the EU AI Act’s compliance requirements. This step promotes accountability and prepares companies for upcoming regulatory enforcement.

Second, companies are required to map and assess their high-risk AI systems. This involves identifying tools and applications that may fall under the "high-risk" category defined by the Act, such as those used in healthcare, finance, or law enforcement. These systems will be subject to strict oversight to protect individuals' rights and safety.

Third, signatories have pledged to promote AI literacy and awareness among their staff. Training programs and internal education efforts will ensure that employees understand the ethical and legal responsibilities of AI development, helping organizations foster a culture of trust and compliance.


Beyond the Basics: Additional Voluntary Pledges


Many signatories have gone further by taking supplementary pledges. These include guaranteeing human oversight in AI decision-making, addressing algorithmic bias, and clearly labeling AI-generated or synthetic content. Transparency measures—such as watermarking deepfakes—are especially important amid rising concerns over AI-driven disinformation. These voluntary commitments are designed to prepare organizations for the AI Act’s legally binding obligations, which will soon govern transparency, accountability, and risk management in AI systems across the European Union.


Supporting Innovation Through AI Factories


The EU AI Pact operates alongside the AI Factories program, a central component of the AI Innovation Package. The AI Factories offer startups and companies access to shared data spaces, technical expertise, and high-performance computing infrastructure. These resources help accelerate innovation while ensuring compliance with Europe’s ethical and safety standards. The program will support priority sectors such as healthcare, energy, transportation, aerospace, and robotics. By combining innovation resources with strong governance, the European Commission aims to build a resilient AI ecosystem that balances competitiveness with societal trust.

Toward Global Convergence


As the EU finalizes the AI Act’s enforcement phase, the AI Pact is already influencing global policy. Countries including Brazil, Canada, and Japan have referenced the EU’s approach in shaping their own frameworks. The G7 Hiroshima AI Code of Conduct also reflects similar principles of transparency and accountability. Even in the United States—where no federal AI law yet exists—the White House’s voluntary AI commitments signed by major technology companies closely mirror the EU’s Pact structure. Together, the Pact and the Act signal a new era of global regulatory alignment, setting benchmarks for safety, fairness, and transparency in AI development.


Need Help?


If you have questions or concerns about the EU's AI proposals and guidelines, or about any global guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their audit experts can offer valuable insights and help ensure you stay informed and compliant.
