UPDATE — AUGUST 2025: By mid-2025, the Pact counted over 300 signatories, including major tech players and SMEs, all using the voluntary pledges to align early with compliance obligations. A key milestone came in June 2025, when the European AI Office became fully operational in Brussels. This new supervisory body oversees consistent enforcement of the AI Act, coordinates with national regulators, and manages obligations for general-purpose AI (GPAI) models such as ChatGPT and Llama. GPAI providers must now publish technical documentation, summarize training data, and carry out systemic risk assessments, with labeling obligations for deepfakes and other synthetic content beginning to apply in 2025.
At the same time, the EU has invested heavily in innovation capacity through its AI Factories program, providing high-performance computing clusters, sector-specific data spaces, and compliance support to startups across Europe. New hubs in Germany, France, and Spain are already live, with more on the way in Italy and Eastern Europe. Together with venture funding and the creation of a European AI Research Council, these efforts form part of the AI Innovation Package.
Globally, the Pact and the Act are setting the tone for regulatory convergence. Brazil, Canada, and Japan have explicitly drawn from the EU model in their frameworks, while the G7 Hiroshima AI Code of Conduct builds on EU-style transparency and risk controls. Even in the U.S., where no federal AI law exists, White House voluntary commitments signed by leading AI companies echo the Pact’s structure.
ORIGINAL NEWS STORY:
Over 100 Companies Sign EU AI Pact, Pledging to Drive Trustworthy AI Development Ahead of EU AI Act Implementation
The European Commission announced that over 100 companies have become the first signatories of the EU AI Pact, committing to voluntary pledges that promote the safe, responsible, and ethical development of artificial intelligence (AI). This significant milestone comes as the EU prepares for the full implementation of the EU AI Act, which was introduced earlier this year. The signatories range from multinational corporations to small and medium enterprises (SMEs) across various industries, including IT, telecommunications, healthcare, banking, automotive, and aeronautics.
The AI Pact, designed to align with the principles of the EU AI Act, aims to establish a voluntary framework that encourages companies to proactively adopt responsible AI governance ahead of the EU AI Act’s full application. Through these pledges, companies agree to a set of actions intended to ensure their AI systems are safe, transparent, and fair, furthering the EU’s commitment to becoming a global leader in trustworthy AI innovation.
Signatories of the AI Pact have agreed to commit to three core actions. First, companies commit to developing an AI governance strategy that fosters the adoption of AI technologies within their organizations while preparing them for future compliance with the EU AI Act. This strategy includes setting up frameworks for ethical AI development and implementing internal processes that align with the regulatory requirements of the EU AI Act.
Second, companies must conduct thorough mapping of their high-risk AI systems. This involves identifying AI tools and applications within their organizations that may be classified as high-risk under the EU AI Act’s provisions. These systems, which often include AI applications in healthcare, finance, and law enforcement, will be subject to stricter scrutiny to prevent harm and ensure the protection of individuals’ rights and safety.
Third, companies pledge to promote AI literacy and awareness among their staff. This commitment is crucial in ensuring that employees understand the ethical and legal implications of AI technologies. By improving AI literacy, companies can foster a culture of responsibility and accountability in their AI development processes, helping mitigate potential risks.
In addition to these core commitments, more than half of the companies have taken on additional pledges. These include ensuring human oversight in AI operations, mitigating the risks of bias, and transparently labeling AI-generated content, such as deepfakes. This transparency initiative is particularly important, given the rising concerns over the misuse of AI for deceptive purposes.
The launch of the AI Pact is part of the European Commission’s broader efforts to enhance the region’s leadership in AI innovation while ensuring that the technology is developed in a way that prioritizes safety and ethical considerations. As the EU AI Act comes into full effect, the Pact serves as a bridge for companies to transition smoothly into compliance with upcoming regulations.
Complementing the AI Pact, the European Commission is also ramping up its support for AI innovation through initiatives such as the AI Factories program. The AI Factories will provide a one-stop shop for startups and industry players to access the resources they need to develop and deploy AI technologies. These resources include access to data, talent, and high-performance computing infrastructure, which are critical for accelerating AI development.
The AI Factories will play a pivotal role in advancing AI applications in key European sectors such as healthcare, energy, automotive, aerospace, and robotics. This initiative is part of the Commission’s AI Innovation Package, which also includes venture capital support, the establishment of common European data spaces, and the creation of a European AI Research Council. Together, these initiatives aim to position Europe as a global hub for AI research and development while adhering to strict ethical standards.
Need Help?
If you have questions or concerns about the EU’s AI proposals and guidelines, or about other global AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.