Meta Platforms has rejected the European Union’s new voluntary code of practice for general-purpose artificial intelligence, saying it creates legal uncertainty and exceeds the scope of the EU’s AI Act, The Wall Street Journal reported.
The code, finalized last week by the European Commission, offers model providers guidance on transparency, safety, and copyright, aiming to help companies align with the AI Act, which takes effect on August 2. Though signing is optional, EU officials say non-signatories may face greater regulatory scrutiny.
“Europe is heading down the wrong path on AI,” Meta’s Chief Global Affairs Officer Joel Kaplan wrote on LinkedIn. “This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
The EU’s AI Act bans certain uses of AI and imposes new risk and transparency obligations, particularly on high-risk systems. The rules will be enforced on new general-purpose AI models starting next year and on existing models by 2027. Violations could lead to fines of up to 7% of a company’s global revenue.
Kaplan said Meta shares the concerns raised by major European tech firms, including Mistral AI, ASML, and Airbus. Those companies recently urged the Commission to delay enforcement of the AI Act, arguing that its complexity threatens innovation.
Meanwhile, OpenAI has agreed to sign the code, contingent on final approval by the EU’s AI Board. The company said the decision reflects its commitment to safe and accessible AI for Europeans.
The EU hopes its AI gigafactory initiative, designed to boost computing infrastructure, will help close the gap with the U.S. and China. OpenAI has expressed interest in participating in the program.
Need Help?
If you’re concerned or have questions about how to navigate any country’s AI regulatory landscape, reach out to BABL AI. Their audit experts can offer valuable insight and help ensure you stay informed and compliant.