Meta Warns EU AI Code Risks Stifling AI Development

Meta Platforms Inc. officially refused to sign the European Union’s voluntary AI Code of Practice on July 18, 2025, becoming the first major tech company to openly challenge the bloc’s landmark AI regulation ahead of looming compliance deadlines.

The decision, confirmed by Chief Global Affairs Officer Joel Kaplan, positions Meta in direct opposition to a legal regime designed to set global standards for artificial intelligence governance.

Kaplan Leads Meta’s Public Pushback

Joel Kaplan stated that “Europe is heading down the wrong path on AI” and that the Code “introduces a number of legal uncertainties for model developers.” His comments, delivered in a July 18 statement, accuse European regulators of creating confusion instead of clarity. Kaplan argues that, rather than reducing red tape, the Code could stall the development of next-generation AI and hamper European businesses’ ability to compete globally.

The EU’s Code of Practice and Industry Divide

The EU’s AI Code of Practice was published by the European Commission on July 10, 2025, just weeks before the AI Act’s obligations take effect. Designed as a voluntary framework, the Code aims to help companies comply with sweeping new rules covering transparency, copyright, and safety in AI.

While Meta refuses to participate, other technology leaders are taking a different path. OpenAI announced its intention to sign the Code on July 11, 2025, highlighting its commitment to providing secure and accessible AI models for the European market.

Microsoft President Brad Smith echoed this approach, stating, “I think it’s likely we will sign the Code.” The willingness of OpenAI and Microsoft to comply creates a growing competitive rift in the industry, as rival companies face increasing pressure to declare their positions.

Corporate Resistance and Industry Demands

Meta’s rejection has emboldened a broader industry coalition seeking to slow the Act’s rollout. Forty-four European companies, including Bosch, Siemens, SAP, Airbus, and BNP Paribas, signed a letter calling for a two-year “stop the clock” delay.

The group argues that vague and overlapping rules could stifle European innovation and cede ground to competitors in the U.S. and China. Their letter warns that the Act may “jeopardize” the continent’s chances of producing global tech leaders.

Commission Rejects All Calls for Delay

In a July 4, 2025, press briefing, Commission spokesperson Thomas Regnier stated: “There is no stop the clock. There is no grace period. There is no pause.”

This blunt rejection underscores the EU’s resolve to enforce the new rules without exception or delay, despite growing corporate pressure and public lobbying campaigns.

The AI Act obligations for general-purpose AI models take effect on August 2, 2025. These sweeping requirements demand rigorous documentation, transparency in training data, clear copyright compliance, and robust safety measures for advanced AI models.

The Commission’s enforcement powers take effect on August 2, 2026, providing companies with a one-year transition period before full penalties apply.

Penalties and Systemic Risk Standards

Non-compliance with the AI Act can result in maximum penalties of €35 million or 7% of the company’s global annual revenue, whichever is higher.

The law also establishes a technical threshold for “systemic risk” at 10^25 floating-point operations (FLOPs), targeting only the most powerful AI systems for the strictest oversight.
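
To make the “whichever is higher” rule and the compute threshold concrete, here is a minimal Python sketch of how the two quantitative standards described above interact. The revenue and compute figures in it are hypothetical illustrations, not data from any real company.

```python
# Illustrative sketch of the AI Act's two quantitative thresholds:
# the fine cap (the higher of a fixed EUR 35 million or 7% of global
# annual revenue) and the 10^25 FLOPs systemic-risk bar.
# All company figures below are hypothetical examples.

FINE_FLOOR_EUR = 35_000_000       # fixed EUR 35 million component
FINE_REVENUE_SHARE = 0.07         # 7% of global annual revenue
SYSTEMIC_RISK_FLOPS = 1e25        # training-compute threshold

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Return the maximum penalty: the higher of the two components."""
    return max(FINE_FLOOR_EUR, FINE_REVENUE_SHARE * global_annual_revenue_eur)

def is_systemic_risk(training_flops: float) -> bool:
    """A model at or above 10^25 training FLOPs falls in the strictest tier."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

# For a hypothetical firm with EUR 150 billion in annual revenue, the 7%
# component (EUR 10.5 billion) dwarfs the EUR 35 million floor.
print(f"Maximum fine: EUR {max_fine_eur(150e9):,.0f}")
print(f"Systemic risk at 3e25 FLOPs? {is_systemic_risk(3e25)}")
```

In practice this means the fixed €35 million floor binds only for smaller firms; for any company with more than €500 million in revenue, the 7% component dominates.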

Civil Society Mobilizes to Defend the Act

As industry lobbyists press for delays, a coalition of 52 civil society organizations sent a letter on July 9, 2025, opposing any postponement of the AI Act’s implementation.

Groups including European Digital Rights (EDRi) and AlgorithmWatch warn that postponements could weaken hard-won protections for rights, democracy, and the environment.

Economic Stakes and Global Implications

The regulatory standoff is about more than just compliance costs. European businesses argue that overly strict regulation could slow investment and innovation, harming Europe’s position in the global AI race.

Meta and its allies argue that the Act’s requirements will increase costs and introduce new uncertainties. At the same time, EU regulators emphasize that robust rules are essential for maintaining consumer trust and fostering long-term industry growth.

The EU’s approach may set a precedent for other governments seeking to regulate artificial intelligence, shaping how companies and regulators interact worldwide.

Technical and Compliance Challenges Ahead

For companies, meeting the Code’s requirements demands new systems for documenting data, tracking model development, managing copyright claims, and ensuring technical safety.
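
As a rough illustration only, the sketch below shows one way such a documentation record might be structured. The field names and values are hypothetical and do not reflect the Code’s actual reporting template.

```python
# Hypothetical sketch of a model documentation record, loosely following
# the categories named above (training data, development tracking,
# copyright, safety). Not the Code's actual template.
from dataclasses import dataclass, field

@dataclass
class ModelDocRecord:
    model_name: str
    version: str
    training_data_sources: list[str]    # provenance of training data
    copyright_policy_url: str           # public copyright-compliance policy
    training_compute_flops: float       # relevant to the 10^25 threshold
    safety_evaluations: list[str] = field(default_factory=list)

record = ModelDocRecord(
    model_name="example-gpm",           # hypothetical general-purpose model
    version="1.0",
    training_data_sources=["licensed-corpus-a", "public-web-crawl-2024"],
    copyright_policy_url="https://example.com/copyright-policy",
    training_compute_flops=8e24,        # below the systemic-risk threshold
)
print(record)
```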

Small firms, in particular, face a steep learning curve and resource constraints. With enforcement on the horizon, the world is watching to see whether the EU’s regulatory model can survive industry resistance and set a global standard for trustworthy AI.

Key Takeaways:

  • Meta officially refused to sign the EU’s voluntary AI Code of Practice on July 18, 2025, intensifying industry pushback against new European AI regulations.
  • Joel Kaplan criticized the Code as introducing legal uncertainties for developers, deepening the rift between Meta and EU lawmakers.
  • While Meta resists, OpenAI and Microsoft have signaled they will sign the Code, highlighting a growing competitive divide among leading tech firms.
  • The AI Act’s obligations for general-purpose models take effect on August 2, 2025, with fines of up to €35 million or 7% of annual revenue for non-compliance.
  • EU regulators have categorically rejected industry calls for delays, insisting there is “no stop the clock,” no grace period, and no pause in the rollout of the Act.
  • Fifty-two civil society groups have publicly urged the Commission to resist pressure for delays, arguing that robust AI regulation is essential to protect rights and democracy.