
As the European Union moves closer to finalizing its AI regulations, the third and final draft of the Code of Practice for General Purpose AI (GPAI) models was published on Tuesday. With a May deadline looming, this latest draft aims to provide clearer guidance for AI model providers on complying with the provisions of the EU AI Act.
To enhance accessibility, a dedicated website has been launched, allowing stakeholders to review and provide written feedback on the draft before the submission deadline of March 30, 2025.
Understanding the EU AI Act and Its Impact on AI Model Makers
The EU AI Act is a risk-based framework designed to regulate artificial intelligence systems, imposing specific obligations on the most powerful AI model makers. These include transparency, copyright compliance, and risk mitigation requirements. Noncompliance with these regulations could result in significant penalties, with fines reaching up to 3% of global annual revenue.
Key Updates in the Latest Code of Practice Draft
The newly revised draft introduces a more streamlined structure, incorporating refined commitments and measures based on feedback from the previous version published in December.
The draft is divided into several sections covering:
- Commitments for General Purpose AI (GPAI) Models
- Detailed Guidance on Transparency and Copyright Compliance
- Safety and Security Measures for High-Risk AI Models

Transparency and Copyright Compliance: Evolving Guidelines
One of the most contentious aspects of the EU AI Act revolves around transparency and copyright issues. The current draft includes an updated model documentation form, requiring GPAI providers to disclose key technical information for downstream compliance.
However, the language used in the draft includes terms like “best efforts,” “reasonable measures,” and “appropriate measures,” leaving room for AI companies to interpret their obligations flexibly. This has raised concerns that major AI developers might continue scraping copyrighted content for training purposes with minimal consequences.
Additionally, an earlier provision that required GPAI providers to establish a direct and rapid complaint-handling system for rightsholders has been softened. The revised version now states that signatories must simply designate a point of contact for communication, without specifying the speed or effectiveness of responses.
The latest draft also suggests that AI model providers may decline to act on copyright complaints if they are deemed “manifestly unfounded or excessive.” This raises concerns that AI developers could dismiss large volumes of automated copyright complaints from content creators using AI-driven detection tools.
Safety and Security Measures Narrowed Further
The EU AI Act mandates systemic risk assessment and mitigation for only the most powerful AI models—those trained using more than 10^25 floating-point operations (FLOPs) of compute. However, in response to industry feedback, this latest draft has further narrowed some of the previously recommended safety measures, prompting criticism from AI watchdogs.
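As a rough illustration (not part of the Act's text), providers often estimate training compute with the common "6ND" rule of thumb, where training FLOPs ≈ 6 × parameters × training tokens. The model sizes below are hypothetical examples, not figures for any real system:

```python
# Sketch: estimating whether a model crosses the EU AI Act's 10^25 FLOP
# systemic-risk threshold, using the widely cited 6ND approximation
# (training FLOPs ≈ 6 × parameter count × training tokens).
# The model figures below are hypothetical, for illustration only.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the EU AI Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the 6ND rule of thumb."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD)
```

Under this approximation, the hypothetical 70B-parameter model lands at roughly 6.3 × 10^24 FLOPs, just under the threshold—illustrating how the bright-line rule turns directly on training scale.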

U.S. Influence and Pushback Against EU Regulations
Notably absent from the EU’s official press release is the ongoing criticism from the U.S. government regarding AI regulation. At the Paris AI Action Summit last month, U.S. Vice President JD Vance dismissed European AI regulations as overreaching, warning that strict rules could stifle innovation and economic growth. The Trump administration has since advocated for a more flexible approach, emphasizing AI opportunity over regulation.
Following this pressure, the EU recently scrapped its AI Liability Directive and announced an “omnibus” package aimed at reducing regulatory burdens for businesses. With the AI Act still in the implementation phase, there is growing speculation that U.S. lobbying efforts may lead to further dilution of its strictest requirements.
Tech Industry Reactions and Compliance Challenges
Leading AI companies, including France-based GPAI developer Mistral, have voiced concerns over the feasibility of compliance. Mistral CEO Arthur Mensch recently stated that his company is struggling to find technological solutions to meet the EU AI Act's requirements and is actively working with regulators to address these challenges.
While the Code of Practice is being developed by independent experts, the European Commission’s AI Office is concurrently working on additional clarifications, including definitions of GPAIs and their compliance obligations. The AI Office is expected to release further guidance “in due time,” potentially shaping the final implementation of the law.
Final Thoughts: What’s Next for the EU AI Act?
As the final draft of the Code of Practice moves toward adoption, AI companies, regulators, and legal experts are closely watching how the EU will balance innovation with enforcement. With mounting pressure from global stakeholders, the AI Office’s upcoming clarifications could be instrumental in determining whether the EU’s AI Act remains a strict regulatory framework or shifts toward a more business-friendly approach.
For now, AI developers and stakeholders have until March 30, 2025, to provide feedback on the draft. The final version of the Code is expected to be implemented later this year, setting the stage for the future of AI regulation in Europe and beyond.