The adoption of the European Commission Guidelines on providers of General Purpose AI (GPAI) models marks a turning point for anyone developing, distributing, or customizing AI systems.
With the obligations in effect since August 2, 2025 under the AI Act, companies and providers must adapt to a new regulatory landscape with clear responsibilities across the entire value chain. Beyond ensuring compliance and avoiding sanctions or operational disruptions, this framework presents a real opportunity: it encourages the structured, ethical, and transparent adoption of AI.
AI Act and General Purpose AI Models
The AI Act is the first comprehensive European regulation on artificial intelligence. It classifies AI systems by risk level and introduces tiered obligations for providers, deployers, and distributors.
A GPAI model is designed to adapt to multiple uses, rather than being restricted to a single task or industry. If your company develops or distributes AI systems that can be applied across different contexts, from customer service to content generation, you fall into this category and must ensure compliance with the new requirements.
Key Points from the EU Guidelines on GPAI Providers
The Guidelines focus on five main areas:
- Identifying relevant actors: they clarify who qualifies as a “provider” and when a model falls under the GPAI category.
- Transparency and accountability: providers must maintain detailed and up-to-date technical documentation about the model’s features and training data, ensuring clarity and traceability throughout the AI value chain.
- Open source: open-source GPAI models may benefit from certain exemptions, but they must still meet minimum transparency and accountability standards to ensure safety and reliability.
- Risk management: for more complex models, or those with significant potential impact on safety, rights, or the market, stricter obligations apply.
- Code of practice: the EU encourages the adoption of voluntary best practices to help providers move toward compliance, even if full regulatory alignment will require further steps and commitments.
Timeline and enforcement
The rules officially took effect on August 2, 2025. From August 2, 2026, the Commission's monitoring and enforcement powers apply, and August 2, 2027 is the final compliance deadline for models already placed on the market before August 2025.
What this means for businesses and providers
Companies that develop, integrate, or use GPAI models must now begin mapping their current systems, updating documentation, and assessing legal risks related to privacy, copyright, and cybersecurity. Internal teams will also need targeted training. A legal partner specialized in AI compliance is essential to ensure alignment with the new rules, minimize exposure, and protect the value of AI-related investments.
The role of van Berings
van Berings acts as a strategic partner for companies navigating the challenges posed by the AI Act and the new European guidelines, helping clients assess regulatory and legal impacts, develop compliant internal policies, and proactively manage risks related to security, copyright, and overall AI governance.