EU AI Act Enforcement Begins: GPAI Compliance and the 10²⁵ FLOPs Threshold


On August 2, 2025, the European Union's AI Act obligations for general-purpose AI entered into application, the latest milestone in the world's first comprehensive regulatory framework for artificial intelligence. At its core lies a bold distinction: models whose training consumed more than 10²⁵ floating-point operations (FLOPs), a measure of cumulative computing effort rather than processing speed, are presumed to be "General Purpose AI (GPAI)" models with systemic risk and are subject to the strictest oversight. This threshold, designed to capture advanced models capable of learning across diverse tasks, aims to balance innovation with risk mitigation, forcing tech giants and startups alike to rethink their AI development and deployment strategies.

The 10²⁵ FLOPs benchmark is no arbitrary number. EU regulators designed it to capture high-capacity AI systems, such as OpenAI's GPT-5 and Google DeepMind's Gemini Ultra, that can match or outperform humans on complex tasks while posing unforeseen risks, from algorithmic bias to systemic cybersecurity threats. For companies developing or importing GPAI models, compliance demands rigorous pre-deployment work: they must document training compute, conduct third-party audits of risk mitigation measures, and maintain ongoing monitoring of how the AI interacts with users and critical infrastructure. "This isn't just about limiting computing power; it's about ensuring high-capacity AI serves the public good," said Margrethe Vestager, the EU's former competition commissioner, at the Act's launch event.
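To see how a compute threshold of this kind works in practice, here is a hypothetical back-of-the-envelope check in Python. It uses the widely cited approximation that training a transformer costs roughly 6 FLOPs per parameter per training token; the model sizes, token counts, and function names below are illustrative assumptions, not figures from the Act or from any provider.

```python
# Hypothetical sketch: estimate total training compute and compare it
# against the AI Act's 10^25 FLOPs systemic-risk presumption threshold.
# Uses the common "6 * parameters * tokens" approximation for
# transformer training compute. All concrete numbers are illustrative.

THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold


def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens


def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(params, tokens) >= THRESHOLD_FLOPS


# Illustrative example: a 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")                  # ~6.30e+24, below the threshold
print(presumed_systemic_risk(70e9, 15e12))   # False
```

Note that the quantity being measured is cumulative training compute (FLOPs), not a sustained rate (FLOPS per second); a slow cluster running for months can cross the line just as a fast one can.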

Compliance challenges have already emerged, particularly for firms that rely on cloud providers for training compute. Amazon Web Services and Microsoft Azure, which host many GPAI models, now require clients to declare projected training compute (in FLOPs) upfront, with penalties for underreporting. Smaller AI startups face steeper hurdles: hiring auditors and upgrading monitoring systems can cost upwards of US $2 million annually, prompting the EU to launch a US $150 million support fund for SMEs. Meanwhile, global players like Meta have adjusted their EU-focused models, scaling back training compute to stay below the systemic-risk threshold, though critics argue this risks stifling innovation in the bloc.

Enforcement relies on national market surveillance authorities in each member state, coordinated by the European AI Office and the European Artificial Intelligence Board, which can impose fines of up to €35 million or 7 percent of global annual turnover, whichever is higher (a separate ceiling of €15 million or 3 percent applies to GPAI providers). In its first month of GPAI oversight, the AI Office opened investigations into three unnamed tech firms accused of understating their models' training compute. "Transparency is non-negotiable," emphasized Wojciech Wiewiórowski, the European Data Protection Supervisor, adding that national authorities will prioritize auditing sectors like healthcare and finance, where GPAI could affect human safety or economic stability.
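Under the Act's penalty articles, the applicable ceiling is the higher of a fixed euro amount and a share of worldwide annual turnover. The following minimal sketch encodes that rule; the euro figures mirror the Act's published maxima, while the function names and the example turnover are assumptions for illustration.

```python
# Illustrative sketch of the AI Act's fine ceilings: the applicable cap
# is the HIGHER of a fixed amount and a percentage of worldwide annual
# turnover. Actual fines are set case-by-case by regulators.


def max_fine_prohibited(turnover_eur: float) -> float:
    """Ceiling for prohibited-practice violations: EUR 35M or 7% of turnover."""
    return max(35_000_000.0, 0.07 * turnover_eur)


def max_fine_gpai(turnover_eur: float) -> float:
    """Ceiling for GPAI-provider violations: EUR 15M or 3% of turnover."""
    return max(15_000_000.0, 0.03 * turnover_eur)


# Example: a firm with EUR 2 billion annual worldwide turnover.
print(max_fine_prohibited(2e9))  # 140000000.0 -> the 7% share dominates
print(max_fine_gpai(2e9))        # 60000000.0  -> the 3% share dominates
```

For small firms the fixed amount dominates instead: at €100 million turnover, 7 percent is only €7 million, so the €35 million floor sets the ceiling.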


The Act also sets a global precedent, with countries such as Canada and Japan referencing its 10²⁵ FLOPs threshold in their own draft AI laws. Yet debates persist: some experts argue the threshold is too rigid, since training compute alone does not fully capture an AI model's risk profile; others praise it as a necessary guardrail. As the EU navigates its first year of enforcement, one thing is clear: the 10²⁵ FLOPs line has redefined what "responsible AI" means for developers worldwide. For users and businesses across Europe, the Act promises greater accountability, even as the tech industry adapts to a new era of regulated innovation.
