European Central: The EU’s AI Regulations Need Refinement. But They Also Show Promise
At a June 24th event in Brussels, EU Commissioner Wopke Hoekstra named artificial intelligence the most pressing global challenge, ahead of geopolitics and climate change, citing its impact as "10 times greater" than that of the Industrial Revolution. In response to AI’s rapid rise, the EU passed the world’s first comprehensive AI regulatory framework, the AI Act.
With the strictest bans having taken effect in February and additional regulations being phased in, the AI Act serves as a test case for the EU’s approach to AI risk analysis. A year after its adoption, one of the world’s watershed pieces of AI legislation is now being implemented, to mixed reactions.
AI Risk And Reward: The EU’s Perspective
Hoekstra’s statement about AI’s potential impact is not hyperbolic. Practically every industry is eyeing its use, from diagnosing diseases and developing treatment plans to automating vast swaths of entry-level work across myriad white-collar industries.
However, the world is also seeing AI widely used on social media to spread misinformation that can influence elections and spark panic over fake events. Perhaps more dangerous is its use in everyday systems like transportation, where AI automation that goes awry could wreak havoc.
For these reasons, the EU adopted the AI Act in May 2024, and the act is extremely comprehensive. It classifies AI tools into four categories: minimal risk, limited risk, high risk, and unacceptable risk. Spam filters and chatbots fall into the minimal- and limited-risk categories, respectively, while uses such as border-control management and public biometric surveillance fall into the high- and unacceptable-risk categories.
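To make the four-tier scheme concrete, here is a minimal sketch of the taxonomy in Python. The tier names and example systems come from the description above, but the lookup table and helper function are illustrative assumptions; the Act's real classification turns on detailed legal criteria, not a simple mapping.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, least to most restricted."""
    MINIMAL = 1       # e.g. spam filters: no extra obligations
    LIMITED = 2       # e.g. chatbots: transparency duties
    HIGH = 3          # e.g. border-control management: strict requirements
    UNACCEPTABLE = 4  # e.g. public biometric surveillance: banned

# Illustrative mapping only, not the Act's legal tests.
EXAMPLE_SYSTEMS = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "border-control screening tool": RiskTier.HIGH,
    "public biometric surveillance": RiskTier.UNACCEPTABLE,
}

def is_banned(system: str) -> bool:
    """Unacceptable-risk systems are prohibited outright."""
    return EXAMPLE_SYSTEMS[system] is RiskTier.UNACCEPTABLE

print(is_banned("public biometric surveillance"))  # True
```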
The Act mandates transparency for high-risk AI, including content labeling and disclosure of training-data sources, along with requirements for cybersecurity, risk assessments, incident reporting, and codes of practice.
Fines for violating the guidelines range from 7.5 million euros or 1.5% of turnover (whichever is higher) to 35 million euros or 7% of global turnover (again, whichever is higher), depending on the violation.
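To show how the "whichever is higher" mechanics make the ceilings scale with company size, here is a minimal sketch using the figures quoted above. The band labels and the function itself are hypothetical conveniences; actual penalties are set case by case by regulators.

```python
def max_fine_eur(global_turnover_eur: float, severity: str) -> float:
    """Upper bound on an AI Act fine, per the ranges quoted above.

    Illustrative only: 'low' is the bottom of the range described
    in the text, 'high' the top (e.g. deploying a banned system).
    """
    bands = {
        "low": (7_500_000, 0.015),   # 7.5M euros or 1.5% of turnover
        "high": (35_000_000, 0.07),  # 35M euros or 7% of global turnover
    }
    fixed, pct = bands[severity]
    # "Whichever is higher": the cap grows with company size.
    return max(fixed, pct * global_turnover_eur)

# A firm with 10B euros of global turnover: the top band caps at
# 700M euros, far above the 35M-euro floor.
print(max_fine_eur(10_000_000_000, "high"))  # 700000000.0
```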
Reactions To The Regulations
The most stringent regulations, those covering AI with “unacceptable risks”, took effect in February, with an August 2, 2025, deadline for EU nations to create bodies to oversee and assess AI compliance and the implementation of AI regulation. The remaining regulations take effect in August 2026.
But there have already been hiccups in implementing the initial regulations. The AI Code of Practice, a guidance document to help AI developers comply with the act, missed its publication date of May 2 and wasn’t published until July 10. This left many businesses in the dark about how to comply and prompted a group of 45 European companies to publish an open letter requesting a two-year “clock-stop” on the AI Act’s next implementation deadlines.
These guidelines are voluntary, and there have already been some notable holdouts. While companies like Anthropic, OpenAI, and Microsoft all signaled their intention to sign, Meta announced it would not.
Regulatory pushback continues. While an EU spokesperson said signatories gain legal clarity and face less bureaucracy, the Big Tech trade group CCIA, whose members include Meta and Google, called the code overly burdensome.
Pushback has even reached policymakers’ ears. According to a report by Corporate Europe Observatory and LobbyControl, tech companies had more access to the drafting process than other stakeholders and were invited to dedicated workshops involving the chairs of EU AI working groups. Language in the AI Act regarding liability for civil damages caused by AI was dropped, and national-security uses of AI were deemed outside the law’s scope.
The initial compliance reactions haven’t been one-sided, however. Denmark was the first to establish the requisite authorities to oversee AI regulation compliance at the national level, while back in March, the Netherlands published a regulatory sandbox proposal. Both provide blueprints for the rest of the EU to follow.
Additionally, while campaigners warned that the rules risked being watered down after a last-minute removal of key areas from the code of practice, the European Commission’s experts did not cave. They explicitly included the “risk to fundamental rights” that companies must consider, and added a requirement that companies “publish a summarised version” of the reports filed to regulators before releasing a model to the public.
The AI Act is still comprehensive, even though it remains unpolished and likely will need sizable regulatory tweaks moving forward. The EU has already signaled its willingness to work with companies to streamline regulations and make targeted changes.
Innovation and Investment
It is clear that the EU has taken its duty to regulate seriously, despite intense lobbying. But how well has it balanced that against its duty to foster innovation?
The AI industry is currently dominated by American firms: EU AI companies received €32.5 billion in investments between 2018 and late 2023, compared to over €120 billion for U.S. AI companies. Meanwhile, 73% of large language models are being developed in the U.S. and another 15% in China. According to the 2024 Draghi report, only four of the world’s top 50 tech companies are European.
The EU’s powerhouses, like Siemens, SAP, and Mistral, have had to turn to foreign investors, as 61% of global AI funding flows to U.S. firms compared to a mere 6% to European ones.
But that could soon change. The EU has attracted myriad investment pledges from the likes of Nvidia, which partnered with Mistral to build data centers in France and an AI cloud platform in Germany. France also intends to channel 109 billion euros into its private-sector AI industry.
Separately, the European Commission pledged $20 billion to build four "AI gigafactories" to lower dependence on U.S. firms and plans to invest $1.4 billion in artificial intelligence, cybersecurity, and digital skills by 2027.
All of this investment has led to projections that European AI spending will reach $144 billion by 2028, growing at 30% annually, compared with a U.S. annual growth rate of 19.33% through 2034.
But a major piece of the puzzle is harmonizing Europe’s digital infrastructure. On top of the challenges of building, training, and powering AI models, fragmented digital infrastructure affects everything from licensing regimes to 5G traffic flows. To address this, the EU recently launched the Digital Networks Act, which aims to unify the EU’s digital regulatory framework.
There are still substantial hurdles beyond improving regulations and increasing investment, namely optimizing energy efficiency and securing a sufficient supply of AI chips sustainably and without threatening national security. But the EU’s approach has potential.
In contrast, the U.S. has prioritized growth with a laissez-faire attitude toward tech companies, while China’s state-centric approach, which prioritizes social stability, national security, and ideological control, risks both neglecting human rights and overregulating.
Many company complaints about stifled innovation reflect not EU overreach, but the greater freedom firms enjoy in places like the U.S. This mirrors past reactions to digital regulations, like Google’s resistance to the right to be forgotten. The EU AI Act may lack flexibility, but with no major AI disasters so far, its role as a necessary early guardrail remains clear.