
EU Delays Tougher AI Act Rules: What Changes For U.S. Companies Using Generative AI

Tyler Nov 28, 2025

The European Commission announced on November 19, 2025, that it plans to defer certain provisions of the AI Act, including the strictest obligations for high-risk systems, until December 31, 2027. In its communication, the Commission reaffirmed its commitment to regulating artificial intelligence while arguing that the market now has ample time to adapt to the new rules.

The decision is part of a package nicknamed the Digital Omnibus, which also softens parts of the GDPR and data legislation in an explicit attempt to cut red tape and respond to pressure from Big Tech players like Google, Meta, and OpenAI. For U.S. companies already using generative AI in products, marketing, fraud detection, or experience personalization, this is not just a technical tweak in Brussels.

It is an extended grace period that lowers the risk of a regulatory shock in 2026 but raises expectations that, by 2027, AI governance will no longer be a nice-to-have and will instead be a basic requirement for doing business with European users.

Digital Omnibus: Why Brussels Hit The Brakes


The AI Act came into force in August 2024 and was designed to be applied gradually. Under the EU’s official timeline, outright bans and AI literacy obligations started applying on February 2, 2025, while the rules for general-purpose AI (GPAI) models and the governance framework began to apply on August 2, 2025.

The next step would have been full obligations for high-risk systems in 2026-2027, covering AI used in credit, employment, healthcare, critical infrastructure, and law enforcement. The new proposal from the Commission pushes this core regulatory piece back.

Instead of August 2026, the tougher rules now have December 2027 as their main target date, with the possibility of an additional period tied to the availability of technical standards and compliance tools.

In Brussels, the move is being sold as simplification, not deregulation. The Commission argues that it needs to make sure standards, specifications, and guidance are ready before demanding that thousands of companies, including many in the United States, comply with complex requirements for documentation, risk assessment, logging, and transparency.

At the same time, there is a clear economic angle. In parallel with AI, the EU had already begun loosening parts of its green agenda under pressure from governments and industry, and is now doing something similar on the digital side. In other words, it is trying to preserve global competitiveness at a moment when the global AI market is estimated at roughly $757 billion in 2025, with a heavy contribution from North America.

For sectors like online entertainment and gaming, which rely on AI for personalized offers, identity verification, and risk detection, the delay works like a shock absorber. U.S. users, for instance, often rely on independent guides that compare bonuses, payment methods, and platform reputation, and that highlight the best options among alternative sites for withdrawal speeds and banking flexibility. Behind those comparisons, there are more and more AI models analyzing play patterns, pricing, and terms.

Generative AI Remains At The Center Of The Regulatory Radar

Despite the delay, generative AI has not been given a free pass. The AI Act was originally conceived for more traditional automated decision systems, but it ended up absorbing the explosion of general-purpose models that generate text, images, audio, and video, such as large language models and synthetic media generators.

The European Union currently distinguishes between obligations for general-purpose models themselves and obligations tied to high-risk use cases built on those models. In short, a generative model used only for drafting support emails might fall into a lower risk tier; used for resume screening, credit assessment, or support for medical decisions, that same model becomes high-risk.

For high-risk uses, the AI Act mandates documentation of training data, information about how the system was validated, ongoing monitoring for bias, and compliance with audit requirements. By extending the deadline to 2027 for standalone high-risk systems and for those embedded in regulated products, regulators have signaled that these obligations are delayed, not abandoned, and that enforcement will follow.

The Domino Effect For U.S. AI Companies

U.S. companies are not outside the AI Act’s reach. According to the European Commission itself, the rules apply to providers based outside the EU whenever their systems’ outputs are used within the Union. That directly affects SaaS products for marketing, customer service, HR, and credit risk that rely on generative AI and serve European clients.

All of this is happening at a time when AI adoption in the U.S. corporate environment is accelerating at an impressive pace. The Stanford HAI AI Index 2025 estimates that 78% of organizations worldwide reported using AI in 2024, up from 55% the previous year, with private AI investment in the United States surpassing $100 billion, far ahead of China and the UK.

Looking specifically at generative AI, a recent Bain survey indicates that 95% of companies in the United States already use some form of GenAI, a jump of 12 percentage points in just over a year. In other words, almost everyone has something in production, whether it is a support chatbot, a content recommendation engine or an automated contract analysis tool.

That contrast matters. While the EU is pushing back the trigger for its toughest rules, U.S. companies continue to stack generative AI use cases. The extra time on the European side should not be read as an invitation to improvise, but as a window to bring products, contracts, and data pipelines into alignment with what will be required more strictly in 2027.

High-Risk Sectors And The Role Of Algorithmic Personalization

The most sensitive core of the AI Act remains the list of high-risk uses. It includes AI applied to critical infrastructure, education, employment, healthcare services, credit assessment, law enforcement, and biometric identification.

In all of these contexts, generative models appear as one gear in a larger system. An LLM that generates responses in automated interviews, for example, is involved in hiring decisions. A generative engine that assembles medical summaries from health records influences patient prioritization.

A system that creates summaries of banking transactions can affect how fraud detection mechanisms behave. This is where the Commission’s proposed delay carries real weight. Without it, many of these applications would have to comply with high-risk requirements as early as 2026.

The revised timeline gives businesses until 2027 to examine their data and audit their models, allowing them to more clearly separate low-risk personalization used for engagement from the kinds of sensitive decisions that require stricter protections.

Meanwhile In Washington: Regulatory Patchwork And Turf Battles

In contrast to the EU's push for a single, consolidated regulatory framework for AI, the United States is seeing sharp divergence among its states. Since Europe adopted the AI Act, the U.S. has produced no comparable federal law; instead, the landscape is a patchwork of state statutes, federal agency rules and standards, and competing executive orders.

The Software Improvement Group recently released an analysis showing that every U.S. state, along with Puerto Rico, the U.S. Virgin Islands, and Washington, D.C., introduced some form of artificial intelligence legislation during its 2025 legislative session. Of the 50 states, 38 passed or adopted roughly 100 different pieces of AI legislation.

Another summary, from the Retail Industry Leaders Association using MultiState data, counts more than 1,080 AI bills introduced this year, with only 118 becoming law, an approval rate of roughly 11%.

At the federal level, the White House and key members of Congress are openly discussing the idea of limiting or blocking state-level AI laws in favor of a single federal standard, something that has drawn pushback from states and civil rights groups.

For U.S. companies, the result is a kind of double fragmentation. Rules on transparency, user notice, and protections against deepfakes vary across the 50 states. At the same time, companies must track European legislation which, although now moving more slowly than anticipated, is likely to impose substantial obligations by 2027.

Data, Trust, And Borders: The Role Of The Data Privacy Framework

One sensitive point for any U.S. company serving European users with generative AI is the legal basis for transferring and processing personal data. Since July 2023, the European Commission has considered the new EU-US Data Privacy Framework to provide an adequate level of protection for data transfers from the EU to certified U.S. companies.

That effectively restored a channel that had already been struck down twice in previous court rulings. In September 2025, the EU General Court confirmed the validity of the framework, rejecting a challenge to its legality and giving more legal certainty to large tech groups that rely on constant data flows between the EU and the U.S.

The proposal to soften parts of the AI Act and the GDPR within the Digital Omnibus package is closely tied to this context. By suggesting that European personal data may be used in more scenarios to train AI models, the Commission is signaling that it is willing to tolerate more intensive data use as long as formal safeguards such as the Data Privacy Framework are in place.

Critics see this as a win for Big Tech. Supporters argue that without this kind of balance, Europe would lose competitiveness in a market where America already concentrates more than $50 billion in AI revenue in 2025.
