AI Regulation: EU AI Act, US Executive Orders

For years, the tech industry’s motto was “move fast and break things.” With AI, governments have realized that the things being broken might be democracy, employment, and public safety.

The regulatory landscape is fracturing into different approaches. The European Union is acting as the strict referee, the United States as the innovation-focused observer, and China as the state-controlled accelerator.

The European Union: The “Risk-Based” Approach

The EU AI Act (formally adopted in 2024, with obligations phasing in over the following years) is the world’s first comprehensive AI law. It doesn’t treat all AI the same. Instead, it classifies systems into a pyramid of risk:

1. Unacceptable Risk (Banned)

These are prohibited outright.

  • Social scoring systems (think Black Mirror’s “Nosedive”).
  • Real-time remote biometric identification (facial recognition) in public spaces by law enforcement (with narrow exceptions for terrorism/kidnapping).
  • AI that uses subliminal techniques to manipulate behavior.

2. High Risk (Heavily Regulated)

This includes AI used in critical infrastructure, education (grading exams), employment (resume sorting), and law enforcement.

  • Requirements: Companies must provide high-quality data governance, detailed documentation, transparency, and human oversight. They must prove the system is safe before it hits the market.

3. Limited Risk (Transparency)

Chatbots and deepfakes fall here.

  • Requirement: Users must be informed they are interacting with a machine. Deepfakes must be labeled.
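The tiered scheme above is, at its core, a lookup from use case to obligation. The sketch below is illustrative only, not a legal classification: the use-case strings and the catch-all “minimal” tier (the Act’s default for everything not otherwise captured) are assumptions for the example.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Illustrative mapping of use cases to tiers, drawn from the
# examples in the text -- not a legal determination.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "exam grading": RiskTier.HIGH,
    "resume sorting": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generation": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    """Anything not explicitly listed falls into the minimal tier."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)
```

The point of the pyramid is exactly this asymmetry: the heavier the tier, the heavier the compliance duty attached to it.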

The United States: The “Sector-Based” Approach

The US has resisted a single sweeping federal law, fearing it might stifle innovation in Silicon Valley. Instead, it relies on Executive Orders and existing agencies.

The Biden Executive Order (October 2023)

This order mobilized federal agencies to create standards.

  • Safety Testing: Requires developers of the most powerful models (those trained using more than 10^26 floating-point operations of total compute — a cumulative amount, not a speed) to share safety test results with the government.
  • NIST Framework: Tasks the National Institute of Standards and Technology (NIST) with developing safety and red-team testing guidelines.
  • Watermarking: Encourages standards for labeling AI-generated content.
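The 10^26 threshold is easy to sanity-check with the common rule-of-thumb estimate of roughly 6 operations per parameter per training token for dense transformers. That approximation comes from the scaling-law literature, not from the order itself, and the model sizes below are hypothetical:

```python
# The EO's reporting trigger is total training compute:
# more than 1e26 integer or floating-point operations.
THRESHOLD_OPS = 1e26

def training_ops(params: float, tokens: float) -> float:
    # Rule-of-thumb estimate for dense transformers:
    # ~6 operations per parameter per training token.
    return 6 * params * tokens

def must_report(params: float, tokens: float) -> bool:
    return training_ops(params, tokens) > THRESHOLD_OPS

# A hypothetical 1-trillion-parameter model trained on 20T tokens:
# 6 * 1e12 * 20e12 = 1.2e26 ops, so it crosses the threshold;
# a 7B model on 2T tokens (~8.4e22 ops) does not come close.
```

In practice this means the reporting duty bites only on a handful of frontier-scale training runs, which is precisely the order’s intent.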

Unlike the EU Act, the US approach relies largely on enforcement by existing bodies (FTC, DOJ) rather than new legislation. The focus is on “responsible innovation.”

China: The “State Control” Approach

China’s regulations are swift and strict, focused on social stability and alignment with socialist core values.

  • Algorithm Registry: Companies must register their algorithms with the government.
  • Generative AI Rules: AI outputs must be accurate and uphold state ideology. Service providers are liable for the content their models generate.

The Global Challenge: The Brussels Effect

Just as GDPR became the global standard for privacy, the EU AI Act may become the global standard for AI.

  • Why? Multinational tech companies don’t want to build two different products. If they have to make their AI safe enough for Europe (a massive market), they will likely roll out those safety features worldwide.

Summary Comparison

Region | Philosophy     | Key Mechanism                        | Priority
------ | -------------- | ------------------------------------ | ------------------------------------
EU     | Precautionary  | Comprehensive legislation (AI Act)   | Fundamental rights & safety
USA    | Pro-innovation | Executive orders & agency oversight  | Economic growth & national security
China  | State control  | Specific targeted regulations        | Social stability & state power

We are witnessing the end of the “wild west” era of AI. The code is now subject to the law.