Generative AI Regulations

As we enter a new era of AI regulation, European businesses must prepare for the EU AI Act, the world's first comprehensive AI law, while US businesses adapt to a still-evolving regulatory landscape.

EU AI Act Compliance

To ensure compliance, companies on both sides of the Atlantic should inventory their AI systems, assess each system's risk classification, and prioritize remediation of high-risk systems.

Establishing governance frameworks, forming oversight committees, and implementing clear policies and quality management systems aligned with ethical guidelines are crucial steps in this process.
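
A concrete starting point for the inventory step is a simple register of systems tagged with the Act's four risk tiers, sorted so the riskiest items get attention first. The sketch below is illustrative only: the RiskTier names mirror the Act's categories, but the example systems and their classifications are hypothetical, and real classification requires legal analysis.

```python
from dataclasses import dataclass
from enum import IntEnum

class RiskTier(IntEnum):
    """The EU AI Act's four risk tiers, ordered so higher means more urgent."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier  # hypothetical pre-assessed classification

def prioritize(inventory: list[AISystem]) -> list[AISystem]:
    """Order the register so prohibited and high-risk systems surface first."""
    return sorted(inventory, key=lambda s: s.tier, reverse=True)

inventory = [
    AISystem("spam-filter", "email triage", RiskTier.MINIMAL),
    AISystem("cv-screener", "candidate shortlisting for hiring", RiskTier.HIGH),
    AISystem("support-bot", "customer-facing chat", RiskTier.LIMITED),
]

for system in prioritize(inventory):
    print(f"{system.tier.name:<12} {system.name}: {system.purpose}")
```

Even a register this simple gives oversight committees a shared artifact to review and a defensible record of how priorities were set.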

Key Aspects of the EU AI Act

The EU AI Act, which entered into force on August 1, 2024, introduces a risk-based approach to AI regulation. This groundbreaking legislation aims to ensure AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

Risk Classification System

The Act sorts AI systems into four categories based on the risk they pose to individuals:

Unacceptable Risk

AI systems in this category are prohibited due to the clear threat they pose to the safety, livelihoods, and rights of people. Examples include systems that manipulate human behavior, exploit the vulnerabilities of specific groups, and social scoring systems.

High Risk

These AI systems pose significant risks to health, safety, or fundamental rights and are subject to strict obligations. High-risk AI systems include those used in critical infrastructure, education, employment, law enforcement, and healthcare.

Limited Risk

This category primarily addresses risks associated with a lack of transparency in AI usage. AI systems in this category are subject to specific transparency obligations, such as informing users that they are interacting with an AI system and clearly labeling AI-generated or manipulated content.

Minimal or No Risk

The majority of AI systems currently used in the EU fall into this category. These systems are allowed free use with minimal regulation. Examples include AI-enabled video games and spam filters.

Compliance Requirements

The EU AI Act establishes distinct compliance requirements for each category of AI system.

Prohibited AI practices

For prohibited AI practices, which include systems that manipulate human behavior, exploit vulnerabilities, or create social scoring systems, compliance is straightforward: these must not be developed, deployed, or used under any circumstances.

Organizations must implement robust screening processes to avoid inadvertently engaging with such systems; non-compliance can draw the Act's highest penalties, with fines of up to €35 million or 7% of global annual turnover.
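
As a first line of defense, procurement and review workflows can include an automated gate that checks a proposed use case against the prohibited categories before anything is built or bought. The sketch below is a minimal illustration, with category strings and a set-intersection check standing in for what must ultimately be a legal-review process.

```python
# Prohibited practice categories named in the Act (Article 5 has the full list).
PROHIBITED_PRACTICES = {
    "behavioral manipulation",
    "exploitation of vulnerabilities",
    "social scoring",
}

def screen_use_case(declared_practices: set[str]) -> list[str]:
    """Return any declared practices that fall into a prohibited category."""
    return sorted(declared_practices & PROHIBITED_PRACTICES)

# Example: a procurement request declaring what a vendor's system does.
violations = screen_use_case({"social scoring", "email triage"})
if violations:
    raise ValueError(f"Prohibited AI practices detected: {violations}")
```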

High-risk AI systems

High-risk AI systems, such as those used in critical infrastructure, education, employment, law enforcement, and healthcare, face the most stringent compliance measures.

Providers and deployers must conduct thorough risk assessments, ensure human oversight, maintain detailed documentation, and register systems in an EU database.

Additionally, these systems must comply with accessibility requirements, implement logging capabilities for traceability, and report serious incidents to authorities.
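
For the logging obligation, one workable pattern is an append-only, structured decision log that captures enough context to reconstruct any individual outcome during an audit. The following sketch uses Python's standard logging module; the cv-screener system and the field names are assumptions of mine, since the Act mandates record-keeping for high-risk systems but does not prescribe a schema.

```python
import json
import logging
from datetime import datetime, timezone

# Traceability log for a high-risk system: one structured record per decision.
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO,
                    format="%(message)s")
logger = logging.getLogger("high_risk_ai")

def log_decision(system_id: str, input_summary: str,
                 output: str, reviewer: str) -> None:
    """Record enough context to reconstruct a decision during an audit."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,
        "output": output,
        "human_reviewer": reviewer,  # evidence of human oversight
    }))

log_decision("cv-screener-v2", "candidate 1042 CV", "shortlisted",
             "hr.lead@example.com")
```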

Minimal or no-risk AI systems

Minimal or no-risk AI systems, which include most current AI applications, face minimal regulation. However, organizations are still encouraged to adhere to responsible AI principles.

This includes ensuring data privacy, maintaining transparency about AI use, implementing basic safeguards against potential misuse, and staying informed about evolving regulations.

While not mandatory, organizations dealing with these systems should consider voluntarily applying AI best practices and ethical guidelines to maintain public trust and prepare for potential future regulatory changes.

Transparency for Generative AI

While not classified as high-risk, generative AI systems like ChatGPT must adhere to specific transparency requirements: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.
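
The disclosure requirement can be met at the application layer by stamping generated output before it reaches the user. This is a deliberately simple sketch of one possible format, not anything the Act prescribes; the function and model name are hypothetical.

```python
def label_ai_output(text: str, model_name: str) -> str:
    """Prepend a plain-language disclosure that the content is AI-generated."""
    disclosure = f"[AI-generated content - produced by {model_name}]"
    return f"{disclosure}\n{text}"

print(label_ai_output("Here is a draft reply to the customer...",
                      "support-bot-v1"))
```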

Preparing Your Business

Companies should establish governance frameworks, form oversight committees, and implement clear policies aligned with the requirements above, beginning with an inventory of their existing AI systems and an assessment of where each falls in the Act's risk hierarchy.
