Generative AI Regulations
As we enter a new era of AI regulation, European businesses must prepare for the EU AI Act, the world’s first comprehensive AI law, while US businesses adapt to an evolving patchwork of federal and state rules.
EU AI Act Compliance
To ensure compliance, companies on both sides of the Atlantic should inventory their AI systems, assess risk classifications, and prioritize high-risk systems.
Establishing governance frameworks, forming oversight committees, and implementing clear policies and quality management systems aligned with ethical guidelines are crucial steps in this process.
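To make the inventory and classification step concrete, the sketch below shows one way a compliance team might record AI systems internally. It is a minimal illustration in Python; the `AISystemRecord` fields and `RiskTier` enum are hypothetical conventions, not structures defined by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict obligations apply
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # largely unregulated


@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (illustrative fields)."""
    name: str
    owner: str                      # accountable team or individual
    purpose: str                    # intended use, in plain language
    risk_tier: RiskTier
    uses_personal_data: bool = False
    notes: list[str] = field(default_factory=list)


# Example: inventory a customer-service chatbot, which typically falls
# under the limited-risk transparency obligations.
inventory = [
    AISystemRecord(
        name="support-chatbot",
        owner="customer-experience",
        purpose="Answer routine customer questions",
        risk_tier=RiskTier.LIMITED,
        uses_personal_data=True,
    ),
]

# Surface high-risk systems first, as the compliance guidance suggests.
to_review = [r for r in inventory if r.risk_tier == RiskTier.HIGH]
```

Keeping the risk tier and an accountable owner on every record makes it straightforward to surface high-risk systems for prioritized review.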
Key Aspects of the EU AI Act
The EU AI Act, which entered into force on August 1, 2024, introduces a risk-based approach to AI regulation. This groundbreaking legislation aims to ensure AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
Risk Classification System
The Act categorizes AI systems based on their potential risk to individuals (a simple triage sketch in code follows the four category descriptions below):
Unacceptable Risk
AI systems in this category are prohibited due to the clear threat they pose to the safety, livelihoods, and rights of people. Examples include:
- Social scoring systems by governments
- AI-powered toys that encourage dangerous behavior
- Systems using subliminal, manipulative, or deceptive techniques to distort behavior
- AI exploiting vulnerabilities related to age, disability, or socio-economic circumstances
- Certain biometric categorization systems inferring sensitive attributes
- Systems that assess an individual's risk of committing criminal offenses based solely on profiling or personality traits
High Risk
These AI systems pose significant risks to health, safety, or fundamental rights and are subject to strict obligations. High-risk AI systems include those used in:
- Critical infrastructure management
- Educational or vocational training
- Employment and worker management
- Access to essential private and public services
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
Limited Risk
This category primarily addresses risks associated with a lack of transparency in AI usage. AI systems in this category are subject to specific transparency obligations, such as:
- Ensuring users are aware when interacting with AI systems like chatbots
- Labeling AI-generated content, particularly for matters of public interest
- Identifying deep fake audio and video content
Minimal or No Risk
The majority of AI systems currently used in the EU fall into this category. These systems may be used freely, with minimal regulatory obligations. Examples include:
- AI-enabled video games
- Spam filters
- AI applications that pose little to no risk to citizens' rights or safety
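As a first-pass illustration of how these four tiers might drive internal triage, the sketch below maps a system's application domain to a tier. The domain lists paraphrase the categories above and are deliberately incomplete; they are an assumption for illustration, not a substitute for the Act's own definitions and annexes.

```python
# Illustrative triage: map an application domain to an EU AI Act risk tier.
# The sets below paraphrase the categories described in this article.

PROHIBITED_DOMAINS = {
    "government social scoring",
    "subliminal manipulation",
    "exploiting vulnerabilities",
}

HIGH_RISK_DOMAINS = {
    "critical infrastructure",
    "education",
    "employment",
    "essential services",
    "law enforcement",
    "migration and border control",
    "justice and democratic processes",
}

TRANSPARENCY_DOMAINS = {
    "chatbot",
    "generated content",
    "deep fake",
}


def triage(domain: str) -> str:
    """Return a first-pass risk tier for a system's application domain."""
    if domain in PROHIBITED_DOMAINS:
        return "unacceptable"   # prohibited: must not be built or deployed
    if domain in HIGH_RISK_DOMAINS:
        return "high"           # strict obligations, registration, oversight
    if domain in TRANSPARENCY_DOMAINS:
        return "limited"        # disclosure and labeling obligations
    return "minimal"            # e.g. spam filters, video-game AI


print(triage("employment"))     # -> "high"
```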
Compliance Requirements
The EU AI Act establishes distinct compliance requirements for each risk category of AI systems.
Prohibited AI practices
For prohibited AI practices, which include systems that manipulate human behavior, exploit vulnerabilities, or create social scoring systems, compliance is straightforward: these must not be developed, deployed, or used under any circumstances.
Organizations must implement robust screening processes to avoid inadvertently engaging with such systems; non-compliance carries the Act's steepest penalties, with fines of up to €35 million or 7% of global annual turnover.
High-risk AI systems
High-risk AI systems, such as those used in critical infrastructure, education, employment, law enforcement, and healthcare, face the most stringent compliance measures.
Providers and deployers must conduct thorough risk assessments, ensure human oversight, maintain detailed documentation, and register systems in an EU database.
Additionally, these systems must comply with accessibility requirements, implement logging capabilities for traceability, and report serious incidents to authorities.
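The logging obligation is one of the more directly implementable requirements. Below is a minimal sketch of structured, timestamped decision logging using Python's standard logging module; the record fields (`system_id`, `model_version`, `human_reviewer`, and so on) are illustrative choices, not a format mandated by the Act.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

# Minimal traceability log for a high-risk system: structured, timestamped
# records of each automated decision, suitable for later audit.
logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("decisions.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)


def log_decision(system_id: str, input_ref: str, output: str,
                 model_version: str, reviewer: Optional[str] = None) -> None:
    """Append one auditable decision record, noting any human oversight."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # reference to the input, not raw data
        "output": output,
        "model_version": model_version,
        "human_reviewer": reviewer,  # supports the human-oversight duty
    }))


log_decision("cv-screener-v2", "application-8841", "shortlisted",
             model_version="2.3.1", reviewer="hr-analyst-07")
```

Logging a reference to the input rather than the raw data itself keeps the audit trail useful without duplicating personal data.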
Minimal or no-risk AI systems
Minimal or no-risk AI systems, which include most current AI applications, face minimal regulation. However, organizations are still encouraged to adhere to responsible AI principles.
This includes ensuring data privacy, maintaining transparency about AI use, implementing basic safeguards against potential misuse, and staying informed about evolving regulations.
Though not mandatory, voluntarily applying AI best practices and ethical guidelines helps these organizations maintain public trust and prepares them for potential future regulatory changes.
Transparency for Generative AI
While not classified as high-risk, generative AI systems like ChatGPT must adhere to specific transparency requirements (a minimal labeling sketch follows this list):
- Disclose when content is AI-generated to prevent deception and maintain user trust
- Design models with safeguards against generating illegal content, including harmful material and copyright-infringing output
- Publish summaries of copyrighted data used for training to address intellectual property concerns and provide transparency in the AI development process
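As one way to satisfy the disclosure requirement in practice, the sketch below bundles generated text with both a machine-readable label and a visible user-facing notice. The schema is hypothetical; the Act requires disclosure, not this particular format.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class GeneratedContent:
    """Generated text bundled with a machine-readable AI disclosure label.

    The label fields are illustrative assumptions, not mandated by the Act.
    """
    text: str
    ai_generated: bool = True
    model: str = "example-model"
    disclosure: str = "This content was generated by an AI system."


def render(content: GeneratedContent) -> str:
    """Return the text with a visible disclosure appended for end users."""
    return f"{content.text}\n\n[{content.disclosure}]"


reply = GeneratedContent(text="Our store opens at 9 a.m. on weekdays.")
print(render(reply))                # user-facing disclosure
print(json.dumps(asdict(reply)))    # machine-readable label for audits
```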
Preparing Your Business
Companies should establish governance frameworks now: inventory and classify their AI systems, assign clear ownership for compliance, and prioritize remediation of high-risk systems before the Act's obligations phase in.