EU AI Act: A Simple Guide for All Stakeholders in 2024

August 1, 2024, marked a significant milestone as the European Union’s AI Act officially came into force. This groundbreaking legislation is set to reshape the landscape of AI development and deployment, not just within the EU, but globally.

But what does this mean for you and your company?

In this article, we’ll cut through the complexity of the AI Act, exploring its real-world implications for businesses of all sizes and sectors. We’ll break down what qualifies as an AI system, the risk-based classification system, who needs to comply, key obligations, implementation timelines, and potential penalties. Whether you’re developing, integrating, or simply using AI, this Act likely affects you.
Let’s dive in and unpack what the EU AI Act means for your business and how you can prepare for this new era of AI regulation.

What is the AI Act?

The AI Act is a groundbreaking piece of legislation from the European Union, and it’s making waves as the world’s first comprehensive attempt to regulate artificial intelligence across various sectors and applications.

At its core, the Act is trying to strike a delicate balance. On one hand, it aims to foster innovation and keep Europe competitive in the global AI race. On the other, it seeks to protect society from potential risks associated with AI technologies. To achieve this, it introduces a unified set of rules for AI across all 27 EU member states, creating a level playing field for businesses operating in or serving the European market.

What makes the AI Act particularly noteworthy is its risk-based approach. Instead of applying a one-size-fits-all regulation, it categorizes AI systems based on their potential risk to society and applies rules accordingly. This tiered system aims to encourage responsible AI development while ensuring appropriate safeguards are in place where needed.

While the Act primarily focuses on the EU, its impact is likely to ripple out globally. In our interconnected world, companies developing or using AI technology may find themselves needing to comply with these regulations, even if they’re not based in Europe. Essentially, the EU is setting a benchmark for AI governance that could influence global standards.

What Qualifies as an AI System Under the EU AI Act?

Let’s cut through the jargon and break down what the EU considers “AI”:

It’s a machine-based system that can:

  1. Operate with some level of independence
  2. Adapt after it’s put into use
  3. Take input and use it to create outputs like predictions, content, recommendations, or decisions
  4. Influence real or virtual environments

The key feature that sets AI apart from regular software? Its ability to “infer” – that means it can learn, reason, or model situations beyond just crunching numbers.

This definition casts a pretty wide net, covering everything from machine learning systems to knowledge-based approaches. So, even if you don’t think of your tech as “AI”, it might still fall under this definition. If it touches EU citizens in any way, it’s worth taking a closer look!

AI Risk Categories Under the EU AI Act

Let’s break down how the EU AI Act classifies AI systems based on their risk levels. This classification determines what rules apply to different types of AI:

  • Prohibited – These AI systems are too risky and are banned outright. Examples: social scoring, emotion recognition at work, predictive policing. What to do: stop using or developing these systems.
  • High-Risk – Allowed, but with strict rules. Examples: AI in hiring, biometric surveillance, medical devices, credit scoring. What to do: comply with all AI Act requirements, including pre-market assessments.
  • Limited Risk – Allowed, but transparency is required. Examples: chatbots, AI-generated "deepfakes". What to do: disclose that you're using AI and how.
  • Minimal Risk – Allowed with no extra rules. Examples: photo editors, product recommenders, spam filters. What to do: no specific AI Act requirements, but follow general laws.

Remember, this classification is about balancing innovation with safety. If you’re unsure where your AI system falls, it’s worth taking a closer look.

Who will the AI Act affect?

In short: more businesses and organizations than you might think!

Let’s break it down in simple terms. The AI Act casts a wide net, and here’s why:

  1. It’s not just about EU companies. If your AI system impacts people in the EU, you’re in the scope. It doesn’t matter if your company is based in New York, Tokyo, or anywhere else.
  2. It covers the whole AI ecosystem. Whether you’re developing AI, selling it, or just using it in your business, the Act might apply to you. Think of it as a chain of responsibility from creators to end-users.
  3. It’s not limited to new AI systems. Even if your AI system was up and running before the Act came into force, you might still need to comply. This is especially true for general-purpose AI systems (think large language models like GPT), high-risk systems (e.g. medical devices, autonomous vehicles, or remote biometric identification systems), and systems used by public authorities.
  4. Significant updates matter. If you make major changes to an existing AI system, it’s treated as new under the Act. So you can’t just rely on grandfathering in old systems.
  5. It spans all sectors. Whether you’re in healthcare, finance, education, or any other field, if you’re using AI that impacts EU citizens, you need to pay attention.

EU AI Act Exemptions: Who Isn’t Affected by the New Regulations?

Now, let’s dive into who’s exempt from the AI Act:

  • Non-EU public authorities: If they’re cooperating with the EU on law enforcement or judicial matters and have proper safeguards in place, they’re in the clear.
  • Military and defense: AI systems used for these purposes fall outside the EU’s law-making authority, so they’re not covered.
  • Pure scientific research: If you’re developing AI solely for scientific discovery, you’re good to go.
  • AI in development: Systems still in the research, testing, or development phase aren’t affected – as long as they haven’t been put on the market yet.
  • Open-source projects: Free and open-source software generally gets a pass, with a few exceptions. If your open-source AI falls into the prohibited or high-risk categories, or needs to meet transparency requirements, you might still need to comply.

So, if you think you might be exempt, it’s worth double-checking to be sure. The AI landscape is complex, and it’s always better to be safe than sorry when it comes to compliance!

Key Obligations in the EU AI Act: A Quick Overview

The EU AI Act doesn’t just classify AI systems – it also lays out specific obligations for different players in the AI ecosystem. Here’s a quick heads-up:

If you’re developing, importing, distributing, or using AI systems, especially high-risk ones, the Act has detailed requirements for you. These cover everything from risk management and data governance to transparency and human oversight.

Here’s a brief overview:

  • For providers of high-risk AI systems:
    • Pre-market: Conduct conformity assessments before launch.
    • Post-market: Maintain logs and monitor system performance throughout its lifecycle.
  • For deployers, importers, and distributors:
    • Deployers: Implement human oversight and assess fundamental rights impacts.
    • Importers and Distributors: Verify compliance and ensure proper documentation before market entry.
  • Even for minimal-risk AI systems:
    • Ensure transparency: For example, clearly identify chatbots as AI and label AI-generated “deepfakes”.

The Act goes into detail about what’s expected for each category. If you’re involved with AI in any way, it’s worth diving into the specific sections that apply to your role. 

General-Purpose AI Under the EU AI Act

It’s worth knowing that the EU AI Act introduces a nuanced approach to regulating general-purpose AI (GPAI) models. 

Here’s the gist:

GPAI models are defined as AI models capable of performing a wide range of tasks and of being easily integrated into various downstream applications. The Act establishes two tiers of regulation:

  1. Base-level tier: All GPAI models face some transparency obligations.
  2. Systemic risk tier: High-impact GPAI models (typically those using massive computing power for training) face more significant requirements.

Key obligations include maintaining technical documentation, respecting copyright law, and sharing information with downstream providers. For high-impact models, additional requirements like model evaluations and risk mitigation apply.

EU AI Act Implementation Timeline: Key Dates and Deadlines for Compliance

The EU AI Act officially entered into force on August 1, 2024, but there’s no need to panic – the rollout is designed to be gradual, giving businesses and organizations time to adapt to the new regulations. The Act will be implemented in stages, with different provisions coming into effect over the next few years.

Here’s a breakdown of the key implementation dates:

  • August 1, 2024 – AI Act enters into force
  • February 2, 2025 – Prohibitions on certain AI practices take effect
  • August 2, 2025 – Requirements for general-purpose AI models come into effect
  • August 2, 2026 – Requirements for high-risk AI systems (classified under uses listed in Annex III) and transparency requirements for certain other AI systems take effect
  • August 2, 2027 – Requirements for high-risk AI systems classified under EU harmonization laws (Annex I) come into effect; general-purpose AI models already on the market before August 2, 2025 must comply by this date
  • December 31, 2030 – Deadline for compliance of high-risk AI systems used by public authorities that were on the market before the Act's entry into force

Importantly, the regulation is directly applicable in all Member States, so no national implementing legislation is needed for it to take effect. Any national regulations that follow will play an auxiliary role but will not affect the regulation’s binding force.
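If you want a quick way to check which provisions already apply on a given date, the staged timeline above can be sketched in a few lines of code. This is purely illustrative – the milestone labels are shorthand, and the function name is our own invention, not anything from the Act:

```python
from datetime import date

# Key AI Act milestones, taken from the official implementation timeline
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibitions on certain AI practices"),
    (date(2025, 8, 2), "General-purpose AI model requirements"),
    (date(2026, 8, 2), "High-risk (Annex III) and transparency requirements"),
    (date(2027, 8, 2), "High-risk (Annex I) requirements; pre-2025 GPAI compliance"),
    (date(2030, 12, 31), "Deadline for pre-existing public-authority systems"),
]

def provisions_in_effect(on: date) -> list[str]:
    """Return the milestones that have already taken effect on a given date."""
    return [label for deadline, label in MILESTONES if deadline <= on]

# On January 1, 2026, the first three milestones have already taken effect:
print(provisions_in_effect(date(2026, 1, 1)))
```

The practical takeaway: prohibitions bite first, so checking your systems against the banned-practices list should come before any longer-term high-risk compliance work.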

What Will Non-Compliance with the EU AI Act Cost Your Company?

Many of you have probably been asking this question all along: what happens if I don’t follow these rules? Here’s the breakdown of potential penalties:

  1. For the most serious violations, like breaching the Act’s prohibitions, you’re looking at fines up to €35 million or 7% of your global annual revenue, whichever is higher.
  2. If you’re not complying with obligations for high-risk AI or general-purpose AI systems, the penalty could be up to €15 million or 3% of global annual revenue.
  3. Even seemingly minor infractions, like giving incorrect information to authorities, could cost you up to €7.5 million or 1% of global annual revenue.

Small and medium-sized businesses get a bit of leeway – they’ll pay the lower of these amounts. But let’s be clear: these fines are designed to make compliance a top priority for every company working with AI in the EU market.
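To make the “whichever is higher” rule concrete, here’s a minimal sketch. The tier amounts and percentages come from the Act; the revenue figures and the function itself are made-up examples for illustration:

```python
def max_fine(global_revenue_eur: float, tier: str) -> float:
    """Return the maximum possible AI Act fine for a violation tier.

    Fines are capped at the HIGHER of a fixed amount or a
    percentage of global annual revenue.
    """
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),   # most serious violations
        "high_risk_or_gpai": (15_000_000, 0.03),      # high-risk / GPAI obligations
        "incorrect_information": (7_500_000, 0.01),   # misleading authorities
    }
    fixed_cap, revenue_share = tiers[tier]
    return max(fixed_cap, revenue_share * global_revenue_eur)

# Hypothetical company with €2 billion in global annual revenue:
print(max_fine(2_000_000_000, "prohibited_practices"))   # 7% of revenue exceeds €35M
print(max_fine(100_000_000, "incorrect_information"))    # fixed €7.5M cap applies
```

Note that for small and medium-sized businesses the Act applies the lower of the two amounts, so the same sketch would use `min()` instead of `max()`.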

Navigating the EU AI Act: What It Means for You

We’ve covered a lot about the EU AI Act, but remember, this is just an overview. If you’re directly affected, we strongly recommend reading the full AI Act for complete details. When it comes to complex regulations like this, it’s always best to go straight to the source.

At DLabs.AI, we’ve been deeply involved in AI regulations from the start. We had the privilege of contributing to the Act’s development as part of the Open Loop project, where our CTO, Maciej Karpicz, raised key points about AI complexity that influenced the discussions.

Our experience has shown us the critical importance of this legislation. The EU AI Act will profoundly impact AI across Europe and beyond. Whether you’re developing, selling, or using AI, grasping the risk categories and your obligations is essential.

While the Act’s scope may seem daunting, remember that it’s being implemented gradually. The key is to start preparing now: assess your systems, understand your responsibilities, and plan for compliance.

At DLabs.AI, we’re committed to guiding our clients through these new waters. We’ve been on this journey from the beginning, and we’re here to help you navigate the complexities of the EU AI Act. Together, we can work towards a future where AI is not only powerful but also trustworthy and compliant.


EU AI Act FAQ: Key Questions and Answers

What is the AI Act?

It’s the world’s first comprehensive legislation to regulate AI across various sectors, aiming to balance innovation with societal protection.

When does the AI Act come into effect?

It entered into force on August 1, 2024, but different provisions will be implemented gradually until 2030.

What qualifies as an AI system under the AI Act?

Any machine-based system that can operate independently, adapt, process inputs to create outputs, and influence environments. The key feature is its ability to “infer” or learn.

How does the AI Act classify AI systems?

It uses a risk-based approach, categorizing systems as Prohibited, High-Risk, Limited Risk, or Minimal Risk.

Who needs to comply with the AI Act?

Any company whose AI system impacts EU citizens, regardless of where the company is based.

Are there any exemptions from the AI Act?

Yes, including military and defense AI, pure scientific research, and some open-source projects.

What are the key obligations for high-risk AI systems under the AI Act?

These include pre-market conformity assessments, post-market monitoring, and maintaining technical documentation.

How does the AI Act handle general-purpose AI?

It establishes two tiers: a base level with transparency obligations, and a higher tier for high-impact models with more stringent requirements.

What are the penalties for non-compliance with the AI Act?

Fines can reach up to €35 million or 7% of global annual revenue for the most serious violations.

How can companies prepare for AI Act compliance?

Start by identifying which category your AI systems fall into, then review the specific requirements for that category. Consider seeking legal advice for detailed compliance strategies.

How will the AI Act interact with existing laws?

Companies must comply with both the AI Act and other relevant EU laws. The Act will be integrated into existing regulations where applicable, with sectoral regulators overseeing enforcement.

Will there be new standards for AI?

Yes, European standards bodies are developing new “harmonized standards” to help with compliance. Until these are ready, companies can use approved codes of practice.

How does the Act support AI innovation? 

The Act mandates AI regulatory sandboxes across the EU, allowing companies to test and validate their AI systems under supervision before market launch.

Who will oversee the AI Act?

Oversight will be shared between national authorities and new EU-level bodies, including an AI Office and an AI Board.

What if my company isn’t based in the EU?

If your AI system impacts people in the EU, you’ll likely need to comply with the Act, regardless of where your company is based.
