
Everything You Need to Know About the New EU AI Act

By: Arooj Anwar | 8 October

The EU AI Act officially entered into force on August 1, 2024, bringing major changes to how artificial intelligence is regulated in Europe. This new law aims to balance AI innovation with safety and fairness. In this blog post, we’ll walk you through what the EU AI Act is all about, who it affects, and what you need to do to stay compliant. We’ll cover the main parts of the Act, key deadlines, and what this means for different types of organizations. Let’s dive in!

What is the EU AI Act? 

The AI Act is the first comprehensive set of rules for artificial intelligence worldwide. Its goal is to ensure that AI systems are used responsibly and ethically. By aligning AI technology with fundamental rights and safety standards, the Act aims to foster innovation while also protecting people and minimizing risks. Essentially, it’s about making sure that AI benefits society without compromising on safety or fairness.  

Why do we need AI regulations? 

AI has the power to transform industries and spark big advancements across Europe. But as AI technology evolves, it also brings new risks. These include potential threats to safety, privacy, and even democratic values. For instance, some AI systems could harm user safety or violate fundamental rights if they’re not properly managed. 

Without clear rules, businesses, governments, and individuals could misuse AI. Also, if different countries handle AI regulations differently, it could create inconsistencies and hinder the development of a unified market. 

That’s where the EU AI Act comes in. The AI Act addresses these concerns by establishing clear rules and standards for different types of AI systems, ensuring they are used in a manner that is transparent and fair.  

Who will the AI Act affect? 

The AI Act has different rules for various types of companies involved with AI. It will impact a broad range of organizations, not just those developing AI technology. Here’s who will be affected: 

  • Providers: Companies that sell or offer AI systems or general-purpose AI models in the EU. 
  • Deployers: Companies that use AI systems and are based in the EU. 
  • Users: Any company, whether a provider or deployer, whose AI system’s output is used in the EU, even if the company itself is based outside the EU. 
  • Importers and distributors: Companies that import or distribute AI systems. 
  • Product manufacturers: Companies that include AI systems as part of their products and sell them under their own name or brand. 

So, it's not just tech companies that develop AI. Any company that uses AI or its results in the EU needs to follow these rules. This includes businesses in all sectors, whether they use AI for specific decisions like credit assessments or for broader tasks like recruitment. 

And remember, it doesn’t matter if a company is based inside or outside the EU. If their AI systems or results are used in the EU, they must comply with the Act. 

What are the four risk categories in the AI Act? 

The AI Act divides AI systems into four risk levels, each with different rules. What you need to do to stay compliant depends on which risk level your AI system falls into. Here’s a quick guide to help you understand these levels: 

  1. Unacceptable risk
    These AI systems are banned because they can seriously harm people’s safety and rights. This includes: 

    Social scoring: AI systems that rank people based on their behavior, such as China’s social credit system, which affects their access to services and opportunities. 

    Dangerous AI toys: AI-powered toys that pose safety risks, such as voice-assisted dolls that encourage children to engage in dangerous behavior. 

    Emotional manipulation: AI systems that exploit emotional vulnerabilities, such as mental health apps that may manipulate users’ emotional struggles to keep them reliant on the app. 

    Real-time facial recognition: AI used by law enforcement for real-time facial recognition in public spaces, which is banned except in very rare situations. 

  2. High risk
    This category includes AI systems used in critical areas where mistakes could have significant consequences. Examples are: 

    Healthcare: AI systems that diagnose diseases or assist in medical procedures, due to their potential impact on patient health. 

    Finance: AI used for making decisions about loans or other financial services, where mistakes can have serious economic consequences. 

    Law enforcement: AI that is used for profiling or surveillance, which can impact privacy and civil liberties. 

    Education and employment: AI systems that influence access to education, training, or job opportunities, affecting individuals’ professional and educational futures. 

    High-risk systems must meet strict requirements, including thorough risk assessments, high-quality data, detailed documentation, and human oversight. They must also be closely monitored to ensure they remain safe and fair. 

  3. Limited risk 
    AI systems here, like chatbots, don’t pose major risks but still need to be transparent. For example, if you’re chatting with an AI, it should clearly say that it's not a real person. This helps build trust and keeps things clear. 

    Fun fact: In one of our blog posts, we asked ChatGPT to write a phishing email, and it did! This shows that even well-designed AI can be misused in risky ways, highlighting the need for better controls and ethical guidelines. 

  4. Minimal or no risk 
    These are low-risk AI systems, like spam filters or video games. They have very few regulations, but it's still a good idea to keep an eye on them to avoid any potential issues. Most everyday AI systems fall into this category. 

To figure out where you fall under the AI Act, start by looking at how your AI is used and the risks it might pose. If it's involved in critical areas like healthcare or finance, it’s probably high-risk, while everyday tools like chatbots might be limited risk. Make sure to review the AI Act’s detailed guidelines to understand the specific criteria for each category. It’s also a good idea to conduct a risk assessment to evaluate how your AI system could impact safety or people’s rights (a rough sketch of such a triage follows below).  
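To make that first triage a bit more concrete, here is a minimal, purely illustrative Python sketch of how an internal pre-assessment might be structured. The flags, domain lists, and category mapping are simplified assumptions for this example; they are not an official AI Act classification tool, and any real assessment should follow the Act’s detailed criteria and qualified legal advice.

```python
from dataclasses import dataclass

# Illustrative only: a simplified internal pre-assessment, not an official
# AI Act classifier. The practice and domain lists below are assumptions.
PROHIBITED_PRACTICES = {"social_scoring", "emotional_manipulation", "realtime_facial_recognition"}
HIGH_RISK_DOMAINS = {"healthcare", "finance", "law_enforcement", "education", "employment"}

@dataclass
class AISystem:
    name: str
    domain: str                  # e.g. "finance", "customer_support"
    practices: set[str]          # e.g. {"social_scoring"}
    interacts_with_people: bool  # e.g. a chatbot talking to end users

def preliminary_risk_category(system: AISystem) -> str:
    """Return a rough, non-binding risk tier for internal triage."""
    if system.practices & PROHIBITED_PRACTICES:
        return "unacceptable"
    if system.domain in HIGH_RISK_DOMAINS:
        return "high"
    if system.interacts_with_people:
        return "limited"  # transparency duties, e.g. disclosing it is an AI
    return "minimal"

# Example: a loan-scoring model used by a bank would land in the high-risk tier.
loan_scoring = AISystem("loan-scoring", "finance", set(), interacts_with_people=False)
print(preliminary_risk_category(loan_scoring))  # -> "high"
```

A triage like this only tells you where to look first; the actual obligations follow from the Act’s annexes, the forthcoming guidance, and your own documented risk assessment.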

Key deadlines for the AI Act 

With August 1, 2024, now behind us, the AI Act has officially taken effect. Here’s what’s coming up next: 

February 2025: The prohibitions on unacceptable AI systems will start to apply. This means certain unacceptable AI uses will be banned, ensuring that only safe and ethical technology is in play. 

May 2025: New codes of practice for general-purpose AI models will be introduced. These guidelines will help ensure that AI tools used across various sectors meet the required standards. 

August 2025: The rules for general-purpose AI models will become mandatory. By this date, full compliance with these new guidelines will be required. 

August 2026: Obligations for high-risk and specific AI categories will come into force. If your AI falls into these categories, make sure you're ready to meet the new requirements. 

August 2027: Full compliance will be required for AI systems embedded in regulated products. This is when the Act’s provisions will fully impact a wide range of AI applications. 

How the AI Act will affect different types of organizations 

The EU AI Act will affect many different types of organizations. It will be especially important for those heavily involved in AI or working in high-risk areas. To help you get ready for these changes, here’s a look at groups that will be impacted: 

  • Tech companies like Google, Amazon, and Meta (Facebook) rely on AI for various services, such as chatbots and facial recognition. They will need to ensure their AI systems are accurate, transparent, and safe, which will require significant adjustments. 
  • Industries such as healthcare, finance, and law enforcement use AI for critical decisions, like approving loans or diagnosing medical conditions. These sectors will face stringent requirements due to the high-risk nature of their AI systems, requiring rigorous testing, documentation, and ongoing monitoring. 
  • Small and medium-sized enterprises (SMEs) will also be affected. Although fines may be lower, compliance costs can be substantial. Startups without dedicated legal or compliance teams might struggle with the new requirements, potentially needing to hire additional staff or consultants. 

What could go wrong? 

Some companies are particularly vulnerable under the new regulations: 

  • Amazon and Google could face challenges with their facial recognition and surveillance technologies, which have been criticized for bias. They might need to revise or halt these technologies in Europe if they don’t align with the new standards. 
  • Meta (Facebook), known for its AI-driven algorithms for ads and content moderation, could face significant penalties if its systems cause harm or spread harmful content without sufficient transparency and control. 
  • Clearview AI, a company known for its controversial facial recognition technology, could be banned from operating in the EU unless it changes its data practices.

Is the AI Act future-proof? 

Yes, the AI Act is designed with the future in mind. Instead of locking down specific technical details, it focuses on setting broad goals for safety and fairness. This way, it can adapt as new technologies and methods emerge. The Act allows for updates through delegated acts, so if new high-risk AI uses come up, they can be added to the regulations. Plus, there will be regular evaluations to keep the Act relevant and effective. The European AI Office will oversee everything, making sure the rules stay up-to-date and continue to support ethical AI development. 

Preparing for the changes ahead 

The launch of the EU AI Act marks a significant step towards ensuring that AI technologies are used responsibly and ethically. As we've explored, the Act introduces clear guidelines and standards to help manage the risks associated with AI, balancing innovation with safety and fairness. 

For many of you, this means understanding where your AI systems fit within the new categories, whether they’re deemed unacceptable, high-risk, limited, or minimal risk. Knowing where you stand is key to navigating compliance smoothly and making the most of these regulations. 

Check out our course on AI tools to learn about key precautions and best practices for securely using popular AI tools like ChatGPT, Microsoft Copilot, and Gemini, helping you leverage their potential while minimizing risks. 

While our expertise is in cybersecurity awareness and phishing training, we encourage you to embrace these new regulations as an opportunity to enhance transparency and trust in your AI practices. If you need support in building a culture of awareness and preparedness, we’re here to help with training solutions that strengthen your overall security framework.