
In last week’s post, I discussed a trailblazer on the U.S. comprehensive AI legislative front: Colorado. This week, we tackle the original trailblazer: the European Union’s (EU) Artificial Intelligence Act, Regulation (EU) 2024/1689 (commonly referred to as the “EU AI Act” or the “AI Act”). First proposed in April 2021, the EU AI Act entered into force on August 1, 2024. The Act takes a staggered approach to the dates on which specific provisions become applicable, a fairly common practice in the AI/privacy space given the complexity of the requirements, allowing businesses a reasonable time frame to come into compliance. The next set of provisions is fast approaching, set to apply beginning February 2, 2025: Chapters I and II of the EU AI Act. Chapter I sets out the Act’s scope of application, and Chapter II contains a comprehensive list of prohibited AI practices. In addition, Chapter I, Art. 4, mandates that providers and deployers take measures to ensure their staff have a sufficient level of AI literacy.
Landmark legislation usually comes with key (sometimes unique) terms. Thankfully, some of these overlap with those in the Colorado AI Act we discussed last week. The key terms of the Act include:
- “Deployer” – a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity
- “Provider” – a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge
- “Importer” – a natural or legal person located or established in the EU that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country
Interestingly, both the Colorado AI Act and the EU AI Act use the term “deployer” similarly, but the EU AI Act’s definition of “provider” is roughly analogous to Colorado’s definition of “developer.” Unsurprisingly, these three terms remain faithful to terminology established under the EU’s General Data Protection Regulation (“GDPR”). Like the GDPR, the EU AI Act is not limited in application to those established in the EU; it also applies to providers and deployers of AI systems located in a third country where the output produced by the AI system is used in the EU.
As far as how the EU AI Act classifies AI systems, the Act contains an entire chapter (Ch. III) dedicated to the classification, requirements, and management of high-risk AI systems, including an annex that lists the areas or sectors where specific uses of an AI system would be considered “high-risk.” In general, the high-risk designation applies to systems that may negatively affect safety or fundamental rights. Aside from the sectors listed in the annex, AI systems that serve as a safety component of a product, or that are themselves products subject to the EU’s product safety laws, are also designated “high-risk.” For example, AI systems within toys, aviation, automobiles (autonomous vehicles), medical devices, and elevators would be considered “high-risk.”
An even stricter designation than high-risk is the “unacceptable risk” category. This includes AI systems that employ social scoring (e.g., classifying people based on behavior, socio-economic status, or personal characteristics), cognitive behavioral manipulation of people, biometric categorization of people based on sensitive characteristics, and real-time remote biometric identification systems in publicly accessible spaces. These uses are deemed unacceptable and are, therefore, banned.
Below the “unacceptable risk” and “high-risk” categories are AI systems that pose “limited risk.” Limited-risk AI systems include, for example, a system that performs a narrow procedural task, such as classifying incoming documents into categories, or a system used to detect duplicates among a large number of applications. Finally, at the lowest level are “minimal-risk” AI systems, such as spam filters, which carry no obligations under the AI Act.
Later this year, on August 2, 2025, EU member states must designate the competent authorities that will serve as enforcers of the AI Act, overseeing the application of its rules and carrying out market surveillance activities. In addition, the AI Act establishes three advisory bodies: the European Artificial Intelligence Board (the Board), which will be the main body; the Advisory Forum, which will provide technical expertise and advise the Board and the European Commission; and the Scientific Panel of Independent Experts, which will support enforcement and advise and support the AI Office. Given that at least two major sets of provisions of the EU AI Act will become effective in 2025, with full enforcement commencing in August 2026 (except for Article 6(1), which follows in 2027), 2025 will be a big year for European entities, and even for businesses around the world that want to continue doing business in Europe, to establish that their AI-integrated systems are compliant. It would be in their best interest to do so, since the legislation that follows in other jurisdictions (e.g., the Colorado AI Act) tends to mirror similar requirements.
In the interim, as always, please feel free to contact me if you have further questions, thoughts, or comments. The more you interact with me, the better I will be able to address real-world concerns. If you prefer, you can use the link below to book a meeting with me if you have any pressing issues you need to get off your chest. Click: https://isazalaw.com/booking
The information you obtain at this site or blog is not, nor is it intended to be, legal advice. You should consult an attorney for advice regarding your individual situation. We invite you to contact us through the website, email, phone, or LinkedIn. Contacting us does not create an attorney-client relationship. Please do not send any confidential information to us until such time as an attorney-client relationship has been established.