The AI Act in a nutshell
The AI Act, proposed by the European Commission, is an ambitious initiative, to say the least. Its goal? To create an environment where AI innovation can thrive while protecting citizens. Nothing less.
In other words, it is about ensuring that AI systems respect the EU's fundamental values, such as privacy, transparency, and non-discrimination.
Classification of risks and obligations
The AI Act classifies AI systems into four risk categories: unacceptable, high, limited, and minimal.
- Systems that pose an unacceptable risk, such as those using subliminal manipulation techniques, are prohibited.
- High-risk systems, such as those used in critical infrastructure or public services, are subject to strict requirements, including compliance assessments and regular audits.
- Systems with limited risk (for example, AI that generates or manipulates images) are required to inform users that they are interacting with an AI system.
- Minimal-risk systems, such as video games or spam filters, are not subject to any specific requirements, but providers are strongly encouraged to follow voluntary codes of conduct (focusing in particular on environmental sustainability and accessibility for people with disabilities).
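The four-tier scheme above boils down to a simple lookup: each risk category maps to example systems and an obligation. A minimal Python sketch (tier names and obligations are paraphrased from this article, not taken from the regulation's text):

```python
# Illustrative summary of the AI Act's four risk tiers as described above.
# This is a didactic sketch, not an implementation of the regulation.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["subliminal manipulation techniques"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["critical infrastructure", "public services"],
        "obligation": "compliance assessments and regular audits",
    },
    "limited": {
        "examples": ["AI that generates or manipulates images"],
        "obligation": "inform users they are interacting with an AI system",
    },
    "minimal": {
        "examples": ["video games", "spam filters"],
        "obligation": "no specific requirements; code of conduct encouraged",
    },
}

def obligation_for(tier: str) -> str:
    """Return the obligation attached to a risk tier (illustrative only)."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("unacceptable"))  # prohibited
```

The key point the table makes visible: obligations scale with risk, from an outright ban down to purely voluntary commitments.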
These measures are intended not only to prevent potential abuse, but also to strengthen public confidence in AI.
When will the AI Act come into effect?
The regulation, first proposed in April 2021, went through several rounds of consultation and revision. These discussions clarified the requirements for general-purpose AI systems and defined risk categories for AI applications. The process culminated on August 1, 2024, when the AI Act came into force in the European Union.
Its implementation will be gradual.
- February 2025: application of Chapters I (general provisions) and II (prohibited AI practices)
- August 2025: application of Chapter III, Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), and Chapter XII (penalties)
- August 2026: application of the remainder of the regulation, with the exception of certain obligations concerning "high-risk" AI systems
- August 2027: application of the entire regulation
What are the implications of the AI Act?
An impact on innovation and competitiveness
One of the major challenges of the AI Act? Striking a balance between regulation and innovation. While AI carries risks, it is also a genuine breeding ground for opportunity and innovation. Industry players are therefore wondering how these rules could affect the EU's competitiveness on the world stage.
However, if we think about the long term, a clear regulatory framework could actually attract investment and strengthen the EU's position as a leader in ethical AI. Watch this space...
Mixed reactions
The AI Act has not met with unanimous approval.
- On the one hand, human rights advocates and consumer protection groups welcome the EU's efforts to protect citizens from the potential risks of AI.
- On the other hand, some technology companies are expressing concerns about the complexity and cost of complying with this new regulation.
In any case, the European Commission remains optimistic: the AI Act will create an environment conducive to responsible, human-centered AI. By setting high standards, the EU hopes not only to protect its citizens but also to influence global AI regulations. A wonderful project!
A complex implementation
The implementation of the AI Act will not be without challenges:
- national authorities will need to be trained and equipped to monitor and enforce the new rules;
- it will be crucial to maintain consistency in the application of regulations across the various EU member states;
- international cooperation will be essential, as many technology companies operate on a global scale.
Last but not least, the rules must remain relevant in the face of rapidly evolving technology. AI is a constantly changing field, and laws must be flexible enough to adapt to future innovations while maintaining a high level of protection.
And in other parts of the world?
Let's finish by taking a step back: the European Union's approach to AI remains very different from that of other regions of the world. In the United States, for example, AI regulation is more fragmented and often left to the initiative of companies. In China, the emphasis is on rapid innovation, sometimes at the expense of privacy protection.
The EU's approach, focused on protecting fundamental rights, could become a model for other countries seeking to balance innovation and ethics...
Conclusion
Ultimately, the AI Act is not just about legal compliance: it is also an opportunity for Europe to shape the future of technology in a way that reflects its values and priorities. By addressing the challenges and seizing the opportunities presented by this regulation, the EU can play a leading role in defining the AI of the future!
And if artificial intelligence in cybersecurity is a topic that interests you, take a look at these two articles:
- A comprehensive definition of ISO/IEC 42001
- A list of prompts you can use as a cyber expert to simplify your daily life—safely, of course!