The European Commission has published
new guidelines on the implementation of the bans provided for in
the AI Act, the European law on artificial intelligence; the bans
became applicable on Sunday 2 February.
The aim is to provide legal certainty to companies that supply
or use AI systems in the EU market, and to national authorities
responsible for supervising and enforcing the rules.
The guidelines concern eight prohibited practices, namely: the use
of manipulative and subliminal techniques; the exploitation of
people's vulnerabilities (related to age, disability, or social or
economic situation); social scoring; individual predictive policing
based solely on the profiling of individuals; untargeted scraping of
facial images from the internet or CCTV footage to create facial
recognition databases; emotion recognition in the workplace and in
schools; biometric categorisation systems using sensitive
characteristics (e.g. political, philosophical or religious beliefs,
sexual orientation and race); and real-time remote biometric
identification of individuals in public spaces by law enforcement
(with limited exceptions).
On governance, national market surveillance authorities and data
protection authorities are responsible for enforcing the rules and
will be able to take enforcement action in relation to the bans.
Member States must designate these authorities by 2 August 2025,
meaning that there will be no real public enforcement until
then.
Companies that violate the rules on bans can face fines of up to
€35 million or 7% of their annual global turnover.
The guidelines are not legally binding, but the Commission
intends to provide support and guidance to companies, especially
SMEs, to ensure consistent implementation of the AI Act across
the EU.
The AI Act Service Desk, which will be operational from summer 2025,
will offer an interactive platform with information material on the
rules of the AI Act.
The regulation, which came into force in August 2024, is the first
law of its kind in the world. It adopts a risk-based approach,
imposing a set of obligations on providers and developers of AI
systems according to the level of risk identified: unacceptable,
high, limited, and minimal or no risk.
Those AI practices that pose an unacceptable risk to safety and
fundamental rights are prohibited.
The prohibitions are the first provisions of the AI Act to become
applicable, followed by the rules on governance and the obligations
for general-purpose AI (applicable 12 months after entry into force)
and the obligations for high-risk AI systems (after 36 months).
The regulation as a whole will be applicable two years from
entry into force, on 2 August 2026.
The guidelines on the definition of AI systems are expected to
be published shortly.
Photo: Henna Virkkunen, Executive Vice-President of the European
Commission for Technology Sovereignty and Security