On 2 February 2025, Chapters I and II of the AI Act enter into force, emphasising the need for AI literacy and banning certain harmful AI practices, as part of the Act’s phased implementation. The Regulation is designed to improve the functioning of the internal market and promote the adoption of human-centric, trustworthy AI. It seeks to protect fundamental rights, including democracy, the rule of law, and environmental sustainability, from the potential harmful effects of AI systems.
The Regulation emphasises the critical role of AI literacy for providers, deployers, and individuals interacting with AI systems. This literacy encompasses the skills, knowledge, and understanding needed to make informed decisions about the deployment and use of AI systems. Providers and deployers are tasked with ensuring that their staff and other involved parties have sufficient levels of AI literacy, considering their technical expertise, education, and the specific contexts in which AI systems will be used.
AI literacy is essential to equip relevant actors with the ability to comprehend the opportunities, risks, and potential harm associated with AI systems. It ensures that individuals understand both the technical aspects of AI development and operation and the social and legal implications of AI-driven decisions. Deployers, in particular, must ensure that those assigned to oversee AI systems are adequately trained and possess the authority to fulfil their responsibilities effectively. This includes understanding the operational instructions and maintaining human oversight to mitigate risks and enhance safety.
Furthermore, the Regulation explicitly prohibits certain AI practices deemed unacceptable under Article 5. These practices, which conflict with European values and pose significant risks to fundamental rights, are banned entirely within the EU.
Prohibited practices include:
- Social Scoring: AI systems designed to evaluate individuals’ trustworthiness or behavior using societal or personal metrics.
- Manipulative AI: Systems intended to exploit cognitive or behavioral vulnerabilities.
- Real-Time Biometric Identification: Remote biometric identification systems used in publicly accessible spaces for law enforcement purposes, with exceptions for targeted victim searches.
- Facial Recognition Scraping: AI systems creating or expanding facial recognition databases through the indiscriminate scraping of images from the internet or CCTV footage.
- Emotion Recognition in Sensitive Contexts: AI systems used to analyze emotions in workplaces or educational institutions.
These prohibitions reflect the EU’s commitment to safeguarding fundamental rights, privacy, and dignity, and addressing practices that could undermine trust in AI technologies.
Non-compliance with the prohibitions outlined in Article 5 carries severe consequences. Violations may result in administrative fines of up to €35 million or 7% of the offender’s total worldwide annual turnover, whichever is higher.
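The fine ceiling follows a simple "whichever is higher" rule, which can be illustrated with a short sketch. The function name and the idea of computing the cap programmatically are purely illustrative, not part of the Regulation; actual fines are set case by case by the competent authorities, and this only computes the statutory upper bound.

```python
def article5_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative upper bound of an Article 5 fine: the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the cap is EUR 70 million.
print(article5_fine_cap(1_000_000_000))   # 70000000.0

# For a smaller company with EUR 100 million in turnover, 7% is only
# EUR 7 million, so the EUR 35 million floor applies instead.
print(article5_fine_cap(100_000_000))     # 35000000.0
```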
The next milestone arrives on August 2, 2025, with the implementation of several key provisions designed to ensure robust governance, transparency, and accountability for artificial intelligence systems. These provisions include measures to establish notifying authorities, to address general-purpose AI models (GPAI), especially those that may pose systemic risks, and to enforce penalties for non-compliance. Together, they represent a critical phase in operationalising the Regulation’s comprehensive framework.
A foundational element of these rules involves the designation or establishment of notifying authorities by each Member State. These authorities will be tasked with assessing, designating, and notifying conformity assessment bodies (CABs) and monitoring their performance. By ensuring these CABs operate with transparency and impartiality, the Regulation seeks to uphold high standards in the assessment and certification of AI systems. This structure will provide essential oversight, ensuring that AI technologies comply with the Regulation’s safety and quality requirements.