The first compliance deadline of the EU Artificial Intelligence Act (AI Act) has arrived for companies. As of February 2, 2025, companies must implement the so-called 'AI literacy' requirements and avoid prohibited artificial intelligence practices. Understanding and implementing the requirements set forth by the AI Act can pose challenges for companies. It is therefore advisable to conduct a compliance check of artificial intelligence-based applications now that certain elements of the regulation apply in practice as of February 2.
What does AI literacy mean and what obligations does it entail?
AI literacy refers to the skills, knowledge, and understanding that enable companies to deploy artificial intelligence systems (AI systems) on an informed basis and to grasp their opportunities and risks, as well as the potential harms their operation may cause. The related obligation is that companies must provide appropriate training to employees (and even subcontractors) involved in the operation and use of AI systems before these systems are developed, placed on the market, deployed, or applied. Furthermore, companies must ensure that the roles concerned are filled by individuals with the appropriate competencies.
Identifying and avoiding prohibited artificial intelligence practices, whether related to development or application, presents a more complex requirement than the obligations arising from AI literacy.
What practices need to be avoided?
We have compiled a few examples of practices that are considered prohibited under the AI Act, and thus companies must avoid their development, deployment, or use:
- AI systems that use subliminal, manipulative, or deceptive techniques with the intent or effect of distorting the behavior of an individual or group, thereby impairing their decision-making ability, which may cause significant harm or have a reasonable likelihood of causing such harm;
- AI systems that exploit the vulnerability of an individual or group (e.g., age, disability, social or economic status) to distort their behavior, which may cause significant harm or have a reasonable likelihood of causing such harm;
- AI systems that evaluate or classify individuals or groups based on their social behavior over a certain period of time, or on their known, inferred, or predicted personal characteristics or personality traits, where this evaluation or classification leads to at least one of the following outcomes:
  - detrimental or unfavorable treatment in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
  - detrimental or unfavorable treatment that is unjustified or disproportionate to their social behavior or its gravity.
Based on the prohibited practices above, an AI system that manipulates consumers' purchasing decisions, or that exploits vulnerabilities arising from the age of elderly individuals to distort their behavior, is likely to qualify as a prohibited AI practice. We must emphasize that the AI Act contains further prohibited practices, so it is definitely advisable to examine a given AI system against the prohibitions defined in the regulation before any planned development, deployment, or use.
What sanctions can be expected?
In the case of a company, violations of the rules on prohibited practices may result in an administrative fine of up to €35,000,000 or up to 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher. By comparison, this significantly exceeds the maximum fine that can be imposed under the GDPR: €20,000,000 or up to 4% of the total worldwide annual turnover for the preceding financial year.
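To illustrate the arithmetic of the comparison above, a minimal sketch (assuming, as both regulations provide for undertakings, that the higher of the fixed cap and the turnover-based cap applies; the company and its turnover figure are hypothetical):

```python
def max_fine(turnover_eur: int, fixed_cap_eur: int, pct: int) -> int:
    """Maximum administrative fine: the higher of a fixed cap and a
    percentage of total worldwide annual turnover, as both the AI Act
    and the GDPR provide for undertakings."""
    return max(fixed_cap_eur, turnover_eur * pct // 100)

# Hypothetical company with €1 billion worldwide annual turnover:
turnover = 1_000_000_000

ai_act_cap = max_fine(turnover, 35_000_000, 7)  # 70,000,000 (7% exceeds €35M)
gdpr_cap = max_fine(turnover, 20_000_000, 4)    # 40,000,000 (4% exceeds €20M)
```

For smaller companies the fixed cap dominates instead: at €100 million turnover, 7% would be only €7 million, so the AI Act ceiling remains €35,000,000.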
However, it is worth noting that most provisions regarding the imposition of fines will only be applicable from August 2, 2025.
How can EY Law Hungary assist?
EY Hungary provides comprehensive support to its clients regarding the development, deployment, and use of AI systems; as part of this, EY Law Hungary offers the following legal services:
- legal interpretation necessary for making business decisions related to AI systems;
- examination of AI systems planned for development or use to determine whether they comply with the requirements set forth by the AI Act (even during the design phase);
- developing measures for identifying, assessing, and mitigating risks related to AI systems;
- supplementing or preparing internal policies, codes of conduct, and guidelines to ensure that the employees or contracting partners of the company use AI systems in a manner that does not jeopardize the company's legal compliance or reputation;
- preparing or reviewing contracts related to the development, deployment, placing on the market, or use of AI systems to ensure that the intended business objectives are met.
If your company needs professional assistance, please feel free to contact us.