On June 13, 2024, the European Parliament and the Council of the European Union adopted Regulation (EU) 2024/1689, which for the first time establishes a legal framework for artificial intelligence and addresses the risks arising from it. With this, the European Union has taken a leading role globally in regulating this area.
The Artificial Intelligence Act – Regulation (EU) 2024/1689 – sets out clear requirements and obligations for the use of artificial intelligence, while also reducing the administrative and financial burdens on businesses, especially small and medium-sized enterprises.
Regulation (EU) 2024/1689 is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. Together, these measures aim to guarantee the safety and fundamental rights of citizens and businesses when it comes to artificial intelligence, and to strengthen uptake, investment and innovation in AI across the EU.
Artificial intelligence has the potential to transform existing ways of working and living and offers major benefits for citizens, society and the European economy. Alongside these advantages, however, it also brings numerous risks. For this reason, the Act establishes rules intended to keep the risks of using artificial intelligence systems to a minimum and to ensure their transparency.
For artificial intelligence systems identified as high-risk, strict requirements are foreseen, including risk mitigation measures, activity logging, detailed documentation, human oversight and cybersecurity. This high-risk group includes, among others, artificial intelligence systems used to determine or assess whether a person is eligible for credit, as well as algorithms that operate autonomous robots.
The Act prohibits the use of artificial intelligence systems considered a “clear threat to basic human rights”, such as applications that manipulate people’s behavior, voice-assisted toys that can encourage “dangerous behavior” in minors, and systems that enable “social scoring” by governments or companies. In addition, the use of systems for recognizing emotions in the workplace, for categorizing people or for real-time remote biometric identification is prohibited.
The AI Act introduces transparency obligations for all general-purpose AI models, to enable a better understanding of these models, as well as additional risk-management obligations for highly capable and impactful models. These additional obligations include self-assessment and mitigation of systemic risks, reporting of serious incidents, model testing and evaluation, and cybersecurity requirements.
As artificial intelligence is a rapidly evolving technology, the Regulation takes a future-proof approach that allows the rules to adapt to technological change. For this reason, the European AI Office, established within the Commission in February 2024, oversees the implementation and enforcement of the Artificial Intelligence Act, ensuring that artificial intelligence develops in a way that respects human dignity, rights and trust.