
AI Act: A new legal framework for Artificial Intelligence in Europe


26 Sep 2024

The introduction of the EU AI Act (Artificial Intelligence Act) marks a milestone in the regulation of the development and use of artificial intelligence (AI) in Europe. The Act entered into force on 1 August 2024 and aims to establish a legal framework that supports innovation while ensuring the safety and protection of citizens’ rights. The AI Act is the first such comprehensive regulation on AI in the world, making Europe a leader in the responsible development of this technology.


What is meant by the term AI?

Artificial intelligence (AI) is a technology that enables machines and computer systems to mimic human abilities, such as inference, learning, language comprehension, and decision-making. AI operates on the basis of data analysis, which enables it to make predictions, generate recommendations, and perform tasks of varying complexity in both physical and digital environments.

Key features of AI include:

  • Autonomy – AI systems can operate without the need for direct human supervision.
  • Adaptability – AI learns from the data it is provided with, allowing it to improve its performance and adapt its actions to new circumstances.
  • Inference – Based on the information it analyses, AI makes decisions and predictions.

Not all advanced technological systems meet the criteria to be classified as AI.

Examples of technologies that are not understood as AI include:

  • Algorithms with fixed functions – systems that perform only pre-programmed operations without the ability to learn or adapt independently.
  • Alert systems – technologies that respond automatically to specific stimuli, but cannot infer or adapt their actions based on new data.
  • Process automation – while automation can be complex, systems like manufacturing robots that operate according to set patterns, without the ability to learn, are not considered AI.

Thus, a key feature of AI is its capacity for self-development and adaptation, which is lacking in simpler, static technologies.


What is the AI Act?

The AI Act, or Regulation (EU) 2024/1689, is EU legislation that introduces uniform rules for the creation and use of AI systems. Its main goal is to create a trusted environment for the development of AI that respects human rights and ensures safety. It regulates issues such as ethics, safety, and accountability in the implementation of artificial intelligence.


Why is it needed?

With the rapid development of AI, there are concerns about transparency, discrimination, and accountability related to AI decisions. Examples include automated credit scoring or recruitment processes, where the lack of proper safeguards can lead to unjustified decisions. The AI Act aims to protect citizens from the undesirable effects of AI and support the transparent and responsible use of this technology.


Who will be affected by the AI Act?

The AI Act will impact a wide spectrum of businesses, not just those directly related to the IT industry or AI development. The regulations will affect suppliers, importers, distributors, and users of AI systems. Importantly, the regulation applies not only to companies based in the EU, but also to companies outside the EU if the outcomes of their AI systems are used within the EU. This means that even global companies must adapt their technologies to EU standards if their solutions are used in Europe.

Exemptions from the AI Act include AI systems used solely for military, defence, or national security purposes, as well as AI systems that are not placed on the EU market but whose output is used in the Union exclusively for those purposes.


New obligations for entrepreneurs

The AI Act imposes a number of obligations on businesses that develop, market, and use AI systems. Providers of AI systems, particularly those deemed high-risk, are required to assess risks, provide detailed technical documentation, and ensure the transparency and safety of their systems. They must regularly conduct compliance audits and update their technologies to meet the requirements of the AI Act.

Distributors and importers are tasked with ensuring that the AI systems they bring to market meet all legal requirements, and may face liability similar to that of providers in the event of violations. Users, on the other hand, are obliged to monitor the operation of the systems, report problems, and ensure that the technologies are used in accordance with their intended purpose and the provider's recommendations.


Risk classification according to the AI Act

The AI Act introduces four levels of risk associated with the use of artificial intelligence systems. The higher the risk level, the more stringent the regulatory requirements.

  1. Unacceptable risk – includes AI systems that have been deemed particularly dangerous. These are completely prohibited, such as social scoring or manipulating people’s behaviour with subliminal techniques. There are some exceptions in this category that allow the use of artificial intelligence, mainly in the context of law enforcement actions by state authorities.
  2. High risk – includes AI systems that may affect the health, safety, and rights of citizens, such as systems used in healthcare, transportation, or employee recruitment. There are numerous requirements that must be met for high-risk AI systems. These include conducting a thorough analysis and assessment of potential risks and implementing a comprehensive AI risk management system.
  3. Limited risk – applies to systems that may have a negative impact, but to a lesser degree. Examples include chatbots and generative AI (e.g., text and image generation). For limited-risk systems, providers must ensure an appropriate level of transparency – users must be aware that they are interacting with artificial intelligence.
  4. Minimal risk – includes the majority of AI systems, such as spam filters or computer games. These systems can be freely used without additional requirements.
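
To illustrate how a business might record the outcome of such a classification internally, the short Python sketch below maps a few hypothetical example systems to the four risk tiers and the broad type of obligation attached to each. The tier assignments, example system names, and the OBLIGATIONS mapping are illustrative assumptions only; classifying a real system requires a legal assessment against the AI Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional requirements

# Illustrative tier assignments only -- a real classification requires legal analysis.
EXAMPLE_SYSTEMS = {
    "social scoring platform": RiskTier.UNACCEPTABLE,
    "CV screening tool for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "e-mail spam filter": RiskTier.MINIMAL,
}

# Broad obligation attached to each tier, as described in the list above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited -- may not be placed on the EU market",
    RiskTier.HIGH: "risk assessment, technical documentation, risk management system",
    RiskTier.LIMITED: "transparency -- users must know they are interacting with AI",
    RiskTier.MINIMAL: "no additional obligations",
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} risk -> {OBLIGATIONS[tier]}")
```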

AI Act’s entry into force

The AI Act came into force on 1 August 2024, but its provisions will be implemented gradually to give businesses and member states time to adjust to the new regulations. Key dates and milestones for the implementation of the regulations are:

  • 1 August 2024 – The AI Act goes into effect, but enforcement of the regulations does not begin on this date. Instead, voluntary compliance with the regulations is encouraged.
  • 2 February 2025 – Provisions on the scope, definitions, AI literacy, and prohibited practices take effect.
  • 2 August 2025 – Rules on reporting, general-purpose AI (GPAI) models, enforcement, and financial penalties take effect.
  • 2 August 2026 – The transition period for high-risk AI systems ends, and the main provisions of the AI Act take effect.
  • 2 August 2027 – Provisions for high-risk systems covered by Article 6(1) of the AI Act take effect.
  • 2 August 2030 – The transition period for high-risk AI systems intended for use by public authorities ends.

Businesses should start preparing for the upcoming AI Act requirements now. Key steps include classifying their AI systems according to the EU risk taxonomy, assessing the impact of the regulations on current solutions, developing appropriate business strategies, and implementing AI management frameworks.


Sanctions under the AI Act

The AI Act introduces severe financial penalties for entities that fail to comply with the new AI regulations. The penalty structure is designed to be proportional to the size of the company, while being a deterrent and ensuring effective enforcement of the regulations.

The key penalty rules are:

  • Violation of prohibited AI practices: Companies using technologies banned by the AI Act, such as manipulating user behaviour or real-time remote biometric identification, may face fines of up to €35 million or 7% of the company’s annual global turnover, whichever is higher.
  • Violation of other requirements: Companies that fail to comply with other AI Act requirements, such as rules on risk management or technical documentation, may be subject to fines up to €15 million or 3% of annual global turnover, whichever is higher.
  • Providing incorrect information: Companies that provide incorrect, incomplete, or misleading information to notified bodies or national authorities may face fines of up to €7.5 million or 1% of annual global turnover, whichever is higher.

These sanctions are intended to ensure compliance with the regulations by all entities that use artificial intelligence, and to protect users' rights and the public interest.
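
To make the "whichever is higher" rule concrete, the short Python sketch below computes the upper limit of a fine for each violation category from a company's annual global turnover. The thresholds mirror the figures listed above; the function name, category labels, and the example turnover figure are illustrative assumptions.

```python
# Maximum fine per violation category: (fixed cap in EUR, share of annual global turnover).
FINE_CAPS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_requirements": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(category: str, annual_global_turnover_eur: float) -> float:
    """Return the fine cap: the fixed amount or the turnover share, whichever is higher."""
    fixed_cap, turnover_share = FINE_CAPS[category]
    return max(fixed_cap, turnover_share * annual_global_turnover_eur)

# Example: with EUR 2 billion annual global turnover, a prohibited practice is capped at
# max(EUR 35 million, 7% of EUR 2 billion) = EUR 140 million.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```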


Summary

The AI Act is the world’s first comprehensive regulation of artificial intelligence, aimed at ensuring that its development and use comply with principles of ethics, safety, and the protection of human rights. Enacted on 1 August 2024, the AI Act covers a wide range of entities – from AI technology providers to companies that use artificial intelligence systems in their operations. The regulations apply not only in the European Union, but also to enterprises outside the EU if their AI systems affect EU citizens.

The AI Act aims to create a balanced environment for the development of innovation while ensuring that artificial intelligence is developed in a responsible, transparent, and safe manner for society. For companies, this means carefully analysing whether their systems comply with the new regulations and taking action to ensure full compliance before the relevant provisions take effect.

You can find the full text of the AI Act at: eur-lex.europa.eu

If you have any questions regarding this topic or if you need any additional information, please do not hesitate to contact us:


CUSTOMER RELATIONSHIPS DEPARTMENT

ELŻBIETA NARON-GROCHALSKA
Head of Customer Relationships Department / Senior Manager
getsix® Group

***

This publication is non-binding information and serves for general information purposes. The information provided does not constitute legal, tax or management advice and does not replace individual advice. Despite careful processing, all information in this publication is provided without any guarantee for the accuracy, up-to-date nature or completeness of the information. The information in this publication is not suitable as the sole basis for action and cannot replace actual advice in individual cases. The liability of the authors or getsix® is excluded. We kindly ask you to contact us directly for a binding consultation if required. The content of this publication is the intellectual property of getsix® or its partner companies and is protected by copyright. Users of this information may download, print and copy the contents of the publication exclusively for their own purposes.
