EUROPE’S AI ACT

By Cianan Clancy

The EU AI Act: All You Need to Know 

The EU AI Act is the world's first comprehensive regulatory framework for artificial intelligence (AI). The law is designed to ensure transparency and accountability in the deployment of AI systems within the European Union. It imposes obligations on companies placing AI technologies on the EU market, or whose systems affect people in the EU, irrespective of where those systems are developed or deployed.

Proposed by the European Commission in April 2021, the AI Act was formally adopted by the European Parliament on March 13, 2024, with overwhelming support. The agreed-upon text is anticipated to receive final adoption in April 2024. 

In this article, we explain how the EU AI Act applies and offer important tips for reducing risks associated with the new regulations.    

What Are the Goals of the AI Act? 

The primary objectives of the EU AI Act are to address the inherent risks and opportunities associated with AI across various domains such as health, safety, fundamental rights, democracy, rule of law, and the environment in the EU. Moreover, it aims to stimulate innovation, growth, and competitiveness within the European Union’s internal AI market.

With businesses increasingly turning to AI, particularly through emerging technologies like Generative AI (GenAI), there’s a growing need to establish responsible and controlled AI practices. 

Who Falls Under the AI Act?

The EU AI Act affects businesses involved in creating or utilising AI systems, as well as those engaged in selling, distributing, or importing such systems. It extends to both entities within the EU and entities outside the EU if their systems have an impact within the Bloc.

The AI Act takes a risk-based approach, categorising AI systems based on their potential use and their potential impact on individuals and society.

Providers of general-purpose AI models, including large GenAI models such as those behind ChatGPT, are subject to certain obligations. Providers of free and open-source models are largely exempt, although the exemption does not extend to general-purpose AI models that pose systemic risks. Note also that if a GenAI model is used in a process whose outputs are deemed high-risk, that use case is treated as high-risk.

The obligations do not extend to research, development, and prototyping activities preceding market release, nor do they apply to AI systems used exclusively for military, defence, or national security purposes, regardless of which entity carries out these activities.

The Different Risk Categories Under the AI Act 

The European Commission follows a risk-based approach to establish appropriate and enforceable regulations for AI systems, comprising four risk categories: Unacceptable risk, High risk, Limited risk, and Minimal risk. These categories are determined by the AI system's intended purpose, the potential risk of harm to individuals' fundamental rights, the severity of potential harm, and the likelihood of occurrence. The Act also sets out dedicated transparency requirements and identifies systemic risks.

Unacceptable Risk

Examples falling under the “Unacceptable risk” category include:

  • Social scoring for both public and private purposes.
  • Exploitation of individuals’ vulnerabilities and the use of subliminal techniques.
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement, with limited exceptions.
  • Biometric categorisation of individuals based on sensitive traits such as race, political opinions, religion, or sexual orientation, except for the labelling or filtering of lawfully acquired datasets in the law enforcement context.
  • Individual predictive policing.
  • Emotion recognition in the workplace and educational settings, except for medical or safety purposes (e.g., monitoring pilot fatigue levels).
  • Untargeted scraping of the internet or CCTV for facial images to create or expand databases.

High Risk

Examples categorised as “High risk” include:

  • Critical private and public services, such as financial institutions, which employ credit-scoring models that could deny individuals access to loans.
  • Employment and worker management, including the use of CV-sorting software in recruitment processes.
  • Critical infrastructures like transportation systems, which could jeopardise citizens’ safety and health.
  • Educational or vocational training that impacts access to education and professional paths, such as exam scoring.
  • Safety components of products, such as AI applications in robot-assisted surgery.
  • Law enforcement activities that may infringe upon individuals’ fundamental rights, such as evaluating the reliability of evidence.
  • Systems designed to influence decisions regarding individuals’ eligibility for health and life insurance.
  • Migration, asylum, and border control management, such as verifying the authenticity of travel documents.
  • Administration of justice and democratic processes, including applying the law to specific sets of circumstances.

Before introducing a high-risk AI system to the EU market or putting it into operation, providers must undergo a conformity assessment. This process verifies compliance with mandatory requirements for trustworthy AI, including data quality, documentation, transparency, human oversight, accuracy, cybersecurity, and robustness. This assessment must be repeated if significant modifications are made to the system or its purpose.

Additionally, providers of high-risk AI systems must establish robust AI governance, particularly focusing on quality control and risk management, to ensure ongoing compliance and minimise risks for users and affected individuals, even post-market release.

High-risk AI systems deployed by public authorities or their affiliates must be registered in a public EU database.

Limited Risk 

In the “Limited risk” category, compliance obligations primarily revolve around transparency, with lighter requirements. Users must be notified when engaging with an AI system unless it’s readily apparent that the outputs are AI-generated. Examples include:

  • Informing users when interacting with a chatbot.
  • Disclosing that deep-fake content has been artificially generated or manipulated.

Minimal Risk 

AI systems that do not fall into the aforementioned categories are exempt from compliance obligations under the EU AI Act. Technology providers will primarily focus on addressing the high-risk and limited-risk categories; other AI systems can be developed and used under existing legislation without additional legal obligations. One example of minimal risk is the use of AI within video games.

Additional risks to consider are:

Specific transparency risk: Certain AI systems, like chatbots, require specific transparency requirements due to the potential risk of manipulation. Users should be informed when interacting with a chatbot.

Systemic risks: General-purpose AI models, such as large GenAI models, pose systemic risks. These models, used for various tasks, could lead to serious accidents or be exploited for widespread cyberattacks. Harmful biases propagated by these models across multiple applications could affect numerous individuals.

Penalties for AI Act Infringements

Penalties for non-compliance are stringent and will be enforced by the designated AI authority within each EU member state. The Act outlines the following thresholds:

  • Up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year for violations related to prohibited practices or non-compliance with data-related requirements.
  • Up to €15 million or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with other regulations or obligations, including breaches of rules concerning general-purpose AI models.
  • Up to €7.5 million or 1.5% of the total worldwide annual turnover of the preceding financial year for supplying incorrect, incomplete, or misleading information to notified bodies and competent national authorities in response to requests.

For each infringement category, small and midsize enterprises (SMEs) will face the lower of the two amounts, while larger companies will face the higher amount. The Commission, with input from the EU AI Board, will develop guidelines to harmonise national rules and practices in determining administrative fines.
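
To make the thresholds concrete, here is a minimal Python sketch of the fine ceilings as summarised above. The tier figures and the SME rule ("lower of the two amounts") follow this article's summary rather than the legal text, so treat it as illustrative.

    # Fine ceilings per infringement tier, as summarised in this article:
    # (fixed cap in EUR, percentage of total worldwide annual turnover)
    TIERS = {
        "prohibited_practices":  (35_000_000, 0.07),
        "other_obligations":     (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.015),
    }

    def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool) -> float:
        """Return the maximum fine ceiling for an infringement tier.

        SMEs face the lower of the fixed cap and the turnover-based cap;
        larger companies face the higher.
        """
        fixed_cap, pct = TIERS[tier]
        turnover_cap = pct * annual_turnover_eur
        return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

    # Example: a large company with EUR 2bn turnover violating a prohibition
    # faces a ceiling of max(35m, 140m) = EUR 140m.
    print(max_fine("prohibited_practices", 2_000_000_000, is_sme=False))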

When Will the AI Act be Fully Operational?

After adoption by the European Parliament and the Council of the EU, the AI Act will enter into force 20 days after its publication in the Official Journal. It will be fully applicable 24 months after that date, with a phased implementation as outlined below:

  • Six months: Prohibited systems must be phased out within this timeframe.
  • 12 months: Obligations regarding general-purpose AI governance will take effect.
  • 24 months: All regulations outlined in the AI Act, including obligations for high-risk systems listed in Annex III, will be enforced.
  • 36 months: Obligations for high-risk systems listed in Annex II (harmonisation legislation) will be applicable.
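
To illustrate how the phases stack up, the following sketch computes the milestone dates from a hypothetical publication date; the actual date was not yet known at the time of writing, so the chosen date is an assumption.

    import calendar
    from datetime import date, timedelta

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole months, clamping the day if needed."""
        month_index = d.month - 1 + months
        year, month = d.year + month_index // 12, month_index % 12 + 1
        day = min(d.day, calendar.monthrange(year, month)[1])
        return date(year, month, day)

    publication = date(2024, 6, 1)  # hypothetical publication date
    entry_into_force = publication + timedelta(days=20)

    milestones = {
        "Prohibited systems phased out":   add_months(entry_into_force, 6),
        "General-purpose AI governance":   add_months(entry_into_force, 12),
        "All Annex III obligations":       add_months(entry_into_force, 24),
        "Annex II obligations":            add_months(entry_into_force, 36),
    }
    for label, when in milestones.items():
        print(f"{label}: {when}")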

Key Steps Businesses Can Take to Mitigate Risk 

As businesses grapple with the complexities of AI governance and strive to meet the requirements laid out in the EU AI Act, it’s crucial to adopt tangible steps that facilitate compliance and responsible AI deployment. In the following section, we’ll outline actionable measures that organisations can implement today to navigate these challenges effectively.

Establish Clear Communication Channels

Ensure that there are clear channels for communication and reporting regarding AI-related issues within the organisation. Encourage employees to report any concerns or potential risks associated with AI systems promptly. Establishing a culture of transparency and accountability helps in identifying and addressing AI-related risks effectively.

Regularly Review and Update Policies

Continuously review and update AI governance policies and procedures to adapt to evolving regulatory requirements and technological advancements. Regular audits of AI systems and governance structures help in identifying gaps or areas for improvement, ensuring ongoing compliance and effective risk management.

Develop an AI Exposure Register

Assessing the risks associated with AI use in your organisation requires establishing a baseline of your current AI exposure. This includes native AI systems, updated existing systems now incorporating AI, and AI usage by third-party providers offering services like Software-as-a-Service (SaaS). An AI exposure register enables a thorough evaluation of all AI-related risks.
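
By way of illustration, an exposure register entry might be modelled as follows. The field names and categories are assumptions for demonstration, not a prescribed schema.

    from dataclasses import dataclass, field
    from enum import Enum

    class Origin(Enum):
        NATIVE = "built in-house"
        RETROFITTED = "existing system updated to incorporate AI"
        THIRD_PARTY = "AI used via a third-party service (e.g. SaaS)"

    @dataclass
    class ExposureEntry:
        system_name: str
        origin: Origin
        owner: str                      # accountable team or role
        use_cases: list[str] = field(default_factory=list)
        personal_data_involved: bool = False
        notes: str = ""

    register: list[ExposureEntry] = [
        ExposureEntry("CV screening tool", Origin.THIRD_PARTY,
                      owner="HR", use_cases=["candidate shortlisting"],
                      personal_data_involved=True),
    ]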

Conduct Risk Assessments for Identified Use Cases

Utilise the EU AI Act Risk Assessment Framework to evaluate each use case listed in your AI exposure register. Take the necessary measures to mitigate identified risks and ensure that appropriate governance and controls are in place for managing them.
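
As a purely illustrative aid, the sketch below maps use-case descriptions onto the Act's four tiers with simple keyword rules. The keywords and the triage logic are assumptions for demonstration; a real assessment must work from the Act's annexes and involve legal review.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = 1
        HIGH = 2
        LIMITED = 3
        MINIMAL = 4

    # Illustrative keyword hints drawn from the examples earlier in this article.
    PROHIBITED_HINTS = ("social scoring", "emotion recognition at work",
                        "untargeted facial scraping")
    HIGH_RISK_HINTS = ("credit scoring", "recruitment", "exam scoring",
                       "border control", "critical infrastructure")

    def triage(use_case: str) -> RiskTier:
        """First-pass triage of a use-case description; not a legal assessment."""
        text = use_case.lower()
        if any(h in text for h in PROHIBITED_HINTS):
            return RiskTier.UNACCEPTABLE
        if any(h in text for h in HIGH_RISK_HINTS):
            return RiskTier.HIGH
        if "chatbot" in text or "deep-fake" in text:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL  # default, pending proper review

    print(triage("CV recruitment shortlisting"))  # RiskTier.HIGH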

Establish Effective AI Governance Structures

In line with the EU AI Act's mandates, implement robust AI governance and risk management structures. This responsibility is shared across the organisation and requires a well-defined operating environment that integrates with existing enterprise governance frameworks, so that AI governance practices are embedded seamlessly.

Initiate Upskilling Initiatives and Awareness Sessions

Roll out training programs and awareness sessions to equip stakeholders with the knowledge necessary for responsible AI use and oversight. Enhancing the understanding of AI capabilities and limitations ensures your organisation effectively harnesses its benefits while mitigating potential risks.

Foster Collaboration and Knowledge Sharing

Encourage collaboration and knowledge sharing among departments involved in AI deployment, including IT, legal, compliance, and business units. Facilitate cross-functional training sessions and workshops to enhance awareness of AI risks and compliance requirements. Foster a collaborative culture where teams can share insights and best practices for mitigating AI-related risks effectively.

Implement Robust Data Privacy Measures

Prioritise data privacy by implementing robust measures to protect sensitive information used in AI systems. Ensure compliance with relevant data protection regulations, such as the General Data Protection Regulation (GDPR), and establish protocols for secure data storage, processing, and sharing. Conduct regular audits to assess data privacy practices and address any vulnerabilities proactively.

Conclusion

The EU AI Act represents a significant milestone in global AI regulation, providing a comprehensive framework to ensure transparency, accountability, and responsible deployment of artificial intelligence within the European Union. Enacted to address both the risks and opportunities associated with AI across various domains, this legislation imposes obligations on businesses operating within the EU, regardless of their location of development or deployment.

By following the key steps outlined in this article, businesses can establish a solid foundation for responsible AI deployment and ensure compliance with the EU AI Act. From developing AI exposure registers to initiating upskilling initiatives, these actionable measures empower organisations to harness the benefits of AI while minimising associated risks.

As the EU AI Act moves towards full implementation, businesses must remain vigilant and proactive in adapting to evolving regulatory requirements. By prioritising compliance and responsible AI use, organisations can not only mitigate risks but also drive innovation and competitiveness in the European Union’s internal AI market.
