
The AI Act and the role of artificial intelligence in digital transformation and sustainability

23 May 2024 | Applicable law: EU | 3 minute read

Artificial intelligence (AI) is at the core of the technological and digital revolution, with potential benefits spanning a multitude of sectors and industries. However, the unethical use of AI can have serious consequences for society and the fundamental rights of its citizens. To address these risks, the European Union has introduced the AI Act to promote the development of trustworthy, human-centred AI systems that are responsible from an environmental, social and governance perspective.

What is the AI Act? 

The AI Act is the first comprehensive legislation specifically regulating artificial intelligence. It applies to providers, deployers, importers and distributors of AI systems placed on the market or used in the EU, regardless of where they are established. The legislation sets out different rules and obligations based on an AI system's level of risk, with the aim of protecting citizens' health, safety and fundamental rights, while promoting environmental sustainability and social responsibility. 

Risk levels and obligations 

AI systems posing an unacceptable risk to health, safety or fundamental rights are prohibited. Examples of such systems include emotion recognition in the workplace and social scoring systems. 

For high-risk AI systems, such as those used in biometrics, toys, medical devices or hiring processes, the AI Act requires compliance with key requirements, including a conformity assessment, data quality standards, documentation and traceability, transparency, human oversight, and security and cybersecurity measures. 

The timing of application 

The European Parliament adopted the final text of the AI Act in March 2024, and the Act will enter into force on the twentieth day following its publication in the Official Journal of the European Union (expected during the summer). The rules will then apply in phases: within 6 months, the bans on AI systems posing an unacceptable risk take effect; within 12 months, the obligations for general-purpose AI systems apply; and within 24 to 36 months, the remaining rules of the AI Act, including the obligations for high-risk AI systems, become applicable. 

Recommended actions for businesses 

Companies should take several steps now to ensure compliance with the AI Act and bolster trust in AI technology, with a focus on environmental, social and governance responsibility. Here are some recommended actions that our firm can assist you with: 

  • Identification and Cataloguing of AI Systems: Assistance in identifying and classifying AI systems based on their level of risk.
  • Impact Assessment and Compliance: Assessment of the impact of AI systems on fundamental rights and of regulatory compliance, including the implementation of bias-free algorithms and development practices that ensure accessibility and non-discrimination.
  • Legal Risk Management: Assessment and management of the legal risks associated with AI systems, including intellectual property regulations, the GDPR and product liability, promoting solutions that minimise environmental impact.
  • Tech Due Diligence: Assistance in technological due diligence for extraordinary transactions, investment agreements and strategic contracts concerning AI.
  • Protection of Intellectual Property Rights: Assistance in the protection and enhancement of intellectual property rights relating to AI systems, software and algorithms.
  • Drafting and Negotiation of Contracts: Support in the drafting and negotiation of contracts for the development, licensing, acquisition and distribution of AI systems.
  • Data Protection Impact Assessment: Assistance in data protection impact assessment, governance and data security measures in case of use of AI for the processing of personal data.
  • Transparent Governance: Adoption of transparency and accountability policies, including traceability of automated decisions to ensure that they can be understood or duly challenged.
  • Training and Company Policies: Training and drafting of policies for the ethical and responsible use of AI systems within companies and organisations. 

Adopting these practices will not only ensure regulatory compliance and the protection of your AI investments, but also help build trust in AI technology, which is essential for its effective and responsible development and use.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.
