Last week, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act (the “AI Act”) with 499 votes in favour, 28 against and 93 abstentions, ahead of talks with EU Member States on the final shape of the law. Amid warnings about the dangers of artificial intelligence (“AI”) from many around the world, most notably Elon Musk,[1] the European Parliament’s vote received a lot of attention.
We take this opportunity to shed light on the AI Act and what the take-aways for Malaysian lawmakers could be:
What is the meaning of “AI” according to the AI Act?
We can only talk about the regulation of AI once we all agree on what AI means. Numerous definitions exist; the one agreed on by the EU Parliament is as follows: “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”[2]
This definition largely corresponds with that of the Organisation for Economic Co-operation and Development (OECD).[3] It is more restrictive than the EU Commission’s draft of the AI Act[4] from approximately two years ago, and could be a reflection of the recent warnings about the dangers of AI.
Why is there a need for regulation of AI in the first place?
Arguably, human rights are not protected as strongly anywhere else in the world as they are in Europe. This is greatly reflected in the AI Act. In the words of the EU Parliament, “[t]he rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing.”[5]
The AI Act thus follows a risk-based approach, with obligations depending on the risks an AI system poses to its users as well as to third parties.
What does the risk-based approach in the AI Act look like in practice?
The rules follow a risk-based approach and establish obligations for providers and those deploying AI systems depending on the level of risk the AI can generate.[6] The applicable three-tier risk model contains the following categories: prohibited AI systems, whose practices pose an unacceptable risk; high-risk AI systems, which are subject to requirements and obligations for the relevant operators; and certain AI systems that are subject to transparency obligations.
In addition, the AI Act does not cover systems that pose a minimal risk or no risk at all. The European institutions see no reason to regulate these systems. Such systems – which would include AI in video games – therefore fall outside the scope of the AI Act.
When will the AI Act enter into force?
Although the position adopted by the European Parliament was celebrated as a milestone, it will take a long time before binding rules on AI apply across the European Union. The European institutions will now enter into negotiations to reach an agreement on the final text.
The AI Act is a regulation and, once passed, will be directly applicable in all EU Member States in accordance with Art 288(2) sentence 2 of the Treaty on the Functioning of the European Union. However, the AI Act provides for a transitional period, which currently stands at 24 months (Article 85(2) of the AI Act). This means that, as things stand, even if the European institutions were to act very swiftly and reach a conclusion soon, we will not see the AI Act apply before late 2025.
What can companies already do today?
Companies should already be preparing today for the impact of the AI Act. This applies in particular where AI systems will be used in essential business operations or where considerable investment costs are associated with the introduction of AI systems. If it later transpires, for any reason, that an AI system is categorised as high-risk or even prohibited, its use might only be permitted to a limited extent, or not at all, from the date of application of the AI Act. Furthermore, additional costs for fulfilling the requirements of the AI Act could arise.
Even without the AI Act, companies should consider the legal implications of using an AI system. These include in particular:
Industrial property rights: Many AI systems require large amounts of training data, which are regularly obtained from publicly accessible sources. It is not always guaranteed that the corresponding use of that training data is permissible. In addition, companies should bear in mind that, at least in Germany, no intellectual property rights can be obtained for the results produced by generative AI systems, which should be taken into account depending on the intended use.
Confidentiality: Insofar as AI systems require the user to provide information (e.g. by entering prompts), the protection of that data must either be ensured by technical and organisational measures, or the use of the AI systems must be prohibited (e.g. by an instruction to employees), at least to the extent that no confidential information or business secrets are processed by the AI.
Data protection: If personal data are processed by AI systems, the requirements of the GDPR and local Member State laws must be complied with. In addition to determining the specific roles of the parties involved under the GDPR and complying with the associated obligations (such as ensuring appropriate legal bases for all processing operations and agreeing on and implementing appropriate technical and organisational measures), this also includes carrying out data protection impact assessments or, where personal data is transferred to a third country, a transfer impact assessment. Consideration should also be given at an early stage to how the rights of data subjects will be respected. Since the AI Act contains only isolated rules on data protection (cf. e.g. Art. 10 AI Act), the interplay between the GDPR and the AI Act will play an important role in the future, for example in the case of a “double commitment”.
Terms of use: Before using AI systems, the underlying terms of use should be carefully checked, for example with regard to the details of performance, liability, data protection documentation or confidentiality. Especially in the case of publicly accessible AI systems, where the company does not (yet) conclude a contract with the AI provider, this aspect is often forgotten, and companies end up using systems whose contractual provisions are legally risky and, in many cases, contradict the company’s internal guidelines.
Take-aways for Malaysian lawmakers
Clearly, the age of AI is fast approaching, especially considering the EU Parliament’s adoption of this negotiating position. While AI news has dominated Malaysian headlines in 2023,[7] Malaysia has yet to pass any legislation to regulate AI (though the Malaysian Government is reportedly developing a framework for AI regulation).[8] There are several take-aways for the Malaysian Government here that would be useful for drafting any future legislation, as follows:
[6] In the words of the AI Act: “It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.” (cf. Amendment 36).