Regulating the risk of artificial intelligence – take-aways from the European Parliament’s negotiating position for Malaysia?

Last week, the European Parliament adopted its negotiating position on the Artificial Intelligence Act (the “AI Act”) with 499 votes in favour, 28 against and 93 abstentions, ahead of talks with EU Member States on the final shape of the law. Amid warnings about the dangers of artificial intelligence (“AI”) from many around the world, most notably Elon Musk,1 the European Parliament’s vote received a lot of attention.
 
We take this opportunity to shed light on the AI Act and what the take-aways for Malaysian lawmakers could be:
 
What is the meaning of “AI” according to the AI Act?
We can only talk about the regulation of AI once we agree on what AI means. Numerous different definitions exist; the one agreed on by the EU Parliament is as follows: “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”2
 
This definition corresponds largely with that of the Organisation for Economic Co-operation and Development (OECD).3 It is more restrictive than the definition in the EU Commission’s draft of the AI Act from approximately two years ago,4 which could be a reflection of the recent warnings about the dangers of AI.
 
Why is there a need for regulation of AI in the first place?
Arguably, nowhere in the world are human rights protected as strongly as they are in Europe. This is clearly reflected in the AI Act. To use the words of the EU Parliament, “[t]he rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing.”5
 
The AI Act thus follows a risk-based approach, calibrated to the risks an AI system poses to its users as well as to third parties.
 
What does the risk-based approach in the AI Act look like in practice?
The rules establish obligations for providers and those deploying AI systems, depending on the level of risk the AI can generate.6 The applicable three-tier risk model comprises the following categories:
  1. unacceptable risk, which includes, among others, social scoring, real-time biometric identification systems in public places and harmful behavioural manipulation. AI systems in this category are prohibited (Art. 5 of the AI Act);
  2. high risk, which, as the name indicates, covers systems posing a high level of risk. These are acceptable, but only permitted when they meet various conditions. High-risk systems are thus subject to extensive regulation and far-reaching obligations under the AI Act, as can be seen from the fact that they are dealt with in 46 provisions (Art. 6 through 51 of the AI Act);
  3. limited risk, which covers systems that pose an acceptable level of risk and are thus permitted to interact with humans directly, as would be the case with a chatbot. These systems are permitted under Art. 52 of the AI Act subject to compliance with certain transparency obligations (i.e. end-users must be made aware that they are interacting with a machine).
The AI Act does not cover systems that pose a minimal risk or no risk at all. The European institutions see no reason to regulate these systems. Therefore, these systems – which would include AI in video games – are not covered by the scope of the AI Act.
 
When will the AI Act enter into force?
Although the adopted position by the European Parliament was celebrated as a milestone, it will take a very long time before binding rules on AI apply across the European Union. The European institutions will now enter into negotiations to reach an agreement on the final text.
 
The AI Act is a regulation and, once passed, will be directly applicable in all EU Member States pursuant to Art. 288(2) sentence 2 of the Treaty on the Functioning of the European Union. However, the AI Act provides for a transitional period, which currently stands at 24 months (Art. 85(2) of the AI Act). This means that, as things stand now, even if the European institutions were to act very swiftly and reach a conclusion soon, we will not see the AI Act apply before late 2025.
 
What can companies already do today?
Companies should already prepare today for the impact of the AI Act. This applies in particular when AI systems will be used in essential business operations or when considerable investment costs are associated with their introduction. If it later transpires, for any reason, that an AI system falls into the high-risk or even prohibited category, its use might be permitted only to a limited extent, or not at all, from the date of application of the AI Act. Furthermore, additional costs for fulfilling the requirements of the AI Act could arise.
Even without the AI Act, companies should consider the legal implications of using an AI system. These include in particular:

Intellectual property rights: Many AI systems require large amounts of training data, which are regularly obtained from publicly accessible sources. It is not always guaranteed that the corresponding use of this training data is permissible. In addition, companies should bear in mind that, at least in Germany, no intellectual property rights can be obtained for the results produced by generative AI systems, which should be considered depending on the intended use.

Confidentiality: Insofar as AI systems require the user to provide information (e.g. by entering prompts), the protection of that data must either be ensured by technical and organisational measures, or the use of the AI systems must be prohibited (e.g. by an instruction to employees), at least to the extent that confidential information or business secrets would otherwise be processed by the AI.

Data protection: If personal data are processed by AI systems, the requirements of the GDPR and local Member State laws must be complied with. In addition to determining the specific roles of the parties involved in terms of the GDPR and complying with the associated obligations (such as ensuring appropriate legal bases for all processing operations and agreeing on and implementing appropriate technical and organisational measures), this also includes carrying out data protection impact assessments or, when personal data is being transferred to a third country, a transfer impact assessment. In addition, consideration should be given at an early stage to how the rights of data subjects will be respected. Since the AI Act only contains isolated rules on data protection (cf. e.g. Art. 10 AI Act), the interplay of the GDPR and the AI Act will play an important role in the future, for example in the case of a “double commitment”.

Terms of use: Before using AI systems, the underlying terms of use should be carefully checked, for example with regard to the details of performance, liability, data protection documentation or confidentiality. Especially in the case of publicly accessible AI systems, where the company does not (yet) have to conclude a contract with the AI provider, this aspect is often overlooked, and companies use systems whose contractual provisions are legally risky and, in many cases, contradict the company’s internal guidelines.

Take-aways for Malaysian lawmakers
 
Clearly, the age of AI is fast approaching, especially considering the EU Parliament’s adoption of this negotiating position. While AI news has dominated Malaysian headlines in 2023,7 Malaysia has yet to pass any legislation to regulate AI (though the Malaysian Government is reportedly developing a framework for AI regulation).8 There are several take-aways for the Malaysian Government here that would be useful when drafting any future legislation, as follows:
 
  1. Clearly define AI – The first thing any regulation concerning AI must do is clearly define the scope of the technology it is aimed at regulating. This provides clarity to stakeholders and users of the technology;

  2. Develop a system of categorisation for AI – Obviously, not all AI tools or products are alike, and there can be a great degree of variance in their use-cases. The EU uses a risk-based categorisation system, as outlined above, but this is just one way to do so. Malaysia need not adopt a similar system. Instead, Malaysia can set out general requirements for all AI systems, with further specific requirements for common AI systems, such as chatbots or image-generation tools;

  3. Regulate information-handling – Malaysia must ensure that all data and information processed by AI systems is subject to the Personal Data Protection Act 2010, to ensure that potentially sensitive information is not retained by the AI and is processed safely;

  4. Be flexible – As AI is a fast-evolving field, any legislation aiming to regulate it must be flexible in scope and application, to ensure the legislation continues to be effective as the field continues to evolve and expand; and

  5. Cultivate growth of the AI industry rather than stifle its development – While the importance of regulation cannot be overstated, regulation should not be so extensive as to stifle the growth of the AI industry in Malaysia. Given the inevitable advent of AI, Malaysia ought to embrace AI development and aim to foster advances in the field.
Skrine regularly advises on cutting-edge technological developments in Malaysia, including developments in AI. For further information, please contact the authors of this note.

For further information on this topic please contact Dr. Harald Sippel, Head of Skrine’s European Desk (harald@skrine.com, +60 18 211 4958) or Vishnu Vijandran, Associate (vishnu.v@skrine.com, +60 12-677 4794).
 

1 For instance, see The Star, Elon Musk repeats call for artificial intelligence regulation, 16th June 2023, available at www.thestar.com.my/tech/tech-news/2023/06/16/elon-musk-repeats-call-for-artificial-intelligence-regulation (last accessed on 2023-06-17).
2 Article 3(1) first point of the proposed text. For details, see European Parliament, Artificial Intelligence Act – Texts Adopted, available at www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf (last accessed on 2023-06-17).
3 For details, see OECD, Recommendation of the Council on Artificial Intelligence, available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (last accessed on 2023-06-17).
4 For details, see European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21st April 2021, available at https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF (last accessed on 2023-06-17).
5 European Parliament, MEPs ready to negotiate first-ever rules for safe and transparent AI, 14th June 2023, available at www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai (last accessed on 2023-06-17).
6 In the words of the AI Act: “It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.” (cf. Amendment 36).
7 See Free Malaysia Today, How artificial intelligence is dividing the world of work, 30th June 2023, available at www.freemalaysiatoday.com/category/leisure/2023/06/30/how-artificial-intelligence-is-dividing-the-world-of-work and Human Resources Director, HR minister: Malaysia needs to retrain 50% of workforce amid AI rise, 28th June 2023, available at www.hcamag.com/asia/specialisation/learning-development/hr-minister-malaysia-needs-to-retrain-50-of-workforce-amid-ai-rise/450731 (last accessed on 2023-06-30) for example.
8 Malay Mail, Putrajaya working towards framework to regulate AI, 7th June 2023, available at www.malaymail.com/news/malaysia/2023/06/07/putrajaya-working-towards-framework-to-regulate-ai/73129 (last accessed on 2023-06-29).

This alert contains general information only. It does not constitute legal advice nor an expression of legal opinion and should not be relied upon as such. For further information, kindly contact skrine@skrine.com.