Trustworthy AI – the European model of artificial intelligence development

The dynamic development of artificial intelligence (AI) technology and its growing impact on social, economic and political life led the European Union to attempt to create the first comprehensive legal framework regulating this field. The result is Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), hereinafter: the AI Act. The AI Act's goals can be described as an attempt to balance innovation with responsibility and to reconcile the protection of fundamental rights with promoting the competitiveness of the European market. The AI Act draws boundaries where, in the view of European legislators, technology can threaten people. The European Union wants to be a world leader in ethical standards, promoting trustworthy artificial intelligence. It is worth noting that the European Union, the USA and China are currently pursuing radically different development and regulatory models for AI.

The European vision rests on a hard regulatory approach which, grounded in values enshrined primarily in the EU Charter of Fundamental Rights, gives primacy to the protection of the individual and the public interest, requires the classification of AI systems, imposes numerous requirements on them (including those concerning accuracy, robustness and cybersecurity), and prohibits a series of practices such as social scoring or subliminal manipulation. The United States, although increasingly aware of the threats AI may pose, still adheres to the principle of minimal regulation and maximum innovation, while China, intensively supporting AI development at the state level, sees in it a tool for realising the national interest and refining mechanisms of social control. The coexistence of such different models raises the question of whether global harmonisation of AI standards is possible and what the future international digital order will look like: will law, the market or power prevail?

Building on the assumptions described above, the AI Act prohibits specific practices in the field of artificial intelligence, listing in Article 5 those considered unacceptable in the European Union because they violate fundamental rights and pose a high risk to society. These include, for example:

  • subliminal, manipulative or misleading techniques (an example could be AI-based advertising systems using subconscious visual or audio stimuli capable of influencing consumer decisions without their knowledge, leading to detrimental purchasing choices),
  • social scoring systems that award citizens points for "good behaviour" (e.g. compliance with regulations) and restrict access to public services for people with a "low score",
  • assessing the risk of committing a crime solely on the basis of profiling, i.e. systems that would predict a propensity to commit crime from demographic or social characteristics, without specific evidence,
  • inferring emotions in the workplace and in educational institutions, e.g. systems monitoring the emotions of call-centre employees via voice or facial expressions to assess their performance.

At the same time, for high-risk AI systems, described in Article 6 of and Annex III to the AI Act, i.e. those that, due to their potential impact on fundamental rights, health, safety or civil liberties, are subject to the most stringent regulatory requirements, the AI Act introduces numerous obligations. A key provision is Article 15 of the AI Act, under which high-risk AI systems must be designed and developed so that they achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their life cycle. They must therefore be:

  • robust, i.e. resilient to errors, faults or inconsistencies; thanks to technical and organisational measures, such as redundancy or transitioning the system to a safe state (so-called "fail-safe"), they must be able to function properly despite disturbances and failures,
  • accurate, i.e. the algorithm should maintain a high level of accuracy in its results, appropriate to its purpose and context of use,
  • resilient to cyberattacks, by implementing measures preventing attacks that manipulate the training data set (data poisoning) or pre-trained components used in training (model poisoning), attacks using input data designed to cause the AI model to err (adversarial examples or model evasion), and attacks on the model itself, as well as measures for detecting these threats, responding to them, resolving them and keeping them under control.
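To make the adversarial-example threat mentioned above concrete, the following minimal sketch (not part of the AI Act itself, just an illustration) shows how a tiny, targeted perturbation of the input can flip the decision of a toy linear classifier. The classifier, its weights and the perturbation step are all invented for this example:

```python
import numpy as np

# Toy linear classifier: score = w . x + b; predicted class is 1 if score > 0.
# The weights and bias below are arbitrary illustrative values.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.2, -0.4, 0.3])   # benign input, classified as class 1

# FGSM-style perturbation: step each feature against the gradient of the
# score (which for a linear model is simply w) to push the score below zero.
eps = 1.2
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the decision
```

In real systems the model is far more complex, but the principle is the same: inputs crafted along the model's gradient can change its output, which is why Article 15 requires technical measures to detect and withstand such manipulation.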

In practice, this means that manufacturers and suppliers of AI systems must undertake a number of activities, including pre-deployment testing, the introduction of audit mechanisms, monitoring and secure management of training and test data, as well as keeping comprehensive technical documentation.

Whether the balance between innovation and regulation is maintained, and whether overly strict requirements will ultimately limit the competitiveness of the European AI sector, will depend on the cost of implementing the regulatory requirements and on the ability to correctly interpret the general standards the AI Act employs.

As Jerry Kaplan, an American entrepreneur, computer scientist, AI expert and futurist, forecasts in his book "Generative AI", we currently stand on the verge of a new Renaissance, a great cultural shift centred on the machine, which means that "in the future, when we look for the most professional, objective and trusted advice, we will turn to machines, not people." In his opinion, the AI revolution will be felt above all in healthcare, the legal system, education, software engineering and the creative professions. In line with its mission, the Set Security-Energy-Technology Foundation will support innovation consistent with security requirements, animating debate and cross-sector cooperation and providing professional expertise.

Authors:

Dr. Karolina Grenda

Member of the Foundation Council
