Secure and trustworthy – the European model of artificial intelligence development

The dynamic development of artificial intelligence (AI) technology and its growing impact on social, economic, and political life has led the European Union to attempt to create the world's first comprehensive legal framework regulating this field. This resulted in Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139, (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (the Artificial Intelligence Act), hereinafter referred to as the AI Act. The goals of the AI Act can be described as an attempt to balance innovation with responsibility, and to reconcile the protection of fundamental rights with the promotion of the competitiveness of the European market. The AI Act thus sets boundaries where, in the judgment of European legislators, technology may pose a threat to humans. The European Union aims to be a global leader in ethical standards that promote trustworthy artificial intelligence. It is worth noting that the European Union, the United States, and China are currently adopting radically different development and regulatory models for AI.

The European vision rests on a strict regulatory approach that, grounded in the values enshrined primarily in the EU Charter of Fundamental Rights, gives primacy to the protection of the individual and the public interest. It mandates the classification of AI systems, imposes numerous requirements, including those related to accuracy, robustness, and cybersecurity, and prohibits a number of practices, such as social scoring and manipulation. The United States, although increasingly aware of the risks associated with AI, still adheres to the principle of minimal regulation and maximum innovation, while China, though intensively supporting AI development at the state level, treats it as a tool for pursuing national interests and strengthening mechanisms of social control. The coexistence of these divergent models raises questions about the prospects for global harmonization of AI standards and the future of the international digital order – will law, the market, or force prevail?

Based on the assumptions described above, the AI Act prohibits certain artificial intelligence practices, listing in Article 5 those deemed unacceptable in the European Union because they violate fundamental rights and pose a high risk to society. These include, for example:

  • subliminal, manipulative or misleading techniques (an example would be AI-based advertising systems using unconscious visual or audio stimuli that could influence consumers' decisions without their knowledge, leading to unfavorable purchasing decisions),
  • social scoring systems, which award points to citizens for "good behavior" (e.g. compliance with regulations) and limit access to public services to people with a "low score",
  • assessing the risk of committing a crime solely on the basis of profiling, i.e. systems that would predict the propensity to commit a crime based on demographic or social characteristics, without specific evidence,
  • inferring emotions in the workplace and in educational institutions (for example, systems that monitor the emotions of call center employees through voice or facial expression analysis to assess their performance are prohibited).

At the same time, the AI Act imposes numerous obligations on high-risk AI systems, as defined in Article 6 and Annex III of the AI Act: those that, due to their potential impact on fundamental rights, health, safety, or civil liberties, are subject to the most stringent regulatory requirements. Of fundamental importance here is Article 15 of the AI Act, which stipulates that high-risk AI systems must be designed and developed to achieve appropriate levels of accuracy, robustness, and cybersecurity, and to perform consistently in these respects throughout their lifecycle. Therefore, they must be:

  • robust, i.e. resistant to errors, faults, and inconsistencies; through technical and organizational measures such as redundancy or transition of the system to a safe state (so-called "fail-safe"), they must be able to continue functioning correctly despite disruptions and failures,
  • accurate, i.e. the algorithm should maintain a level of accuracy of results appropriate to its purpose and context of use,
  • resilient to cyber threats, by implementing measures to prevent, detect, respond to, resolve, and control attacks against training datasets (data poisoning), against pre-trained model components (model poisoning), inputs crafted to cause errors in the AI model (adversarial examples, or model evasion), confidentiality attacks, and model flaws.
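
The "fail-safe" idea mentioned above – transitioning to a safe state when the system misbehaves – can be illustrated with a minimal sketch. This is not a pattern prescribed by the AI Act itself; all names, thresholds, and the "refer to a human" fallback are illustrative assumptions:

```python
# Minimal fail-safe sketch: if the model raises an error or returns an
# implausible score, the system falls back to a safe default (here, deferring
# to a human reviewer) instead of propagating a faulty automated decision.
# All names and thresholds are illustrative, not mandated by the AI Act.

SAFE_DEFAULT = {"decision": "refer_to_human", "score": None}

def fail_safe_predict(model_fn, features, low=0.0, high=1.0):
    """Call model_fn on features; return SAFE_DEFAULT on failure or bad output."""
    try:
        score = model_fn(features)
    except Exception:
        return SAFE_DEFAULT  # fault: transition to the safe state
    if not isinstance(score, (int, float)) or not (low <= score <= high):
        return SAFE_DEFAULT  # inconsistency: score outside the plausible range
    return {"decision": "accept" if score >= 0.5 else "reject", "score": score}
```

The design choice here is that any deviation – an exception or an out-of-range score – degrades to a conservative default rather than an automated decision, which is one way to operationalize the "function correctly despite disruptions and failures" requirement.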

In practice, this means that producers and suppliers of AI systems must undertake a range of activities, including pre-implementation testing, introducing audit mechanisms, monitoring and secure management of training and test data, and maintaining comprehensive technical documentation.
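
One of the audit mechanisms mentioned above can be sketched as a tamper-evident decision log: each record embeds a hash of the previous one, so an auditor can detect after-the-fact alteration or deletion. This is an illustrative sketch, not a format required by the AI Act; the record fields are assumptions:

```python
import hashlib
import json
import time

def append_record(log, event):
    """Append an audit record whose hash chains to the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    payload = json.dumps(
        {"ts": record["ts"], "event": record["event"], "prev": record["prev"]},
        sort_keys=True,
    ).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return log

def verify_chain(log):
    """Return True if no record in the log was altered or removed."""
    prev = "0" * 64
    for rec in log:
        payload = json.dumps(
            {"ts": rec["ts"], "event": rec["event"], "prev": prev},
            sort_keys=True,
        ).encode()
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

In a real deployment the log would also be persisted and access-controlled; the hash chain only makes tampering detectable, it does not prevent it.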

The cost of implementing the regulatory requirements, as well as the feasibility of correctly interpreting the general standards used in the AI Act, will determine whether the balance between innovation and regulation is maintained and whether overly stringent requirements ultimately limit the competitiveness of the European AI sector.

As Jerry Kaplan, an American entrepreneur, computer scientist, AI expert, and futurologist, predicts in his book "Generative AI," we are currently on the verge of a new renaissance, a massive cultural shift toward machines that will mean that "in the future, when we seek the most expert, objective, and trusted advice, we will turn to machines, not humans." In his opinion, the AI revolution will primarily affect the healthcare system, the legal system, education, software engineering, and the creative professions. In line with its mission, the SET Security-Energy-Technology Foundation will support innovation that meets security requirements, fostering debate and cross-sector collaboration and providing professional expertise.

Authors:

DR. KAROLINA GRENDA

Member of the Foundation Council
