The first law regulating Artificial Intelligence: a guide to the EU AI Act

The rapid advance of Artificial Intelligence (AI) suitable for everyday use marks a decisive moment in the history of humanity. Many of our familiar processes need to be rethought and redefined. The Bank of America even named AI “the new electricity”. We, as human beings, have managed to develop a technology capable of understanding our language, capturing our ways of communicating, learning from our data, and generating capabilities that were previously imaginable only in science fiction.

This industrial revolution is taking place in a world interconnected through the internet, which facilitates the exchange of new ideas and improves mechanisms for cooperation. The result is an unprecedented environment for innovation, with a momentum that is second to none. We live in an exciting era that we, as a company, are actively shaping.

For this reason, it is essential to critically question the change processes that have been initiated: What does the world we are building look like? What influence are we allowing AI to have on our processes? How will we shape the relationship between AI and the people of tomorrow? The answers to these and many other questions will shape our lives in the coming decades. 

Act on Artificial Intelligence

Recently, the European Union presented an innovative proposal called the EU AI Act (AIA) to lay the foundations for the development and deployment of reliable AI systems in the EU. These new rules are in line with Nortal’s vision for development and deployment: Artificial Intelligence should be developed and implemented responsibly and ethically.

Key messages and their impact on AI users and developers

A Harmonised Legal Framework

The proposal aims to create a harmonised legal framework for developing, marketing, and using AI products and services across the EU. For clients, this means a standardised approach to AI usage regulations. Compliance promotes trust and reliability in the AI solutions used.

Safety and Regulations

The AIA ensures that AI systems and solutions marketed in the EU are safe and compliant with the EU’s legal framework. This strengthens the security of the AI applications themselves and creates legal certainty for their developers and users. Whether these regulations encourage or discourage innovation and investment in secure AI is the subject of ongoing debate.

AI Governance and Fundamental Rights

Improving governance and effectively implementing EU law on fundamental rights and safety requirements for AI systems are critical goals of the proposal. Clients who use AI are therefore obliged to apply the ethical and responsible AI practices described in the AIA.

Definition of AI Systems

Article 3(1) of the AIA defines an artificial intelligence system as software that is developed using specific techniques and approaches and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.

This definition covers a variety of software-based technologies, including machine learning, logic- and knowledge-based systems, and statistical approaches. It gives AI users clarity about the risk level of the AI solution they choose and about the regulations and legal responsibilities associated with its field of application.

The proposal recognises the complexity of artificial intelligence and therefore pairs its legal definition of “AI systems” with a classification of their impact into four risk levels.

Risk-Based Approach

Understanding the unique risks associated with AI, the AIA follows a risk-based approach and categorises AI systems by risk level. For clients who use AI, this approach ensures that regulatory interventions correspond to the specific risks that arise from their applications.

  1. Unacceptable Risk: Prohibits harmful AI practices that pose clear threats to safety and rights, such as subliminal manipulative techniques or the exploitation of vulnerable groups.
  2. High Risk: Regulates high-risk AI systems that adversely affect safety or fundamental rights, including those used as safety components in various products and in certain critical areas such as law enforcement and education.
  3. Limited Risk: AI solutions in this risk group are subject to a more flexible and straightforward regulatory approach. This category includes solutions that interact with people, such as chatbots, the automatic generation of document summaries or reports, and tools for automatic translation in international communication.
  4. Minimal Risk: AI systems that pose low or minimal risk are subject to only minimal regulatory requirements. Clients in this category must adhere to the fewest regulations, which promotes innovation and development.
[Figure: the risk-based approach of the EU AI Act]
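
To make the categorisation above more concrete, here is a minimal, purely illustrative sketch in Python of how an organisation might triage its own AI use cases against the four risk tiers. The use-case labels, their tier assignments, and the default-to-high-risk rule are assumptions made for demonstration only; an actual classification must follow the Act’s annexes and qualified legal review.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # lighter, transparency-oriented obligations
    MINIMAL = "minimal"            # few or no additional obligations


# Hypothetical mapping of internal use-case labels to risk tiers.
# Labels and assignments are illustrative only and are not taken from the Act.
USE_CASE_TIERS: dict[str, RiskTier] = {
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
    "law_enforcement_scoring": RiskTier.HIGH,
    "exam_grading": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "document_summarisation": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}


def triage(use_case: str) -> RiskTier:
    """Return the assumed tier for a use case, defaulting to HIGH so that
    anything unclassified is reviewed rather than waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)


if __name__ == "__main__":
    for case in ("customer_chatbot", "exam_grading", "internal_forecasting"):
        print(f"{case}: {triage(case).value}")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a review rather than silently treating an unclassified system as low risk.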

Naturally, this new regulatory framework raises many questions, in particular about how those who develop artificial intelligence, and those who want to use it as (partial) solutions for their target groups, should approach the development of new AI solutions.

Reading the European Union’s definitions carefully, it becomes clear that AI innovations are significantly shaped by the risk level to which they and/or their use cases are assigned. Among other things, it matters how your AI solution will communicate with people, whether it is used only internally or is accessible to everyone on the internet, and what costs it will generate.

Ethics, transparency, trust, and risk mitigation are concepts that we have always applied in our work with highly sensitive data and in developing and implementing solutions. This type of development depends not only on technical capabilities but also on how these projects are conceived and planned, and on evaluating the impact the solution will have at an early stage.

Does your current or planned use of AI solutions meet the requirements of the EU AI Act? What regulations do you need to consider? How can these be efficiently integrated into your business models and processes?

Contact us to learn how your company can safely implement the EU AI Act.
