The first law regulating Artificial Intelligence. A guide to the EU AI Act

The rapid advance of Artificial Intelligence (AI) suitable for everyday use marks a decisive moment in the history of humanity. Many of our familiar processes need to be rethought and redefined. Bank of America even called AI “the new electricity”. We, as human beings, have managed to develop a technology capable of understanding our language, capturing the ways we communicate, learning from our data, and generating capabilities that were previously imaginable only in science fiction.

This industrial revolution is taking place in a world interconnected through the internet, which facilitates the generation of new ideas and improves mechanisms for cooperation. The result is an unprecedented environment for innovation, with momentum that is second to none. We live in an exciting era that we, as a company, are actively helping to shape.

For this reason, it is essential to critically question the change processes that have been initiated: What does the world we are building look like? What influence are we allowing AI to have on our processes? How will we shape the relationship between AI and the people of tomorrow? The answers to these and many other questions will shape our lives in the coming decades. 

The EU AI Act (Act on Artificial Intelligence)

Recently, the European Union presented an innovative proposal, the EU AI Act (AIA), to lay the foundations for the development and deployment of reliable AI systems in the EU. These new rules are in line with Nortal’s vision: Artificial Intelligence should be developed and deployed responsibly and ethically.

Key messages and their impact on AI users and developers

A Harmonised Legal Framework

The proposal aims to create a harmonised legal framework for developing, marketing, and using AI products and services across the EU. For clients, this means a single, standardised set of rules governing the use of AI. Compliance fosters trust in the AI solutions used and strengthens their reliability.

Safety and Regulations

The AIA ensures that AI systems and solutions placed on the EU market are safe and comply with the EU’s legal framework. This strengthens the security of the AI applications themselves and creates legal certainty for their developers and users. The extent to which these rules encourage or hinder innovation and investment in secure AI is the subject of ongoing debate.

AI Governance and Fundamental Rights

Improving governance and effectively enforcing EU law on fundamental rights and the safety requirements that apply to AI systems are central aims of the proposal. Clients who use AI are therefore obliged to follow the ethical and responsible AI practices described in the AIA.

Definition of AI Systems

Article 3(1) of the AIA defines an artificial intelligence system as software that is developed with (specific) techniques and approaches and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environment it interacts with.

This definition covers a variety of software-based technologies, including machine learning, logic- and knowledge-based systems, and statistical approaches. It gives AI users clarity about the risk level of their preferred AI solution and about the regulations and legal responsibilities associated with its scope of application.

The proposal recognises the complexity of artificial intelligence and therefore pairs the legal definition of “AI systems” with a classification of their impact into four risk levels.
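To make the definition more tangible, the sketch below shows one way a team might record these elements in an internal inventory of AI systems, for example ahead of a compliance review. It is a minimal illustration only: the field names and the example entry are our own assumptions and are not prescribed by the AIA.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Illustrative inventory entry mirroring the elements of Article 3(1) AIA.

    The field names are assumptions for this sketch; the regulation does not
    prescribe any particular data structure.
    """
    name: str
    techniques: list[str]          # e.g. machine learning, logic/knowledge-based, statistical
    human_defined_objectives: str  # the purpose the system was built for
    outputs: list[str]             # content, predictions, recommendations, decisions
    affected_environment: str      # where those outputs take effect

# Hypothetical example entry: a customer-support chatbot.
chatbot = AISystemRecord(
    name="Support chatbot",
    techniques=["machine learning"],
    human_defined_objectives="Answer customer questions about our products",
    outputs=["content", "recommendations"],
    affected_environment="Public-facing company website",
)
print(chatbot)
```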

Risk-Based Approach

Understanding the unique risks associated with AI, the AIA follows a risk-based approach and categorises AI systems by risk level. For clients who use AI, this approach ensures that regulatory obligations correspond to the specific risks arising from their applications (a minimal classification sketch follows the list and figure below).

  1. Unacceptable Risk: Prohibits harmful AI practices that pose clear threats to safety and rights, such as subliminal manipulative techniques or the exploitation of vulnerable groups.
  2. High Risk: Regulates high-risk AI systems that adversely affect safety or fundamental rights, including those used as safety components in various products and in certain critical areas such as law enforcement and education.
  3. Limited Risk: AI solutions in this group are subject to a lighter, more flexible regulatory approach. The category includes solutions that interact with people, such as chatbots, automatic generation of document summaries or reports, and tools for automatic translation in international communication.
  4. Low or Minimal Risk: AI systems that pose little or no risk face only minimal regulatory requirements. Clients in this category must adhere to the fewest regulations, which promotes innovation and development.
[Figure: the four risk levels of the EU AI Act’s risk-based approach]
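As a purely illustrative aid, the following minimal sketch shows how an organisation might record the risk tier of its AI use cases in an internal register. The tier names follow the four levels above; the example mappings are assumptions drawn from the examples in this article and do not constitute a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the AIA's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # lighter obligations, e.g. transparency
    MINIMAL = "minimal"            # few or no additional requirements

# Hypothetical internal register; a real classification requires legal review.
use_case_register = {
    "Subliminal behavioural manipulation": RiskTier.UNACCEPTABLE,
    "CV screening for hiring decisions": RiskTier.HIGH,
    "Customer-facing chatbot": RiskTier.LIMITED,
    "Internal spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in use_case_register.items():
    print(f"{use_case}: {tier.value} risk")
```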

Naturally, this new regulatory framework raises many questions, in particular about how those who develop artificial intelligence, and those who want to use it as a (partial) solution for their target groups, should approach the development of new AI solutions.

Reading the European Union’s definitions carefully, it becomes clear that AI innovations are significantly influenced by the risk level to which they and/or their use cases are assigned. Among other things, it matters how your AI solution communicates with people, whether it is used only internally or is accessible to everyone on the internet, and what costs it will generate.

Ethics, transparency, trust, and risk mitigation are principles we have always applied in our work with highly sensitive data and in developing and implementing solutions. This kind of development depends not only on technical capabilities but also on how projects are conceived and planned, and on evaluating the impact a solution will have at an early stage.

Does your current or planned use of AI solutions meet the requirements of the EU AI Act? What regulations do you need to consider? How can these be efficiently integrated into your business models and processes?

Contact us to learn how your company can safely implement the EU AI Act.
