Welcome to the AI Ethica wind blog



The AI Ethica wind blog - the wind that brings you news and thoughts on emerging technologies and ethics! We will start with a series on the new EU artificial intelligence regulation, published at the end of April 2021. This regulation will shape AI technologies and deployments in Europe for years to come, and probably globally as well. In this blog we will continuously present thoughts and facts about the new regulation.
The new EU artificial intelligence regulation is the first comprehensive law on artificial intelligence worldwide, and this is without a doubt a good thing. The regulation is still a proposal and must be adopted by the member states of the EU, which may take several years.

Published at the end of April 2021, the proposal will be a point of discussion in the AI industry generally and in AI ethics specifically. Is the proposal comprehensive and sufficient? Is it practical enough not to burden AI SMEs and startups with additional costs that would hinder AI innovation?

No regulation is perfect, and there will always be cases that are not covered satisfactorily. Yet if the regulation is able to provide more trust in AI for consumers, citizens and regulators on the one hand, without stifling innovation and progress in the industry on the other, it should be adopted by the member states.

What is the new EU regulation for artificial intelligence?

On a fundamental level, the regulation is founded on a risk-based approach to AI applications. The proposal identifies high-risk areas for AI applications, for example critical infrastructure (transport), education and vocational training, safety components of products, employment, public services, law enforcement, migration management and the administration of justice.

In these areas new AI applications will almost certainly have to undergo a compliance process and be registered in an EU database for potential control and redress. Strict obligations will apply, for example adequate risk assessment and mitigation systems, high-quality datasets, logging of activities and traceability of results. In addition, detailed documentation, clear user information and appropriate human oversight are further obligations for developers of high-risk applications.

The proposal also assumes that the vast majority of AI applications will fall under the category of minimal-risk applications, for example video games or spam filters. So-called limited-risk applications will have to meet specific transparency obligations. The goal of the proposal is a human-centric, sustainable, secure, inclusive and trustworthy AI. We will cover some interesting topics of the proposal in upcoming blog posts.
