EU AI regulation - a good start, with adaptations needed

Last month the EU Commission released a proposal for AI regulation, laying down harmonised rules on artificial intelligence for all EU member states. First reactions were mixed: on one side endorsing the intention of providing legal guidelines for AI in the EU, on the other side pointing to weak points and imprecisions that will have to be addressed in the run-up to full adoption by the member states, estimated to take around two years.
Most early commentators agree that an EU AI regulation is urgently needed and that the proposal comes in time to provide guidelines for the further development of this promising technology. The stakes are high, with AI being integrated into many industries and politicised cases such as the Facebook personal data leakage taking centre stage in public opinion.

At its core, the regulation draws on a distinction between "high-risk" and "low-risk" applications, which are treated differently. Some high-risk applications are banned outright.

High-risk and low-risk applications

The key points of the regulation are the following:

  • High-risk AI systems including those used to manipulate human behaviour, conduct social scoring or carry out indiscriminate surveillance will be banned in the EU – although exemptions for use by the state or state contractors could apply.


  • Special authorisation from authorities will be required for “remote biometric identification systems” such as facial recognition in public spaces.


  • “High-risk” AI applications will require inspections before being deployed to make sure systems are trained on unbiased data sets, and with human oversight. These include those that pose a safety threat, such as self-driving cars, and those that could impact someone’s livelihood, e.g. hiring algorithms.


  • People have to be told when they’re interacting with an AI system, unless this is “obvious from the circumstances and the context of use”.


  • A publicly accessible database will register "high-risk" AI systems and the data regarding them. Public sector systems would be exempt.


  •  A post-market monitoring plan will evaluate the continuous compliance of AI systems with the requirements of the regulation. These rules apply to EU companies and those that operate in the EU or impact EU citizens.


  • Some companies will be permitted to carry out self-assessments, but others will be subject to third-party checks.


  • A "European Artificial Intelligence Board" will be created, comprising representatives from every member state, to help the Commission define "high-risk" AI systems.


  • Military AI is exempt from the regulation.


Understandably, the regulation tries to rely as much as possible on self-assessment by AI providers to reduce the administrative overhead. Yet some commentators point to the inherent dangers of excessive self-assessment, even though certain applications will still require third-party approval. Other commentators criticise that while the regulation claims not to obstruct innovation and progress in AI, it contains no active measures to support AI innovation, especially with regard to SMEs and startups. Admittedly, it is a difficult trade-off between preventing misuse on one side and promoting a technology on the other.

Contrary to some expectations, facial recognition in the public realm – the Chinese social scoring system obviously being the elephant in the room – is not banned outright, but is subject to regulatory measures and exceptions of urgency. Another point to be scrutinised before adoption is the definition of AI in the regulation, which many observers consider too technology-driven and liable to be made obsolete by future technological change.

Whether it is a viable strategy at all to compete in the global AI market through regulatory innovation is another question raised by experts. It remains to be seen which critical points will lead to amendments and which ones will be left by the wayside.