Unacceptable risk
All AI systems considered a clear threat to the safety, livelihoods and rights of people are banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.
High risk
AI systems identified as high-risk include AI technology used in the following areas, pictured in the illustrative sketch after this list:
- critical infrastructures (e.g. transport) that could put the life and health of citizens at risk
- educational or vocational training that may determine access to education and the professional course of someone's life (e.g. scoring of exams)
- safety components of products (e.g. AI application in robot-assisted surgery)
- employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures)
- essential private and public services (e.g. credit scoring that denies citizens the opportunity to obtain a loan)
- law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
- migration, asylum and border control management (e.g. automated examination of visa applications)
- administration of justice and democratic processes (e.g. AI solutions to search for court rulings)
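As an illustration only, these areas can be pictured as a simple lookup table, sketched below in Python. The key names are paraphrased from the list above and are hypothetical; a real legal classification cannot be reduced to a dictionary lookup.

```python
# Illustrative only: a toy lookup of the high-risk areas listed above.
# Key names are paraphrased from this page, not terms defined in the Act.

HIGH_RISK_AREAS: dict[str, str] = {
    "critical_infrastructure": "e.g. transport affecting life and health",
    "education_vocational_training": "e.g. scoring of exams",
    "product_safety_components": "e.g. AI in robot-assisted surgery",
    "employment_worker_management": "e.g. CV-sorting software for recruitment",
    "essential_services": "e.g. credit scoring for loan decisions",
    "law_enforcement": "e.g. evaluating the reliability of evidence",
    "migration_asylum_border_control": "e.g. automated examination of visa applications",
    "justice_democratic_processes": "e.g. AI tools to search for court rulings",
}

def is_high_risk_area(area: str) -> bool:
    """Return True if the (hypothetical) area key appears in the high-risk list."""
    return area in HIGH_RISK_AREAS

print(is_high_risk_area("employment_worker_management"))  # True
print(is_high_risk_area("video_games"))                   # False: minimal risk
```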
High-risk AI systems are subject to strict obligations before they can be put on the market:
- adequate risk assessment and mitigation systems
- high quality of the datasets feeding the system to minimise risks and discriminatory outcomes
- logging of activity to ensure traceability of results (see the sketch after this list)
- detailed documentation providing all the information on the system and its purpose that authorities need to assess its compliance
- clear and adequate information to the deployer
- appropriate human oversight measures to minimise risk
- high level of robustness, security and accuracy
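The logging obligation above can be made concrete with a minimal sketch: one way a provider might record each prediction so that results stay traceable. All function and field names below are hypothetical; the Act does not prescribe a particular format.

```python
# A minimal, hypothetical traceability log: what ran, on what, with what result.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_prediction(model_version: str, input_summary: str, output: str) -> None:
    """Append one traceability record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Summarised input rather than raw data, to avoid logging personal data.
        "input_summary": input_summary,
        "output": output,
    }
    audit_log.info(json.dumps(record))

# Example: record a single credit-scoring decision for later review.
log_prediction("credit-model-1.3", "applicant feature hash ab12f9", "score=0.42, declined")
```

Structured records of this kind are what would let an authority reconstruct, after the fact, which model version produced which result.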
All remote biometric identification systems are considered high-risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.
Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.
Such uses are subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.
Limited risk
Limited risk refers to the risks associated with a lack of transparency in AI usage. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back. Providers must also ensure that AI-generated content is identifiable. In addition, AI-generated text published with the purpose of informing the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes.
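As a minimal sketch of these transparency obligations, the snippet below discloses at the start of a chat session that the user is interacting with a machine and tags generated text as AI-generated. The disclosure wording and metadata keys are hypothetical, not prescribed by the Act.

```python
# Hypothetical disclosure and labelling for a chatbot; formats are illustrative.
from dataclasses import dataclass, field

DISCLOSURE = "You are interacting with an AI system, not a human."

@dataclass
class GeneratedContent:
    text: str
    metadata: dict = field(default_factory=lambda: {"ai_generated": True})

def start_chat_session() -> str:
    """Return the machine-interaction disclosure shown before the first reply."""
    return DISCLOSURE

def generate_reply(prompt: str) -> GeneratedContent:
    """Wrap a (stubbed) model reply with an 'ai_generated' label."""
    reply_text = f"[model reply to: {prompt}]"  # stand-in for a real model call
    return GeneratedContent(text=reply_text)

print(start_chat_session())
reply = generate_reply("What does the AI Act require of chatbots?")
print(reply.text, reply.metadata)
```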
Minimal or no risk
The AI Act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.