Why
The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes.
For example, it is often not possible to determine why an AI system has made a particular decision or prediction and taken a particular action. As a result, it may be difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.
Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.
How
Once an AI system is on the market, authorities are in charge of market surveillance, deployers ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and deployers must also report serious incidents and malfunctions.
A solution for the trustworthy use of large AI models
More and more, general-purpose AI models are becoming components of AI systems. These models can perform and adapt to countless different tasks.
While general-purpose AI models can enable better and more powerful AI solutions, it is difficult to oversee all of their capabilities.
Therefore, the AI Act introduces transparency obligations for all general-purpose AI models, to enable a better understanding of these models, and additional risk management obligations for very capable and impactful models. These additional obligations include self-assessment and mitigation of systemic risks, reporting of serious incidents, conducting tests and model evaluations, as well as cybersecurity requirements.