The EU’s Artificial Intelligence Act (AI Act) covers a broad range of actors, including providers and deployers of AI systems, as well as importers and distributors. The subject of this post is the obligations imposed on public authorities.
Public authorities and entities acting on their behalf are not exempted from any of the obligations of the Act. In fact, they are subject to additional obligations.
Under the AI Act, public authorities can be providers and/or deployers. While both providers and deployers have clearly defined obligations, providers bear the larger share of obligations under the Act. In most cases, public authorities are expected to act as deployers of AI; however, deployers may be categorised as “providers” of a system if they:
- Make a substantial modification to a high-risk AI system such that it remains high-risk;
- Modify the intended purpose of an AI system, including a general-purpose AI system, that was not previously high-risk, in such a way that it becomes high-risk;
- Put their name or trademark on the high-risk AI system, e.g. a local authority using its name – whether registered as a trademark or not – on a high-risk AI system integrated into its products or services, without prejudice to contractual arrangements to the contrary.
When we consider public authorities as deployers of high-risk AI systems, there are two sets of obligations to be concerned with: a) general obligations for deployers and b) obligations that apply specifically to deployers that are public authorities.
The general obligations that public authorities acting as deployers must observe vary in complexity and resourcing. The most resource-intensive obligations for deployers are the following:
- Follow instructions for use, exercise human oversight over high-risk AI systems, and report on risks;
- Ensure input data is sufficiently representative;
- Observe approval processes for the use of biometric identification for law enforcement purposes;
- Notify individuals when a high-risk AI system is used to support decisions concerning them;
- Provide explanations of decision-making upon request;
- Disclose the use of AI systems.
Additional obligations apply specifically to deployers that are public authorities:
- Refrain from using a high-risk AI system not already registered in the EU database;
- Undertake a fundamental rights impact assessment;
- Submit specific information to the EU database of high-risk AI systems.
Checklist for deployers
On the basis of the above, before deploying an AI system, public authorities should take a number of steps.
- They need to establish whether the AI system being considered is high-risk under the AI Act[1].
- They must verify that the high-risk AI system being considered for adoption has been registered by the provider in the EU-wide database.
- They need to ensure that the instructions for use for high-risk AI systems are sufficiently clear and robust.
- They must undertake a fundamental rights impact assessment (FRIA) for high-risk AI systems[2].
- They need to assess institutional readiness for human oversight to be properly conducted.
- They must consider the future availability and effectiveness of mechanisms ensuring that individuals are sufficiently informed and are able to exercise their rights.
The decision to deploy an AI system is a complex one for any actor even in the absence of the AI Act, not least due to the importance of ensuring a system’s effectiveness and cost-efficiency, and the need to ensure compliance with existing legal frameworks such as the General Data Protection Regulation.
In light of the obligations imposed by the AI Act, public authorities should be prepared to take a very careful approach to the deployment of AI, focusing on likely risks and necessary mitigations, as well as on the institutional requirements for robustly and effectively implementing the processes required by the AI Act.
[1] AI systems will be considered high-risk if they are listed in Annex III or if they are otherwise used as a safety component of a product – or the AI itself is a product – covered by legislation in Annex I and independently subject to a conformity assessment.
[2] This FRIA must not only be reported to the relevant market surveillance authority: a summary of its findings must also be included in the EU database of high-risk AI systems.