Under the European Artificial Intelligence Act (AI Act), deployers of high-risk AI systems are required to carry out a fundamental rights impact assessment. We explain what this entails and how such an assessment is intended to promote responsible AI.
What is a fundamental rights impact assessment?
The fundamental rights impact assessment is regulated in Art. 27 of the AI Act. It takes a preventive and risk-based approach. The objective is to identify and assess, prior to deployment, any potential adverse impacts on the fundamental rights of natural persons arising from the use of high-risk AI systems.
On this basis, appropriate technical and organisational measures, such as excluding particularly sensitive data or ensuring human oversight, must be implemented to reduce the risks to an acceptable level.
Who is required to conduct a fundamental rights impact assessment?
The obligation to carry out such an impact assessment pursuant to Art. 27 AI Act is addressed specifically to the deployers of AI systems. This is because potential impacts on the fundamental rights of natural persons can only be identified in the specific context of deployment and can therefore be fully assessed only at that stage.
The material scope of this obligation generally covers high-risk AI systems as referred to in Art. 6(2) AI Act. The only exceptions are those systems listed in Annex III, point 2, namely high-risk AI systems used in the field of critical infrastructure. The legislator apparently assumes that, in this case, the primary risk does not concern potential interferences with fundamental rights.
A further restriction is that the obligation is limited to specific categories of deployers. These include, first and foremost, public sector entities such as public authorities and bodies governed by public law. In addition, private entities that provide public services, such as healthcare providers, are subject to the obligation, as are deployers in the insurance and financial sectors (as referred to in Annex III, point 5(b) and (c)).
The obligation to carry out a fundamental rights impact assessment therefore arises only where three criteria are cumulatively met (a simple illustrative check follows the list):
- A high-risk AI system pursuant to Art. 6(2) AI Act is deployed;
- The system does not fall under the exclusion in Annex III, point 2 (AI used in critical infrastructure); and
- The deployer of the high-risk system falls within one of the categories listed in Art. 27(1) AI Act.
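To illustrate how the three criteria interact, the following minimal sketch expresses the cumulative test as a simple check. It is purely our own illustration: the class and field names are assumptions and do not stem from the AI Act or any official classification tool.

```python
from dataclasses import dataclass

# Illustrative sketch only: the names and structure are our own assumptions,
# not part of the AI Act or any official tooling.

@dataclass
class Deployment:
    is_high_risk_art_6_2: bool          # high-risk AI system pursuant to Art. 6(2) AI Act
    is_critical_infrastructure: bool    # falls under Annex III, point 2 (excluded)
    deployer_covered_by_art_27_1: bool  # public body, provider of public services,
                                        # or insurance/financial deployer (Annex III, point 5(b)/(c))

def fria_required(d: Deployment) -> bool:
    """All three criteria must be met cumulatively for the Art. 27 obligation to apply."""
    return (
        d.is_high_risk_art_6_2
        and not d.is_critical_infrastructure
        and d.deployer_covered_by_art_27_1
    )

# Example: a public authority deploying a high-risk system outside critical infrastructure
print(fria_required(Deployment(True, False, True)))  # True -> assessment required
```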
Procedure of the fundamental rights impact assessment
Timing of the assessment
The impact assessment must, in principle, be carried out before the high-risk AI system is put into operation by the deployer.
However, Art. 27 AI Act allows deployers, in similar cases, to rely on previously conducted fundamental rights impact assessments or on existing impact assessments carried out by the provider. The assessment must nevertheless be updated if any element that needs to be included in it has changed or is no longer up to date.
Conducting the assessment
The mandatory assessment points for the impact assessment are set out in Art. 27(1) AI Act; an illustrative documentation sketch follows the list below.
- First, a description of the deployer's processes must be provided, explaining how the system is used in accordance with its intended purpose, i.e. how and for what purpose the AI system is deployed in practice.
- The period of use, including the frequency of use of the high-risk system, must also be specified. This information can, for example, inform the assessment of how intensive any interference with fundamental rights may be.
- In addition, the categories of persons and groups of persons who could be particularly affected by the use of the AI system must be listed.
- The specific risks of harm likely to have an impact on those categories of persons must also be addressed. In doing so, the information provided by the provider pursuant to Art. 13 AI Act must be taken into account.
- Furthermore, a description of the implementation of human oversight measures in accordance with the instructions for use must be provided. This requirement reflects the AI Act's underlying principle that AI systems must always be subject to qualified human oversight.
- Finally, deployers must specify the measures to be taken in the event of such risks materialising, including the arrangements for internal governance and complaint mechanisms.
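For deployers documenting the assessment internally, the points above can be captured in a structured record. The sketch below is one possible way to do so, assuming an internal compliance workflow; the field names are our own shorthand for the elements of Art. 27(1) AI Act, not an official form or template.

```python
from dataclasses import dataclass

# Illustrative internal documentation structure; the field names are our own
# shorthand for the assessment points of Art. 27(1) AI Act, not an official template.

@dataclass
class FundamentalRightsImpactAssessment:
    process_description: str                   # how the system is used in line with its intended purpose
    period_and_frequency_of_use: str           # period and frequency of use of the high-risk system
    affected_categories_of_persons: list[str]  # persons and groups likely to be particularly affected
    specific_risks_of_harm: list[str]          # risks of harm, taking Art. 13 provider information into account
    human_oversight_measures: str              # implementation of human oversight per the instructions for use
    mitigation_and_governance: str             # measures if risks materialise, incl. governance and complaint mechanisms

    def is_complete(self) -> bool:
        """Simple completeness check before notifying the market surveillance authority."""
        return all([
            self.process_description,
            self.period_and_frequency_of_use,
            self.affected_categories_of_persons,
            self.specific_risks_of_harm,
            self.human_oversight_measures,
            self.mitigation_and_governance,
        ])
```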
After the impact assessment has been carried out, deployers must notify the competent market surveillance authority of the results, unless an exemption pursuant to Art. 46 AI Act applies in exceptional cases. A standardised form is to be used for this notification. The European AI Office, an entity of the European Commission whose responsibilities include the supervision of AI systems within the EU and, in particular, the monitoring and enforcement of the requirements laid down in the AI Act, is developing a template questionnaire for this purpose to support deployers in their obligations under Art. 27 AI Act and to enable them to carry out the impact assessment efficiently.
Relationship to data protection impact assessment
The relationship between the fundamental rights impact assessment and the data protection impact assessment (DPIA) under Art. 35 GDPR is also relevant, as the operation of high-risk AI systems regularly involves the processing of personal data. Such processing is therefore additionally subject to the requirements of the GDPR.
The relationship between the two impact assessments is governed by Art. 27(4) AI Act. According to this provision, the requirements of Art. 27 AI Act cannot be fully met by conducting a DPIA, but a DPIA may partially cover certain obligations under this Article. In such cases, the fundamental rights impact assessment must be supplemented by the additional aspects specific to fundamental rights.
A DPIA therefore remains necessary when personal data are processed, and a fundamental rights impact assessment is additionally mandatory for high-risk AI systems. Both instruments may overlap, but they do not replace each other.
Conclusion
Essentially, the fundamental rights impact assessment makes it clear that responsible use of AI means, above all, consistently ensuring that fundamental rights are upheld.
Although Art. 27 AI Act will only apply from August 2026, deployers of high-risk AI systems should already start taking action to adapt their internal processes, responsibilities, and documentation structures. Those who prepare organisationally at an early stage will avoid implementation pressure and lay the groundwork for a smooth transition to the new legal requirements.