Developing an Ontology for AI Act Fundamental Rights Impact Assessments

Workshop
ConventicLE on Artificial Intelligence Regulation (CLAIRvoyant) - co-located with International Conference on Legal Knowledge and Information Systems (JURIX)
Tytti Rintamaki*, Harshvardhan J. Pandit
Description: An ontology for the creation and management of FRIA and the use of automated tools in its various steps

Abstract: The recently published EU Artificial Intelligence Act (AI Act) is a landmark regulation governing the use of AI technologies. One of its novel requirements is the obligation to conduct a Fundamental Rights Impact Assessment (FRIA), where organisations in the role of deployers must assess the risks of their AI system regarding health, safety, and fundamental rights. Another novelty in the AI Act is the requirement to create a questionnaire and an automated tool to support organisations in their FRIA obligations. Such automated tools will require a machine-readable form of the information involved within the FRIA process, as well as machine-readable documentation to enable further compliance tools to be created. In this article, we present our novel representation of the FRIA as an ontology based on semantic web standards. Our work builds upon the existing state of the art, notably the Data Privacy Vocabulary (DPV), where similar work has been established to create tools for GDPR's Data Protection Impact Assessments (DPIA) and other obligations. Through our ontology, we enable the creation and management of FRIA and the use of automated tools in its various steps.

Introduction

The European Union’s recently published Artificial Intelligence Act (AI Act) [1] is the first regulation of its kind that governs AI systems with a particular focus on harms to health, safety, and fundamental rights. A key and novel requirement established in AI Act Article 27 is the Fundamental Rights Impact Assessment (FRIA), which requires deployers of AI systems to assess the risks and impacts of their AI systems on fundamental human rights. The FRIA is a structured process following existing procedures for impact assessments, and was developed based on the similar procedure under the General Data Protection Regulation (GDPR) [2] for Data Protection Impact Assessments (DPIA). As development of AI technologies has progressed rapidly within the last decade, and the AI Act itself is a new legal framework, conducting and using a FRIA poses significant challenges not only for legal compliance, but also for data governance, where relevant information must be identified and maintained, and for information systems, where tools must be developed that can support and enhance these processes.

The current conventional method for implementing a FRIA is to identify the obligations linked to specific clauses in the AI Act and determine the steps needed to fulfil them. For organisations, such tasks are primarily human-oriented activities that utilise word processor software (e.g. MS Word) and document formats (e.g. PDF) which contain unstructured information and are not suitable for developing automated procedures and tooling. Further, organisations commonly have several departments or organisational units with different technologies and practices, making a combined legal assessment of the organisation as a whole a challenging and complicated affair. The AI Act in Article 27-5 foresees such challenges and requires the AI Office, the EU body responsible for the implementation of the AI Act, to create a ‘questionnaire’ to support organisations in meeting the obligations for a FRIA. The AI Act, in the same article, also states that such a questionnaire should be provided with an automated tool - though it does not clarify what automation or support mean in this context.

Based on prior work regarding the use of knowledge engineering and information systems to support and enhance the DPIA process and its use for complying with the GDPR [3], we identify the need for a formal representation of the FRIA process that can aid the creation of questionnaires and automated tools and support the documentation involved by providing a consistent, structured, and interoperable approach. By developing an ontology for FRIA, this paper thus aims to bridge the gap between the high-level legal requirements of the AI Act and the technical, procedural steps necessary for effective compliance, risk management, and governance of AI systems.

We use the following research question to guide our work: “How can we represent the FRIA as an organisational process through a standards-based, machine-readable, and interoperable ontology?”. Here, standards refers to the W3C semantic web standards of RDF for representing information in a machine-readable format, and RDFS and OWL to create an ontological representation. The contribution of this article is thus an OWL ontology that enables the operational implementation of a FRIA within an organisation to meet the AI Act’s obligations.

Rationale

To develop the ontology, we follow the Linked Open Terms (LOT) ontology engineering methodology [4], which has been used successfully in several projects and is based on the NeOn ontology engineering methodology, which in turn has been used to create similar legally relevant ontologies, including for GDPR’s DPIA [3]. The first step in LOT is the development of an ontology requirements specification that outlines “why the ontology is being built and to identify and define the requirements the ontology should fulfil”. For this, LOT recommends the use of competency questions, which are a well-established practice in the ontology engineering community. We reused the template provided by LOT to generate the ontology requirements specification, which is presented in Table 1.

Table 1. Ontology Requirements Specification for the FRIA Ontology
1. Purpose
The purpose of this ontology is to model the FRIA as an information process.
2. Scope
The scope of this ontology is limited to modelling the FRIA as defined in AI Act Article 27.
3. Implementation Language
W3C semantic web standards - OWL, RDFS, SKOS
4. Intended End-Users
Organisations who create and use FRIA, i.e. deployers of AI systems, and AI Act authorities
5. Intended Uses
Use 1. To document obligations regarding FRIA.
Use 2. To document outcomes of FRIA.
Use 3. To notify authorities regarding FRIA.
6. Ontology Requirements
a. Non-Functional Requirements
NFR 1. Interoperability: The ontology must extend existing legal compliance ontologies (e.g. DPV)
NFR 2. Scalability: The ontology should be adaptable/extensible for future developments.
NFR 3. Usability: The ontology should support use by legal and non-legal stakeholders.
b. Functional Requirements: Groups of Competency Questions
CQG1. Related to AI Act obligations
CQG2. Related to Organisational Governance
CQ1. When was the FRIA conducted?
CQ2. What is the Intended Purpose of the AI system?
CQ3. What are the risks, consequences, impacts?
CQ4. What are the mitigation measures?
CQ5. What is the outcome of the FRIA process?
CQ6. What fundamental rights are affected?
CQ7. What authorities are notified for the FRIA?
CQ8. What documentation/tools were used for FRIA?

State of the Art

Within ontology engineering methodologies based on the semantic web, including LOT, the reuse of existing ontologies is heavily recommended. Therefore, following the requirements specification, we explore the state of the art to identify relevant resources and assess the extent to which they can be reused to implement our FRIA ontology. In this, we divide the existing literature into two categories: first, existing ontologies that directly and explicitly address FRIA or, if not, then the AI Act; and second, ontologies that address similar mechanisms such as the DPIA under GDPR, or impact assessments based on established procedures such as ISO standards.

Existing Ontologies for FRIA and AI Act

Given the recency of the AI Act in terms of development and finalisation, few approaches have emerged that provide ontologies modelling it. Golpayegani et al. were among the first to model the requirements of the AI Act as an ontology through the AI Risk Ontology (AIRO) [5], an OWL2 ontology based on an early draft version of the AI Act. AIRO provides a risk management ontology based on the requirements of the AI Act and ISO standards, and acts as the upper ontology for the Vocabulary of AI Risks (VAIR) [6]. Golpayegani et al. have demonstrated the use of AIRO and VAIR to model the high-risk use-cases defined in AI Act Annex III as a logical group of semantic concepts, and showed that logical reasoning or validation approaches such as SHACL can be used to determine whether a use-case is high-risk under the AI Act [6]. Golpayegani et al. have also developed the AI Card [7] as a visual approach for documenting the AI system, with AIRO and VAIR as its machine-readable representation. AIRO and VAIR provide concepts required in the AI Act such as stakeholders, AI processes, and risk management, but do not contain a modelling of the FRIA.

Hernandez et al. developed the Trustworthy AI Requirements Ontology (TAIR) [8] which models the clauses of the AI Act and of relevant ISO standards as a series of requirements and compares them to identify where such standards would be useful for compliance with the AI Act. This work addresses the requirement for using ‘Harmonised Standards’ in the AI Act, and includes concepts regarding impact assessments and risk management, albeit these are based on a draft version of the AI Act.

Though they do not provide an ontology or directly address the final AI Act requirements, the following works explore the FRIA requirements by identifying the information involved in conducting a FRIA: the FRIA template produced in the ALIGNER H2020 project [9], an analysis of the FRIA requirements in the AI Act by Mantelero [10], the rights impact assessment for algorithmic systems by Gerards et al. [11], the algorithmic impact assessment published by the Government of Canada [12], a quantified risk score for impact on fundamental rights by Inverardi et al. [13], a method for assessing the severity of impacts on fundamental rights by Malgieri and Santos [14], and an interpretation of the draft AI Act’s FRIA requirements by Janssen et al. [15].

Existing Ontologies for DPIA, GDPR, and Impact Assessments

In comparison to the AI Act, the GDPR has been in effect for 6 years, and has been addressed through several surveys on compliance approaches and developed ontologies [16]–[18]. Notable approaches include the ontology of privacy requirements by Gharib et al. [19] and the Core Ontology for Privacy Requirements (COPri) [20], which provide a framework for modelling legal processes and requirements, the Privacy Ontology for Legal Reasoning (PrOnto) [21], which provides concepts with the aim of modelling legal norms and assessing them through deontic reasoning, and the Data Privacy Vocabulary (DPV) [22], which is an output of the W3C Data Privacy Vocabularies and Controls Community Group (DPVCG) and provides an extensible collection of vocabularies for modelling legal concepts associated with data and technologies.

Of these, only the DPV has the community and infrastructure supporting the continuous development and refinement of the resource, and it is also the only resource that has modelled the GDPR and has been expanded to also model the AI Act using the same core concepts [22]. The DPV is also the only resource we know of that provides rich taxonomies to represent real-world concepts associated with the ontological concepts, e.g. for purposes and data categories. The DPV features a TECH extension which provides concepts for modelling the technology lifecycle, stakeholders, and documentation, the RISK extension for modelling risk assessments, and the AI extension which extends these to provide AI-specific concepts. In DPV, the legal concepts derived from specific regulations are provided in a separate namespace from these other ‘core’ vocabularies, and DPV provides such legal extensions for the EU GDPR and the EU AI Act. The DPV’s GDPR extension provides concepts modelling the DPIA process based on [3]. At the moment, the DPVCG is integrating AIRO [5] and VAIR [6] into the AI and AI Act extensions.

FRIA Ontology

Our objective is to develop an ontology to model the FRIA as defined in AI Act Article 27 as an information process through which stakeholders such as deployers and authorities can create automated technological tools to support their compliance activities. To ensure our ontology is interoperable and extensible, we utilise semantic web standards: RDF to represent the information, SKOS to create a vocabulary or thesauri, and RDFS and OWL2 for knowledge representation. We follow the Linked Open Terms (LOT) methodology [4] for ontology engineering, which strongly recommends reusing existing ontologies where relevant. From Section 3, we identified AIRO [5] and VAIR [6] as the most relevant ontologies for the AI Act, and the DPV [22] as a useful resource for the practical use of legal concepts. Since AIRO and VAIR are being integrated for the upcoming DPV version 2.1, we aim to support this integration by identifying the concepts not present in these existing ontologies.

DPV is provided with RDFS+SKOS semantics as the ‘default serialisation’, with a separate namespace for OWL2 semantics, so that concepts can also be used without the strict logical constraints that OWL2 requires. Following this approach, in this article we provide the concepts necessary to model the FRIA, which can then be expanded to represent real-world constraints and logical assertions using OWL2 or another method as more information becomes available.

The concept for Fundamental Rights Impact Assessments (FRIA) already exists within the main DPV as dpv:FRIA, and is extended as eu-aiact:FRIA in the AI Act extension to represent the FRIA as defined within the AI Act. This concept represents the FRIA as both an activity and as an artefact (e.g. a document). Therefore, in our FRIA ontology, we create separate explicit concepts for modelling the information and steps involved in the FRIA process by expanding this central concept.

Based on the interpretation of the FRIA in AI Act Article 27, and by using existing work interpreting the DPIA in GDPR as a series of steps that are represented through an ontology [3], we identified the following groups of concepts for our ontology:

  1. FRIA Metadata: concepts representing relevant metadata regarding the FRIA such as when it was conducted, by whom, for which AI systems, etc.;

  2. FRIA Necessity: concepts representing the step where a necessity for conducting a FRIA is identified as per AI Act Article 27-1;

  3. FRIA Inputs: concepts representing the inputs required in a FRIA as per AI Act Article 27-1, and the reuse of a DPIA as per AI Act Article 27-4;

  4. FRIA Outcomes: concepts representing the outcomes identified from conducting a FRIA as per AI Act Article 27-1;

  5. FRIA Notifications: concepts representing the step where a FRIA is to be communicated to an authority as per AI Act Article 27-3;

  6. FRIA Automated Tools: the use of a questionnaire and/or automated tools as per AI Act Article 27-5.

We use the following namespaces and prefixes in describing our proposed ontology:

  • Our proposed ontology: https://example.com/FRIA# with prefix fria:.

  • DCMI Metadata Terms: http://purl.org/dc/terms/ with prefix dct:.

  • DPV: https://w3id.org/dpv# with prefix dpv:.

  • DPV TECH extension: https://w3id.org/dpv/tech# with prefix tech:.

  • DPV RISK extension: https://w3id.org/dpv/risk# with prefix risk:.

  • DPV AI extension: https://w3id.org/dpv/ai# with prefix ai:.

  • DPV EU AI Act extension: https://w3id.org/dpv/legal/eu/aiact# with prefix eu-aiact:.
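
In a Turtle serialisation, these namespaces would be declared as follows (the fria: IRI is a placeholder, as noted above):

    @prefix fria: <https://example.com/FRIA#> .
    @prefix dct: <http://purl.org/dc/terms/> .
    @prefix dpv: <https://w3id.org/dpv#> .
    @prefix tech: <https://w3id.org/dpv/tech#> .
    @prefix risk: <https://w3id.org/dpv/risk#> .
    @prefix ai: <https://w3id.org/dpv/ai#> .
    @prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .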

Metadata for FRIA

A FRIA, as a documentation requirement under the AI Act, is expected to be regularly updated as per Article 27-2: “the deployer shall take the necessary steps to update the information”. Therefore, it is necessary to indicate temporal information and provenance associated with the FRIA. For these, we build on prior work establishing the reuse of DCMI terms for GDPR's DPIA [3] regarding temporal information (dct:created, dct:modified, dct:dateSubmitted, dct:dateAccepted, dct:temporal, dct:valid), conformance, e.g. to codes of conduct (dct:conformsTo), descriptions (dct:title, dct:description), identifier or version (dct:identifier, dct:isVersionOf), and subject or scope (dct:subject and dct:coverage).

To record provenance, we suggest reusing dct:publisher to record the organisation responsible for conducting the FRIA, dct:contributor to denote the personnel and entities involved, and dct:provenance to refer to a log of changes. Additionally, dct:creator can record the specific entity or tool used to ‘create’ the resource - which is relevant as AI Act Article 27-5 explicitly provides for the use of automated tools in the FRIA process.
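
As an illustration, a minimal sketch of such metadata attached to a hypothetical FRIA record is shown below; the ex: namespace and all instance names are illustrative assumptions, not part of the ontology:

    @prefix dct: <http://purl.org/dc/terms/> .
    @prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
    @prefix ex: <https://example.com/org#> .

    # hypothetical FRIA record maintained by a deployer
    ex:FRIA-2025-001 a eu-aiact:FRIA ;
        dct:title "FRIA for a credit-scoring AI system"@en ;
        dct:created "2025-01-15"^^xsd:date ;
        dct:modified "2025-03-02"^^xsd:date ;   # updated as per Article 27-2
        dct:publisher ex:DeployerOrg ;          # organisation conducting the FRIA
        dct:contributor ex:ComplianceOfficer ;  # personnel involved
        dct:creator ex:FRIASupportTool .        # tool used to create the record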

Concepts to represent necessity of FRIA

AI Act Article 27-1 describes the conditions under which a FRIA is necessary. An organisation therefore has the obligation to first assess whether it must conduct a FRIA. We represent this process through the concept fria:FRIANecessityAssessment, which extends the existing eu-aiact:FRIA concept and can be associated with a FRIA using the relation dpv:hasAssessment. To represent the specific outputs of this process, we create the concept fria:FRIANecessityStatus, which is associated with the assessment using the relation dpv:hasStatus. To represent specific outcomes, we create the instances fria:FRIARequired and fria:FRIANotRequired.
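
A minimal sketch of this step, reusing the hypothetical ex: instances introduced earlier, could look as follows:

    @prefix dpv: <https://w3id.org/dpv#> .
    @prefix fria: <https://example.com/FRIA#> .
    @prefix ex: <https://example.com/org#> .

    # the necessity assessment is one step within the overall FRIA
    ex:FRIA-2025-001 dpv:hasAssessment ex:NecessityCheck .

    ex:NecessityCheck a fria:FRIANecessityAssessment ;
        # outcome of the check: a FRIA is required for this deployment
        dpv:hasStatus fria:FRIARequired .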

Concepts to represent inputs of FRIA

We represent the process of carrying out the FRIA as the concept fria:FRIAProcedure which extends the existing eu-aiact:FRIA concept and can be associated with a FRIA using the relation dpv:hasAssessment. AI Act Article 27-1 describes the information which must be included when conducting a FRIA, which we interpret as follows:

  1. Article 27-1a description of the deployer’s processes: represented by extending dpv:Process as the concept fria:AIProcess, and associated using the relation dpv:hasProcess. This follows the DPV’s modelling of similar processes for GDPR where a dpv:Process provides a way to combine other concepts such as purposes, data, technologies, and entities in specific roles.

  2. Article 27-1a intended purpose: represented by the existing eu-aiact:IntendedPurpose whose parent is dpv:Purpose, and associated using the relation dpv:hasPurpose. Note that the AI Act’s intended purpose is a broad concept that goes beyond DPV’s modelling of purpose as referring to the objective or goal, whereas intended purpose includes details such as the AI technique and data involved. Therefore, we suggest also modelling eu-aiact:IntendedPurpose as a subclass of dpv:Process.

  3. Article 27-1b period of time: represented using dpv:Duration and associated using the relation dpv:hasDuration. DPV provides enumerated concepts for different durations such as endless, fixed, temporal, until event, and until time.

  4. Article 27-1b frequency: represented using dpv:Frequency and associated using the relation dpv:hasFrequency. DPV provides enumerated concepts for different frequencies such as continuous, often, singular, and sporadic.

  5. Article 27-1b intended to be used: represented as fria:IntendedUse, where we interpret this concept to be different from eu-aiact:IntendedPurpose as the ‘purpose’ is a declaration of why the AI system is needed or is to be used, and ‘use’ is the contextual application of that purpose in specific scenarios or ‘deployments’. The same AI system with one intended purpose can thus have different intended uses in separate scenarios, e.g. through variance in input and output data (including decisions produced), human subjects involved, and hardware and software being used. DPV already contains the concept tech:IntendedUse, which should be the parent of this concept.

  6. Article 27-1c categories of natural persons and groups: which can be represented using the existing concept dpv:DataSubject provided in DPV with a taxonomy of categories such as tourists, adults, and minors. However, data subject in DPV is currently defined as per the GDPR in terms of natural persons whose data is being processed, which is not compatible with our intended concept here. AIRO has AISubject, defined as the natural persons subjected to the use of AI, which is more in line with what we want to model. Therefore, to avoid duplicating these categories under AISubject, and to avoid backwards incompatible changes to DPV as it is being actively used, we propose either changing the definition of data subject to include ‘data and AI subjects’ - following a similar change made in DPV 2.0 where the term dpv:DataSubject remains the same but its use now encompasses any data or technology including AI - or creating a new concept dpv:HumanSubject as the parent of dpv:DataSubject and airo:AISubject and moving the taxonomy of data and AI subjects under it.

  7. Article 27-1c likely to be affected by its use in the specific context: where likely is represented by dpv:Likelihood and associated using dpv:hasLikelihood. Affected is interpreted as referring to a dpv:Consequence taking place with its subcategory dpv:Impact - which are associated using dpv:hasConsequence and dpv:hasImpact respectively. The entity or thing being affected is associated using dpv:hasConsequenceOn and dpv:hasImpactOn respectively. In DPV, consequence is the general term for referring to events such as failure of equipment, and impact is the preferred term for referring to entities being (significantly) affected such as through physical harm or loss of resources.

  8. Article 27-1d risks of harm: represented by using dpv:Risk for the risk, with the concept risk:Harm to refer to harm. The RISK extension provides further concepts for modelling different categories of harms, e.g. risk:PhysicalHarm. It also models these concepts so that they can be used in different roles across use-cases, e.g. risk:Harm as a risk source, risk, consequence, or impact, through the use of the relevant relations.

  9. Article 27-1e human oversight measures: represented using dpv:HumanInvolvementForOversight along with other dpv:HumanInvolvement concepts. DPV also provides additional taxonomies to represent whether the entity can perform some activity (dpv:EntityPermissiveInvolvement), such as correcting outputs, or cannot perform an activity (dpv:EntityNonPermissiveInvolvement), such as not being able to opt out.

  10. Article 27-1e instructions for use: this is represented using eu-aiact:InstructionsForUse which extends from tech:Documentation.

  11. Article 27-1f measures to be taken in the case of the materialisation of those risks: represented using dpv:RiskMitigationMeasure, where the specific measures mentioned in this clause, including arrangements for internal governance and a complaint mechanism, are represented by dpv:GovernanceProcedures and its more specific forms dpv:IncidentManagementProcedures and dpv:IncidentReportingCommunication.

In addition to these, Article 27-2 mentions the FRIA can “rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by provider”, which we interpret as the case where previous FRIA are used as inputs. Therefore, we suggest reusing dpv:hasData to indicate when existing FRIA act as inputs to the current FRIA process.

Similarly, Article 27-4 refers to the FRIA ‘complementing a DPIA’ where the DPIA covers some obligations related to the FRIA. As before, we also interpret this case as providing for a DPIA to be reused within the FRIA process, which can be expressed by using the relevant DPV concepts to represent DPIA and associating it with a FRIA through the relation dpv:hasData.
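
To illustrate, the sketch below records a FRIA procedure with a subset of the inputs listed above and an existing DPIA reused as input. The ex: instances and their names are hypothetical, and relations not explicitly named above (e.g. dpv:hasDataSubject, dpv:isMitigatedByMeasure) are our assumed choices from DPV:

    @prefix dpv: <https://w3id.org/dpv#> .
    @prefix risk: <https://w3id.org/dpv/risk#> .
    @prefix fria: <https://example.com/FRIA#> .
    @prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
    @prefix ex: <https://example.com/org#> .

    ex:FRIA-2025-001 dpv:hasAssessment ex:Procedure .

    ex:Procedure a fria:FRIAProcedure ;
        dpv:hasProcess ex:CreditScoringProcess ;  # Art. 27-1a deployer's process
        dpv:hasData ex:DPIA-2024-007 .            # Art. 27-4 existing DPIA as input

    ex:CreditScoringProcess a fria:AIProcess ;
        dpv:hasPurpose ex:CreditWorthinessEvaluation ;  # Art. 27-1a intended purpose
        dpv:hasDuration ex:SixMonthDeployment ;         # Art. 27-1b period of time
        dpv:hasFrequency ex:DailyUse ;                  # Art. 27-1b frequency
        dpv:hasDataSubject ex:LoanApplicants ;          # Art. 27-1c affected persons
        dpv:hasRisk ex:UnfairDenialOfCredit .           # Art. 27-1d risk of harm

    ex:CreditWorthinessEvaluation a eu-aiact:IntendedPurpose .
    ex:SixMonthDeployment a dpv:Duration .
    ex:DailyUse a dpv:Frequency .
    ex:LoanApplicants a dpv:DataSubject .
    ex:UnfairDenialOfCredit a risk:Harm ;
        dpv:hasImpactOn ex:LoanApplicants ;
        dpv:isMitigatedByMeasure ex:ComplaintMechanism .  # Art. 27-1f measure
    ex:ComplaintMechanism a dpv:RiskMitigationMeasure .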

In the above, we have modelled our concepts based on the necessity to document the information as required within the AI Act. An alternative method to document these obligations is through the use of the PROV-O ontology [23] where each step is an activity with specific inputs and outputs, and where the provenance of activities and input/output artefacts is to be maintained as logs.
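
A minimal sketch of this alternative, provenance-based documentation is shown below, again with hypothetical ex: instances:

    @prefix prov: <http://www.w3.org/ns/prov#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
    @prefix ex: <https://example.com/org#> .

    # the FRIA procedure as an activity with recorded inputs and outputs
    ex:Procedure a prov:Activity ;
        prov:used ex:DPIA-2024-007 ;              # input artefact
        prov:generated ex:FRIAReport-2025-001 ;   # output artefact
        prov:wasAssociatedWith ex:ComplianceOfficer ;
        prov:startedAtTime "2025-01-15T09:00:00Z"^^xsd:dateTime .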

Concepts to represent outcomes of FRIA

AI Act Article 27-1 states the FRIA’s objective is to produce an “assessment of the impact on fundamental rights that the use of (AI) system may produce”, which in the simplest interpretation implies a boolean categorisation as to whether there is or isn’t an impact on fundamental rights. We therefore represent the process of determining the outcome of a FRIA as the concept fria:FRIAOutcome, which extends the existing eu-aiact:FRIA concept and can be associated with a FRIA using the relation dpv:hasAssessment. To represent the outcomes, we model them as statuses through the concept fria:FRIAOutcomeStatus, which can be associated by using the relation dpv:hasStatus.

We also create instances to represent the specific outcomes possible as per Article 27:

  1. fria:FRIAOutcomeUnacceptableRisk: FRIA outcome status indicating that there is an unacceptable risk to fundamental rights, implying the AI system should not be used.

  2. fria:FRIAOutcomeHighResidualRisk: FRIA outcome status indicating high residual risks to fundamental rights which are not acceptable for continuation.

  3. fria:FRIAOutcomeRisksAcceptable: FRIA outcome status indicating residual risks to fundamental rights remain and are acceptable for continuation.

  4. fria:FRIAOutcomeRisksMitigated: FRIA outcome status indicating (all) risks to fundamental rights have been mitigated and it is safe for continuation.

As part of the FRIA procedure and the outcome process, it is essential to identify the relevant fundamental rights which might be impacted. Therefore, we reuse the DPV’s extension modelling the EU Charter of Fundamental Rights and Freedoms where each article within the charter is represented as an instance of dpv:Right. To represent which right is impacted, we reuse the concept risk:ImpactToRights along with the relevant instance of fundamental right, and associate it by using the relation dpv:hasImpact.
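
As a sketch, the outcome step could be recorded as follows, where the specific impacted right would be an instance from DPV's fundamental rights extension (not shown here) and the ex: instances remain hypothetical:

    @prefix dpv: <https://w3id.org/dpv#> .
    @prefix risk: <https://w3id.org/dpv/risk#> .
    @prefix fria: <https://example.com/FRIA#> .
    @prefix ex: <https://example.com/org#> .

    ex:FRIA-2025-001 dpv:hasAssessment ex:Outcome .

    ex:Outcome a fria:FRIAOutcome ;
        # residual risks remain but are acceptable for continuation
        dpv:hasStatus fria:FRIAOutcomeRisksAcceptable ;
        dpv:hasImpact ex:ImpactOnNonDiscrimination .

    # impact on a fundamental right; the relevant right instance from the
    # DPV rights extension would be associated with this impact
    ex:ImpactOnNonDiscrimination a risk:ImpactToRights .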

To enable the granular investigation of impact on rights as required in the FRIA process, we identify the need to create impact concepts for each right in a manner that allows directly stating that the right has been impacted, e.g. Impact on Right of Non-Discrimination. We also argue that it would be useful to further represent such impacts at an even more granular level by creating concepts representing impacts on specific requirements within the right, e.g. to state there has been an impact on this right due to discrimination based on a specific category such as sex, race, gender, etc. as mentioned in Article 21 of the Charter. We propose such concepts be added to the DPV’s extension modelling fundamental rights so that they can be readily used with the rest of DPV’s risk and impact assessment concepts.

Concepts to represent notification of FRIA

AI Act Article 27-3 states that upon completion of a FRIA, a deployer “shall notify the market surveillance authority of its results”. We represent this step as the concept fria:FRIANotificationAssessment, which extends the existing eu-aiact:FRIA concept and can be associated with a FRIA using the relation dpv:hasAssessment. This step requires an assessment of whether the notification is required to be sent, or whether there is an exception, as Article 27-3 also states “In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify”.

To represent whether a notification is needed and has been communicated, or is exempt, we create the concept fria:FRIANotificationStatus, which extends dpv:Status, along with its instances:

  1. fria:FRIANotificationNeeded for when a notification has been identified as being needed, and requires further assessment for whether it is required to be sent or is exempt;

  2. fria:FRIANotificationNotSent for when a FRIA notification is identified as being required but has not been sent yet;

  3. fria:FRIANotificationSent for when the notification is sent; and

  4. fria:FRIANotificationExempt for when the obligation to notify is exempt as per Article 46-1. As each market surveillance authority will have the ability to create exemptions, we also note the possibility to expand this concept for different jurisdictions based on the DPV’s modular legal framework.

DPV contains several notices for obligations under GDPR, such as the privacy notice, data breach reporting, and rights exercise notices, and also provides guidance on modelling the information and metadata involved. We therefore reuse this method of using ‘notices’ to indicate the communication of information between entities, and represent the information in a FRIA notification through the concept fria:FRIANotice, which extends dpv:Notice and can be associated using dpv:hasNotice.
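
A minimal sketch of the notification step and its notice, using hypothetical ex: instances, could be:

    @prefix dpv: <https://w3id.org/dpv#> .
    @prefix fria: <https://example.com/FRIA#> .
    @prefix ex: <https://example.com/org#> .

    ex:FRIA-2025-001 dpv:hasAssessment ex:Notification .

    ex:Notification a fria:FRIANotificationAssessment ;
        # results have been communicated to the market surveillance authority
        dpv:hasStatus fria:FRIANotificationSent ;
        dpv:hasNotice ex:NoticeToAuthority .

    # the notice carrying the FRIA results; its contents can reuse the
    # completed questionnaire described in the next section
    ex:NoticeToAuthority a fria:FRIANotice .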

FRIA Questionnaire and Automated Tool

AI Act Article 27-5 provides for a questionnaire based on a template that deployers can use to fulfil their FRIA obligations, such as in Article 27-3 for communicating the FRIA to market surveillance authorities by submitting the filled out questionnaire. The AI Act Article 27-5 further states that the questionnaire, “including through an automated tool”, is intended “to facilitate deployers in complying with their obligations under this Article” - which means that the FRIA questionnaire and documentation can in some part be based on automated tools.

To represent these processes, we find two interpretations based on the word ‘template’ having two meanings. The first interpretation consists of three artefacts: a template questionnaire used to create a questionnaire, which is then used by deployers to create a filled out questionnaire. The second interpretation consists of two artefacts: a questionnaire is used by the deployer to create a filled out questionnaire. We follow the second interpretation, and represent it through the concept fria:FRIAQuestionnaire for the questionnaire (based on the template) that is given to the deployer to be filled out, which we extend as fria:FRIACompletedQuestionnaire to represent the filled out questionnaire which the deployer can send in their notice to the market surveillance authority.

To represent the involvement of tools as per Article 27-5, we create the concept fria:FRIATool which extends dpv:Technology. This enables the reuse of DPV TECH extension concepts to model the tool’s modality (e.g. service, product), stakeholders (e.g. developer, user), and other relevant details using existing concepts. The tool can be represented as being used in the relevant steps of the FRIA by associating it using the dpv:isImplementedUsingTechnology relation.
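
For illustration, a hypothetical tool could be described as below; the use of tech:Service for the modality is an assumption based on our reading of the TECH extension, and the ex: instances are illustrative:

    @prefix dpv: <https://w3id.org/dpv#> .
    @prefix tech: <https://w3id.org/dpv/tech#> .
    @prefix fria: <https://example.com/FRIA#> .
    @prefix ex: <https://example.com/org#> .

    # hypothetical automated tool, modelled as a technology provided as a service
    ex:FRIASupportTool a fria:FRIATool, tech:Service .

    # the tool is used to carry out the necessity assessment step
    ex:NecessityCheck dpv:isImplementedUsingTechnology ex:FRIASupportTool .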

We do not think it is necessary to explicitly define the tool as being automated, given the purpose of the ontology is to create information systems which by definition use automation in some form. That being said, automated here can be interpreted to have different meanings within the context of “to facilitate deployers in complying with their obligations under this Article in a simplified manner”. The tool can be used to determine necessity, to assist in collecting and organising input information, to determine - manually or through reasoning - whether there is an impact on rights, and to support the notification to the authority (fria:FRIANotificationAssessment).

Conclusion & Future Work

Our work represents the first ontology for modelling the Fundamental Rights Impact Assessment (FRIA) under the AI Act in terms of the information involved as well as the procedure for conducting the FRIA itself. As we utilised standards that are well established in the legal domain - namely the semantic web standards - our ontology enables the use of machine-readable information that is interoperable, extensible, and well-structured by default. Through this, we enable the creation of automated tools to assist with the FRIA process that can use our ontology to structure and use information in a consistent manner across use-cases. By being a semantic web ontology, our approach permits the use of existing standards such as SPARQL for information retrieval and SHACL for validation. It also facilitates the use of logical semantic reasoning to infer additional information, such as specific risks and impacts, and to ensure the completeness and correctness of information.

We developed our ontology by utilising and extending the Data Privacy Vocabulary (DPV), which is a state of the art resource that is continuously developed, and provides a modular approach to representing legal concepts across jurisdictions. This enables the development of practical tools that not only address the FRIA, but are also compatible with the existing and potential uses of DPV in legal processes associated with compliance, documentation, and communication. Our future work therefore consists of integrating our proposed concepts in DPV by participating within the W3C Data Privacy Vocabularies and Controls Community Group (DPVCG), developing a prototype FRIA questionnaire and automated tool that uses and maintains information based on our ontology and DPV, and performing experiments to understand its utility for different stakeholders such as organisations, auditors, market surveillance authorities, and the AI Office.

This work was funded by the ADAPT SFI Centre for Digital Media Technology, which is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant#13/RC/2106_P2. Harshvardhan J. Pandit is the current chair of the W3C Data Privacy Vocabularies and Controls Community Group (DPVCG) and the editor/maintainer of Data Privacy Vocabulary (DPV).

References

[1]
“Regulation 2024/1689 Of The European Parliament And Of The Council of 13 June 2024 laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act).” Jul-2024.
[2]
“Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation),” Official Journal of the European Union, vol. L119, May 2016.
[3]
H. J. Pandit, “A Semantic Specification for Data Protection Impact Assessments (DPIA),” Towards a Knowledge-Aware AI, pp. 36–50, 2022, doi: 10.3233/SSW220007.
[4]
M. Poveda-Villalón, A. Fernández-Izquierdo, M. Fernández-López, and R. García-Castro, “LOT: An industrial oriented ontology engineering framework,” Engineering Applications of Artificial Intelligence, vol. 111, p. 104755, 2022.
[5]
D. Golpayegani, H. J. Pandit, and D. Lewis, “AIRO: An ontology for representing AI risks based on the proposed EU AI Act and ISO risk management standards,” in Towards a Knowledge-Aware AI, IOS Press, 2022, pp. 51–65.
[6]
D. Golpayegani, H. J. Pandit, and D. Lewis, “To be high-risk, or not to be—semantic specifications and implications of the AI act’s high-risk AI applications and harmonised standards,” in Proceedings of the 2023 ACM conference on fairness, accountability, and transparency, 2023, pp. 905–915.
[7]
D. Golpayegani et al., “AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act,” in Privacy Technologies and Policy, 2024, vol. 14831, pp. 48–72, doi: 10.1007/978-3-031-68024-3_3.
[8]
J. Hernandez, D. Golpayegani, and D. Lewis, “An open knowledge graph-based approach for mapping concepts and requirements between the EU AI act and international standards,” arXiv preprint arXiv:2408.11925, 2024.
[9]
“Fundamental Rights Impact Assessment (FRIA),” ALIGNER. https://aligner-h3020.eu/fundamental-rights-impact-assessment-fria/, 2021.
[10]
A. Mantelero, “The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template,” Computer Law & Security Review, vol. 54, p. 106020, Sep. 2024, doi: 10.1016/j.clsr.2024.106020.
[11]
J. Gerards, M. T. Schaefer, A. Vankan, and I. Muis, “Fundamental Rights and Algorithms Impact Assessment.” 2022.
[12]
Government of Canada, “Algorithmic Impact Assessment tool.” May-2024.
[13]
N. Inverardi et al., “Fundamental Rights and AI Impact Assessment: A proposal for a new quantitative approach,” in 2024 International Joint Conference on Neural Networks (IJCNN), 2024, pp. 1–8, doi: 10.1109/IJCNN60899.2024.10650347.
[14]
G. Malgieri and C. Santos, “Assessing the (Severity of) Impacts on Fundamental Rights.” Social Science Research Network, Rochester, NY, Jun-2024.
[15]
H. Janssen, M. Seng Ah Lee, and J. Singh, “Practical fundamental rights impact assessments,” International Journal of Law and Information Technology, vol. 30, no. 2, pp. 200–232, Jun. 2022, doi: 10.1093/ijlit/eaac018.
[16]
B. Esteves and V. Rodríguez-Doncel, “Analysis of ontologies and policy languages to represent information flows in GDPR,” Semantic Web, vol. 15, no. 3, pp. 709–743, 2024.
[17]
A. Kurteva, T. R. Chhetri, H. J. Pandit, and A. Fensel, “Consent through the lens of semantics: State of the art survey and best practices,” Semantic Web, vol. 15, no. 3, pp. 647–673, 2024.
[18]
N. A. Zaguir, G. H. Magalhães, and M. M. Spinola, “Challenges and enablers for GDPR compliance: Systematic literature review and future research directions,” IEEE Access, 2024.
[19]
M. Gharib, P. Giorgini, and J. Mylopoulos, “An ontology for privacy requirements via a systematic literature review,” Journal on Data Semantics, vol. 9, pp. 123–149, 2020.
[20]
M. Gharib, P. Giorgini, and J. Mylopoulos, “COPri v. 2—a core ontology for privacy requirements,” Data & Knowledge Engineering, vol. 133, p. 101888, 2021.
[21]
M. Palmirani, M. Martoni, A. Rossi, C. Bartolini, and L. Robaldo, “PrOnto: Privacy ontology for legal reasoning,” in Electronic Government and the Information Systems Perspective: 7th International Conference, EGOVIS 2018, Regensburg, Germany, September 3–5, 2018, Proceedings 7, 2018, pp. 139–152.
[22]
H. J. Pandit, B. Esteves, G. P. Krog, P. Ryan, D. Golpayegani, and J. Flake, “Data Privacy Vocabulary (DPV) – Version 2.” arXiv, Apr-2024 [Online]. Available: https://arxiv.org/abs/2404.13426. [Accessed: 01-May-2024]
[23]
T. Lebo et al., “PROV-O: The PROV Ontology,” W3C Recommendation, vol. 30, 2013.