This tool provides a potential risk level for the use of AI systems. Select one or more concepts and then click the Check Risk Level button. You can test various combinations, try the pre-selected examples below, or select random values. Definitions of terms are provided at the end of the document.
Inputs are described in terms of the following concepts. For more details on the creation of the tool, its methodology, and discussion, refer to the peer-reviewed publication.
Domain: the domain, sector, or area within which the AI system is or will be deployed; e.g. Health, Education
Purpose: the purpose or end-goal which the AI system is or will be used to achieve; e.g. Patient Diagnosis, Exam Assessment
Capability: the capability or application which the AI system is or will be used to provide; e.g. Facial Recognition, Sentiment Analysis
User: the user or operator who is or will be using the AI system; e.g. Doctor, Teacher
Subject: the individual or group towards which the AI system is or will be used; e.g. Patients, Students
The output is a potential risk level indicating whether the selected combination satisfies the AI Act Annex III:
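The check described above can be sketched as a lookup of the selected concept combination against a rule table. This is only an illustrative sketch, not the tool's actual logic: the rule entries and function names below are assumptions loosely modelled on Annex III areas.

```python
# Illustrative sketch (assumption, not the tool's real implementation):
# a combination of concepts is matched against a hand-made rule table
# approximating AI Act Annex III high-risk areas.

HIGH_RISK_RULES = [
    # (domain, purpose) pairs; contents here are illustrative assumptions
    ("Health", "Patient Diagnosis"),
    ("Education", "Exam Assessment"),
]

def check_risk_level(domain, purpose, capability=None, user=None, subject=None):
    """Return 'High Risk' if the (domain, purpose) pair matches a rule,
    otherwise 'Risk Undetermined'. Capability, user, and subject are
    accepted but unused in this simplified sketch."""
    if (domain, purpose) in HIGH_RISK_RULES:
        return "High Risk"
    return "Risk Undetermined"

print(check_risk_level("Education", "Exam Assessment"))  # High Risk
```

A real implementation would consult the full Annex III listing and may weigh all five concepts rather than only domain and purpose.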
This implementation is by Harshvardhan J. Pandit, based on the work below. Source code is available via a GitHub repo under a permissive license.
Cite this work as: Delaram Golpayegani, Harshvardhan J. Pandit, and Dave Lewis. "To Be High-Risk, or Not To Be—Semantic Specifications and Implications of the AI Act's High-Risk AI Applications and Harmonised Standards." Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 2023. https://doi.org/10.1145/3593013.3594050