Risk Assessment Tool for the AI Act

This tool provides a potential risk level for the use of an AI system under the AI Act. Select a value for one or more of the concepts below, then click the Check Risk Level button. You can test various combinations yourself, try the pre-selected examples below, or select random values. Definitions of the terms are provided at the end of the document.


Domain:
Purpose:
Capability:
User:
Subject:

Results

Results will appear here.

Definitions

Inputs are described in terms of the following concepts. For more details on the creation of the tool, its methodology, and related discussion, refer to the peer-reviewed publication cited below.

The output is a potential risk level indicating whether the selected combination satisfies the high-risk criteria of Annex III of the AI Act.
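As an illustration only, the following TypeScript sketch shows how such a rule-based check against Annex III could be structured. The interfaces, concept values, example rule, and function names are hypothetical and do not reflect the tool's actual source code.

// Minimal sketch of a rule-based Annex III check (not the tool's actual implementation).

interface Selection {
  domain?: string;      // e.g. the area in which the AI system is used
  purpose?: string;     // what the system is used for
  capability?: string;  // what the system is technically capable of
  user?: string;        // who deploys or operates the system
  subject?: string;     // who is affected by the system
}

interface AnnexIIIRule {
  id: string;                          // label for the Annex III area it models
  matches: (s: Selection) => boolean;  // predicate over the selected concepts
}

// Hypothetical example rule: AI used for recruitment in an employment context.
const exampleRules: AnnexIIIRule[] = [
  {
    id: "Annex III: employment / recruitment (illustrative)",
    matches: (s) => s.domain === "Employment" && s.purpose === "Recruitment",
  },
];

// Return a potential risk level for the given selection.
function checkRiskLevel(selection: Selection, rules: AnnexIIIRule[]): string {
  const hit = rules.find((r) => r.matches(selection));
  return hit
    ? `Potentially high-risk (${hit.id})`
    : "Not identified as high-risk by these rules";
}

// Example usage:
console.log(checkRiskLevel({ domain: "Employment", purpose: "Recruitment" }, exampleRules));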


Acknowledgements

This implementation is by Harshvardhan J. Pandit and is based on the work cited below. The source code is available via a GitHub repository under a permissive license.

Cite this work as: Delaram Golpayegani, Harshvardhan J. Pandit, and Dave Lewis. "To Be High-Risk, or Not To Be—Semantic Specifications and Implications of the AI Act’s High-Risk AI Applications and Harmonised Standards." Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 2023. https://doi.org/10.1145/3593013.3594050