The AI Act Mitigation Initiative

The initiative takes an interdisciplinary approach to assessing the impact of the EU Artificial Intelligence Act (AIA), drawing on the expertise of its network from academia and industry. It strives to advise policy makers during the upcoming implementation phase of the AIA, which currently lacks a discernible methodology for evaluating AI risks in practical contexts.

This endeavor entails formulating real-world risk scenarios and complementing the AIA with a clear methodology for assessing these risks in concrete situations.

The supranational legislator expects the regulation of AI to increase legal certainty in this field and to promote a well-functioning internal market: reliable for consumers, attractive for investment, and technologically innovative.

The AIA relies on the traditional conception that risk is the likelihood of converting a source of hazard into actual loss, injury, or damage. Sources of hazard are those uses of AI that are most likely to compromise safety, health, and other fundamental values.
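On the simplest reading of that conception (an illustrative simplification, not a formula prescribed by the AIA), the magnitude of a risk R is the product of the probability P that the harm occurs and its severity S:

R = P \times S

For example, a harm with probability P = 0.1 and severity S = 8 on a hypothetical 1-10 scale would yield a risk magnitude of R = 0.8.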

In the AIA, risks are categorized only broadly, based on the application areas of AI systems and on ambiguously defined risk factors. The initiative therefore aims to propose a methodology for assessing the magnitude of AI risks, focused on the construction of real-world risk scenarios and on a nuanced analysis that explores the interplay between (a) risk determinants, (b) the individual drivers of those determinants, and (c) multiple risk types.
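As a purely illustrative sketch of how such a scenario-based assessment could be structured, the following Python fragment combines risk determinants and their drivers into a single magnitude; the determinant names, driver scores, and multiplicative aggregation rule are assumptions made for illustration, not taken from the AIA or the underlying paper.

from dataclasses import dataclass, field

@dataclass
class Determinant:
    """A risk determinant together with the individual drivers feeding it (hypothetical)."""
    name: str
    drivers: dict[str, float]  # driver name -> score in [0, 1]

    def score(self) -> float:
        # Unweighted average of driver scores; a real methodology may weight drivers differently.
        return sum(self.drivers.values()) / len(self.drivers)

@dataclass
class RiskScenario:
    """A concrete real-world scenario spanning several risk determinants."""
    description: str
    probability: float  # likelihood that the harm materialises, in [0, 1]
    determinants: list[Determinant] = field(default_factory=list)

    def magnitude(self) -> float:
        # Multiplicative combination of probability and aggregated severity (an assumption).
        severity = sum(d.score() for d in self.determinants) / len(self.determinants)
        return self.probability * severity

scenario = RiskScenario(
    description="CV-screening system used to shortlist job applicants",
    probability=0.2,
    determinants=[
        Determinant("impact on fundamental rights", {"discrimination": 0.7, "opacity": 0.5}),
        Determinant("safety impact", {"physical harm": 0.0, "economic loss": 0.4}),
    ],
)
print(f"{scenario.description}: risk magnitude = {scenario.magnitude():.2f}")

Even such a toy model makes explicit where determinants, their drivers, and the resulting risk magnitude enter the assessment, which is the kind of transparency the methodology aims to bring to the AIA's broad risk categories.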

AI4People supports the evaluation of risk significance, giving deployers the opportunity to reassess the risk levels of their AI systems, which could in turn reduce their regulatory burden.

This initiative and introductory text were inspired by the paper "AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act".


For more information, please email: TheAIActMitigation.initiative@ai4People.org