Research & Impact

RESEARCH FROM AI4PEOPLE INSTITUTE

The AI4People’s Ethical Framework for a Good AI Society reports the findings of AI4People, an Atomium – EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations – to assess, to develop, to incentivise, and to support good AI – which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

AI4People’s second year of activities focused on applying – concretely, in real-world scenarios and through appropriate governance – the ethical principles of AI announced by AI4People in 2018. The 2019 White Paper, establishing priorities and critical issues, gives shape to 14 Priority Actions, a Model of S.M.A.R.T. Governance, and a Regulatory Toolbox, to which governments and businesses alike can refer immediately and efficiently.

Following its past work on AI ethics (with the “AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”) and on AI governance (with the “AI4People Report on Good AI Governance: 14 Priority Actions, a S.M.A.R.T. Model of Governance, and a Regulatory Toolbox”), in 2020 AI4People identified seven strategic sectors (Automotive, Banking & Finance, Energy, Healthcare, Insurance, Legal Service Industry, Media & Technology) for the deployment of ethical AI, appointing seven committees to analyse how trustworthy AI can be implemented in these sectors: the AI4People’s 7 AI Global Frameworks are the result of this effort.

AI4PEOPLE’S TOWARDS AN ETHICS BY DESIGN APPROACH FOR AI

The AI4People Institute “Towards an Ethics by Design Approach for AI” Whitepaper outlines a so-called “Ethics by Design” approach intended to guide organisations – public or private, small or large – in proactively designing, developing, and maintaining Artificial Intelligence (AI) systems in accordance with the laws and the ethical and moral principles and values that underpin the European Union.
The paper provides a step-by-step description of an Ethics by Design process, together with criteria for assessing lawfulness and ethical principles and for embedding them into all phases of the AI system development lifecycle, by adopting pragmatic and operational Ethics by Design requirements and ensuring these are developed in an accountable manner.

RESEARCH FROM OUR NETWORK

This article assesses the prospects for stronger global AI governance, considering potential pathways forward. The paper maps the nascent landscape of international regimes focused on AI governance, revealing a governance deficit attributed to the inadequacy of existing initiatives, gaps in the landscape, and difficulties in reaching agreement over more appropriate mechanisms.

As China and the United States strive to be the primary global leader in AI, their visions are coming into conflict. This is frequently painted as a fundamental clash of civilisations, with evidence drawn primarily from each country’s current political system and present geopolitical tensions. However, such a narrow view extrapolates into the future from an analysis of a momentary situation, ignoring a wealth of historical factors that influence each country’s prevailing philosophy of technology and thus its overarching AI strategy.

The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to the collection of partial data, rare updates, and significant demands on resources.

Artificial intelligence (AI) assurance is an umbrella term describing many approaches – such as impact assessment, audit, and certification procedures – used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance focuses on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual.