Policy Recommendations from the AI4People Institute's New Report "Towards an Ethics by Design Approach for AI"

By Patrice Chazerand, Director of Public Affairs, AI4People Institute

 

The AI4People Institute was set up to help steer the eponymous technology towards the good of society, everyone in it, and the environments we share. AI is by no means another ‘regular’ utility that needs to be regulated once it is mature. A powerful driver of a new form of smart agency across the board, it is already reshaping our lives, our social interactions, and our environments.

Our activity started in 2018 with the AI4People’s Ethical Framework for a Good AI Society: a collaborative effort to propose a series of recommendations for the development of a Good AI Society. It synthesised three main items: the opportunities and associated risks that AI technologies offer for fostering human dignity and promoting human flourishing; the principles that should undergird the adoption of AI; and twenty specific recommendations that, if adopted, will enable all stakeholders to seize the opportunities, to avoid or at least minimise and counterbalance the risks, to respect the principles, and hence to develop a Good AI Society.

The AI4People Summit that took place in May 2024, where we presented the latest report, thus provided a timely opportunity to take stock of the fact that the legislative works aimed at regulating AI have more or less come to a conclusion, whether in Brussels, Strasbourg or other places around the world concerned with making the most of AI for all. Time to move on to effective governance, this high-level debate seemed to agree, bearing in mind a cardinal objective spelt out in the early works of the AI4People Institute, i.e. 'enhancing human agency without removing human responsibility'.

While the EU has wasted no time in starting to implement the AI Act, as illustrated by the opening and staffing of the world's first AI Office, the AI4People Summit also made clear that the world of ethics may easily find itself at odds with a data-powered, survey-driven, statistics-focused society: whereas the ability to tell right from wrong will not lend itself to any kind of quantification, it nevertheless keeps inspiring the line of reasoning of lawmakers and judges on a global scale, if only because this vital set of skills is instrumental in drawing a line between what is human and what is artificial.

Accordingly, the analogy coined in the first report of AI4People with a view to capturing the difference between mere compliance and a truly ethical approach has never been so handy: "It is the difference between playing according to the rules, and playing well, so that one may win the game." In hindsight, isn't our species' strong, compelling and enduring inclination to 'play well', or to 'win the game', the single most important feature that took us where we are, i.e. well beyond mere survival?

Furthermore, the AI4People Institute's new report "Towards an Ethics by Design Approach for AI", released last June, outlines in actionable detail an innovative "Ethics by Design" approach. It provides guidance to organisations, whether public or private, small or large, on proactively designing, developing and maintaining AI systems in accordance with the laws, ethical and moral principles, and values that underpin the European Union.

There is a potent reason why this approach provides valuable assistance to anybody who wants to comply with the EU AI Act: while pretty much useless in court (judges decide cases according to applicable laws, not ethical principles), ethics, understood as a reflection of the moral values prevailing within a society at a given time, not only inspires law-making at every step of the way but also pervades almost every decision we make in our daily lives as members of civilised communities. Indeed, our research shows that the clear value to any organisation, or to society as a whole, of the dual advantage of an ethical approach to AI amply justifies the expense of engagement, openness, and contestability that such an approach requires.

This philosophy has spurred the works of the AI4People Institute from the outset. Back in 2018, we set out to ensure that AI systems comply not only with the letter of the law, but also with the spirit of ethics and trustworthiness. It is through this twofold compliance that AI systems can contribute to the promotion and protection of the fundamental rights of people and the common good of society. The above-mentioned analogy suggested in our 2018 report keeps guiding our efforts to help the EU AI Act gain quick and widespread traction by virtue of what was then defined as the "dual advantage" of adopting an ethical approach to AI: "On one side, ethics enables organisations to take advantage of the social value that AI enables. This is the advantage of being able to identify and leverage new opportunities that are socially acceptable or preferable. On the other side, ethics enables organisations to anticipate and avoid or at least minimise costly mistakes."

Core principles having been agreed with amazing speed and consistency around the world, the AI era has since gone through a flurry of legislation, arguably not always in sync with the global nature of the technologies concerned, owing to differences in culture, our "Human, All Too Human" legacy, to borrow a line from Nietzsche. True to human logic, though, the next season of the AI era spells urgent and flawless implementation of the relevant legislative frameworks. That sounds all fine and dandy, but it is easier said than done, given that the technologies that power AI "move fast, break things" on occasion, and travel instantly around the globe. This is where ethics comes into play, with particular efficiency for those deciding to follow the lines of the "Ethics by Design" approach.

The report is worth a relatively heavy investment of time to absorb the whole gamut of checks and guidance aimed at ensuring not only compliance with applicable law, but also walking the extra mile, so to speak, i.e. the ethical mile whose reward consists of winning the game.

It is worth noting that the third era of AI will also be one in which the main mission of regulators shifts from that of police officers spotting and punishing each and every breach of the law to that of facilitators making legal intricacies look simple (hence violations less frequent) and engaging more economic agents in making the most of leading-edge technology. We should also take heart from the fact that such a sentiment, now widespread across the EU, echoes longstanding work led by the government of Japan under the heading of "Agile Governance". It therefore comes as no surprise that the "Ethics by Design" approach will need full support from the EU institutions if it is to effectively help the EU AI Act meet with the quick, resounding success it deserves.


Our Policy Recommendations:

Without distracting stakeholders from the truly educational experience of perusing the report thoroughly, with their own AI-related context in mind, it is nevertheless worth keeping in perspective the main thrust of the Recommendations made to the EU institutions:

1/ Evangelizing:

  • Promote the awareness and education of AI stakeholders via information, guidance, training, and additional resources on the Ethics by Design process, its benefits and its challenges.
  • Encourage ever more AI stakeholders to participate and engage in the Ethics by Design process via dedicated platforms, forums, networks, and events aimed at facilitating dialogue, consultation, co-creation, and co-operation among AI stakeholders on the process, its outcomes and its impacts.

2/ Monitoring and building trust:

  • Support the implementation and evaluation of the Ethics by Design process by way of tools, methods, frameworks, and indicators such as KPIs that enable and facilitate the application, assessment, improvement, and verification of the process, its results and its effects.
  • Promote an evidence-based approach by collecting and sharing best practices of Ethics by Design processes and effective policies to support them. The resulting database of use cases and outputs of ethical assessments could encourage companies to build tailor-made applications based on the main drivers identified, how effective mitigation actions have proved to be, and so on.

3/ Incentivizing:

  • Ethics by Design relies on input from diverse stakeholders in the development process, including groups that, due to economic, structural or historical disadvantages, are often excluded from public debate or left lagging behind. For them, active participation in the process can be particularly burdensome, while delivering priceless value for developers, especially in the field of social apps. Their work therefore needs to be adequately funded and rewarded, and their contribution to the development process publicly acknowledged, if the EU is serious about spreading "AI for All".
  • For the sake of increased effectiveness, legal consultants and experts should be given appropriate access so that they can help organisations validate all applicable legal requirements.

4/ Coherence and consistency:

  • Ensuring the coherence and consistency of the Ethics by Design process with EU policies and initiatives on AI can only help secure fast and effective implementation of the EU AI Act. This critical goal calls for aligning and integrating the Ethics by Design process with the EU's legal and ethical frameworks and regulations, the EU AI strategy and action plan, and the EU's future AI initiatives and projects. Needless to say, this particular Recommendation will take on additional value as both the EU AI Act and the Ethics by Design process strive harder and harder to meet inevitable global challenges.

 
