Bridging Regulatory Gaps in Generative AI Initiative

The initiative tackles the pressing challenges presented by generative artificial intelligence, covering both legal and technical dimensions. The rise of generative AI, particularly through advanced models such as ChatGPT and its successors, signifies a transformative shift in the landscape of artificial intelligence. These cutting-edge models exhibit unprecedented capabilities in processing various data formats, thereby expanding their range of applications. However, the complexity of these systems poses several challenges, particularly in terms of predictability and compliance with legal standards.

Generative AI models are often marked by a wider scope and greater autonomy in extracting patterns from large datasets. In particular, the general-purpose scalability of LLMs enables them to generate content by processing a varied range of inputs drawn from many domains.

Thirty-three percent of firms view “liability for damage” as the top external obstacle to AI adoption, especially for LLMs, rivalled only by the “need for new laws”, cited by 29% of companies. A new, efficient liability regime may address these concerns by securing compensation for victims and minimizing the cost of preventive measures.

Generative AI models are exposed to privacy and data protection violations due to pervasive training on (partially) personal data, the memorization of training data, inversion attacks, interactions with users, and the output the AI produces.

Beyond data protection concerns, generative AI presents various legal challenges related to its creative outputs. Specifically, content generated by LLMs results from processing text data such as websites, textbooks, newspapers, scientific articles, and programming code. Viewed through the lens of intellectual property (IP) law, the use of LLMs raises a variety of theoretical and practical issues.

To promote responsible and secure implementation of generative models, it is essential to carefully examine the legal and regulatory implications of generative AI within the regulatory framework of the European Union. AI4People is committed to promoting a safe and compliant future for the implementation of generative artificial intelligence, focusing on the challenges these systems pose in terms of liability, data privacy, intellectual property rights, and cybersecurity.

By fostering collaboration between academia and decision-making bodies, the initiative is dedicated to identifying and studying regulatory gaps in this sector, with the aim of finding effective solutions to address them.

This initiative and introductory text were inspired by the paper “Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity”.

For more information, please email: